Author Correction: A lake data set for the Tibetan Plateau from the 1960s, 2005, and 2014

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Long-term datasets of the number and size of lakes over the Tibetan Plateau (TP) are among the most critical components for better understanding the interactions among the cryosphere, hydrosphere, and atmosphere at regional and global scales. Due to the harsh environment and the scarcity of data over the TP, data accumulation and sharing are all the more valuable for scientists worldwide to make new discoveries in this region. This paper, for the first time, presents a comprehensive and freely available data set of lakes' status (name, location, shape, area, perimeter, etc.) over the TP region dating back to the 1960s, including three time series, i.e., the 1960s, 2005, and 2014, derived from ground survey (the 1960s) or high-spatial-resolution satellite images from the China-Brazil Earth Resources Satellite (CBERS) (2005) and China's newly launched GaoFen-1 (GF-1, which means high resolution in Chinese) satellite (2014). The data set provides scientists with useful information for revealing environmental changes and mechanisms over the TP region.

Background & Summary

The Tibetan Plateau (TP), known as the core region of the Earth's third pole 1,2 , has attracted great attention from the hydrology, weather, and climate communities. The state of environmental elements of the TP region, such as glaciers 3 , permafrost 4 , snow 5 , rivers 6 , wetlands 7 , and lakes 8 , is critical for developing a better understanding of the interactions among the cryosphere, hydrosphere, and atmosphere. Lakes, as essential components of the hydrosphere over the TP, play an important role in regional and global biogeochemical processes 9 . Over the last half-century, great efforts have been made in previous studies to develop a comprehensive understanding of the status and changes of lakes across the TP [10][11][12][13][14][15][16][17][18] . The state of lakes in earlier years was mainly recorded by surveying and mapping or field investigations. Since the 1980s, remote sensing has become a powerful tool for lake monitoring. Algorithms for automatic extraction and mapping of lake water bodies from medium- and high-resolution satellite images have been widely applied 19,20 . However, the accuracy of water boundary extraction using such automatic algorithms still needs further improvement 21 . For studies that require better accuracy or for national-scale surveys, semi-automatic extraction or even manual interpretation and digitization is more feasible for building databases with strict quality control 22 . There are several global-scale data sets on lake properties, e.g., the Global Lakes and Wetlands Database (GLWD, 1:1 to 1:3 million resolution, freely available at http://www.wwfus.org/science/data.cfm) created using data from many organizations and individuals 23 , the GLObal WAter BOdies database (GLOWABO) produced using the GeoCover data set circa 2000 (ref. 24), and a database of summer lake surface temperatures for 291 lakes globally, collected in situ and/or by satellites for the period 1985-2009 (freely available at http://portal.lternet.edu/) 25 . However, for regional-scale studies, data sets with higher resolution and longer time series are required.
In particular, since the natural environment over the TP region is relatively harsh, data accumulation and sharing can help scientists worldwide make new discoveries in this region. Although numerous studies on the state of the TP lakes 13,26 have been performed, there has been no public data set describing the status of lakes (name, location, shape, area, perimeter, etc.) across the TP region from the past to the present, and in particular no open-access data set derived from high-spatial-resolution satellite images. The objective of this study was therefore to produce and share a data set on the state of lakes (area ≥ 1 km²) over the TP. The data set includes three sub-datasets, i.e., the 1960s, 2005, and 2014 time series. The 1960s sub-dataset was produced from a valuable historical record of surveying and mapping, while the 2005 and 2014 sub-datasets were produced mainly using satellite images from the China-Brazil Earth Resources Satellite (CBERS) and China's newly launched GaoFen-1 (GF-1) satellite, respectively. The 1960s and 2005 sub-datasets originated from the results of the first and second nationwide lake investigations 27,28 , respectively. The 2014 sub-dataset is the first comprehensive evaluation of GF-1 data for monitoring TP lakes. Manual interpretation and digitization approaches were applied to ensure the accuracy of the data set. An overview of the production and validation of the data set is shown in Figure 1, and detailed information on the methods is described in the next section. The data set will provide scientists with a useful data source for revealing environmental changes and mechanisms over the TP region. Moreover, the data set could be used to validate automated mapping procedures (e.g., literature 20 and 29 ), to test theoretical hypotheses about lake distributions (e.g., literature 30 ), or to contribute to meteorological applications (e.g., literature 31 ). For research related to ecology, biogeochemistry, and geomorphology, the value of even smaller lakes (i.e., area < 1 km²) is tremendous 32 . Since the data set in this study was produced by manual extraction, such lakes were not included. To fill the time gaps in the data set developed in this study and to extend it in the future, scientists are welcome to add smaller lakes, new time series, and new attributes (e.g., water level) to this data set.

Methods

The boundary of the TP in this study is defined as the area above an elevation of 2,500 m 13 using the NASA Shuttle Radar Topographic Mission (SRTM) 90 m Digital Elevation Model (DEM) Database v4.1 (Fig. 2). Two provinces of China, i.e., Tibet and Qinghai, contribute the major area of the TP (Fig. 2). To make comparison and analysis convenient, the TP is further divided into 12 basins, including 9 exorheic drainage basins (i.e., AmuDarya, Brahmaputra, Ganges, Hexi, Indus, Mekong, Salween, Yangtze, and Yellow) and 3 endorheic drainage basins (i.e., Inner TP, Tarim, and Qaidam). The Inner TP is subdivided into 6 small basins, Inner A to Inner F. The whole data set includes three sub-datasets, i.e., 1960s, 2005, and 2014, and focuses on all lakes with areas greater than 1 km².

The 1960s sub-dataset

After the establishment of the People's Republic of China in 1949, development and utilization of lake resources started to get back on track. New institutions for lake research were established by scientists from governments, universities, and research institutes 27 .
In the 1960s, scientists carried out field surveying and mapping for all lakes (area more than 10 km²) across China, which was part of China's first nationwide lake investigation and can be traced in literature 27 . All the lakes were coded and published as an industry standard of China called the Code for China Lake Name 33 . A vector database (1:250,000) including the attributes (i.e., location, shape, and area) of the lakes was built. The original version of the 1960s sub-dataset in this paper is a data set extracted from the nationwide 1:250,000 ESRI shapefile using the TP boundary. Some lakes in the data set have been edited according to the 2005 sub-dataset described in the following section; e.g., in the raw 1960s attribute table, one lake may have two or more records pointing to separate parts of that lake, and these parts were merged in this sub-dataset to ensure the uniqueness of each lake's attributes. Since the lake survey was conducted within China, lakes outside the national border, which are included in the following satellite-based 2005 and 2014 sub-datasets, were not included in the 1960s sub-dataset.

The 2005 sub-dataset

Images acquired around 2005 from the CBERS CCD sensor were used as the main data source for the investigation. CBERS is an international technological cooperation program between China and Brazil which developed and operated Earth observation satellites. CBERS-1 was launched in October 1999, with the CCD camera as its main payload. To obtain intact lake data, images from the Landsat Enhanced Thematic Mapper Plus (ETM+) were used as a supplementary data source when CBERS-1 images were affected by cloud. To comprehensively evaluate the state of lakes across the TP, we extracted information for each lake using two types of images: one selected in the wet season (i.e., August-October) and the other in the dry season (i.e., April or May). All the CBERS CCD and Landsat ETM+ images were geometrically corrected and geo-rectified to an Albers equal-area conic projection with a root mean square (RMS) uncertainty lower than 30 m. For Qinghai and Tibet Provinces (Fig. 2), a total of 457 images, including 408 CBERS CCD images and 49 Landsat ETM+ images, were jointly used to extract lake water bodies. The 2005 sub-dataset in this paper consists of two parts: one part is the wet-season result for the Qinghai and Tibet region from the second lake investigation; for lakes outside Qinghai and Tibet Provinces but inside the TP boundary, we downloaded Landsat ETM+ images (wet season) as supplements to extract the lake boundaries. The two parts were then merged to form the 2005 sub-dataset.

The 2014 sub-dataset

Images acquired in 2014 from China's newly launched GF-1 WFV (Wide Field of View) cameras were used as the main data source for lake water body extraction. China officially started development of the China High-Resolution Earth Observation System (CHEOS) in May 2010, which was established as one of the major national science and technology projects. The Earth Observation System and Data Center of the China National Space Administration (EOSDC-CNSA) is responsible for organizing the construction of CHEOS. The space-based CHEOS system was designed to launch 7 satellite series in sequence. GF-1, launched in April 2013, is the first satellite of the series and is configured with one 2 m panchromatic/8 m multi-spectral (PMS) camera and four 16 m multi-spectral WFV cameras. An 800 km swath-width image can be acquired using the four synchronized WFV cameras, which greatly improves the revisit time to 4 days 34 .
To match the timing and spatial resolution of the 2005 sub-dataset, the 16 m WFV images acquired during the wet season were used in this study. In total, 136 GF-1 images and 11 Landsat 8 OLI images were used to extract the water bodies. All the GF-1 images were ortho-rectified before water body extraction. Note that, to deal with the problem of missing pixels in Landsat ETM+ SLC-off imagery since 2003 (ref. 35), for both the 2005 and 2014 sub-datasets we used multi-temporal images to ensure the accuracy of extraction. Water bodies were first extracted in each basin and then merged together to form a whole data set for the TP.

Water boundary extraction from satellite images

In order to strictly control the precision of water boundary extraction from satellite images and to provide users with a comprehensive and reliable data set, we chose to manually interpret and extract the water boundaries of the lakes, given the possible uncertainties in automatic extraction methods. Note that in this paper, islands inside the lake boundary were not counted toward the total area of the lake water surface. Rules for determining water surface boundaries in the TP region are shown in Fig. 3. Green, yellow, and red lines represent the sketched water boundaries of the 1960s, 2005, and 2014, respectively. The three panels (a1-a3, b1-b3, and c1-c3) represent rules for different situations. Details of the rules are explained as follows: 1) Water body extraction for lakes with different water chemical properties: Lakes in the TP can be divided into three categories according to their water chemical properties, i.e., freshwater lakes, semi-saline lakes, and saline lakes 27 . Figure 3 a1, a2, and a3 show examples of the appearance of the three categories in GF-1 pseudo-color composite images (near-infrared/red/green). Mapam Yumco (salinity 0.1-0.4 g/l 27 ), a freshwater lake in the Indus basin of the southwest TP, appears very dark blue (see a1) in the GF-1 image. Zige Tangco, a semi-saline lake (average salinity of 40.7 g/l from field measurements in August 2010) in the Inner TP basin of Central Tibet, appears dark blue. The waterlines of both the freshwater lake and the semi-saline lake are generally clear in the satellite images. We tracked and drew the waterlines of these lakes while zooming the images to a fixed scale of 1:25,000. However, a saline lake, such as Chabyer Co (salinity 393.5-439.8 g/l 27 ) in Figure 3 a3, sometimes has a layer of salt on top of the water surface, which makes it difficult to determine the waterline. For such cases, to ensure the reliability of the results, we checked multiple images from different seasons and referred to field investigations recorded in the 1960s (ref. 27). 2) Water body extraction for lakes with different formation mechanisms: Natural lakes can be formed by various processes. For lakes in the TP, tectonic movement, river erosion, glacial activity, and landslides are the primary drivers of lake formation 27 . Most glacial lakes in the TP are small, with areas less than 1 km² (ref. 36). Therefore, we focus only on describing tectonic lakes, barrier lakes, and fluvial lakes. Figure 3 b1, b2, and b3 show examples of the appearance of the three categories in GF-1 pseudo-color images. Selin Co (2,300.49 km² in 2014) in b1, a classic tectonic lake in Central Tibet, is now larger than Nam Co (2,028.50 km² in 2014) and has become the largest lake in Tibet. The waterline of Selin Co is relatively clear and easy to draw.
Ranwu Lake in b2 is a barrier lake formed by landslide-induced debris flows blocking the river. The water depth of barrier lakes is not very large, which makes the water appear light blue in the image. In general, the waterline of barrier lakes is fairly clear, but it varies with time on occasion. A priori knowledge is important for identifying this type of lake. Fluvial lakes are often long and narrow. The lake in Fig. 3 b3 is a lake newly formed in 2005 in the source area of the Yellow River. The waterline of this type of lake is highly dentate. 3) Dealing with specific issues: Since this data set only includes water bodies of lakes, islands within a lake were removed from the waterline polygon (Fig. 3 c1). Small water bodies in the bottomland of a lake are not included in the total water surface (e.g., the black circles in Fig. 3 c2), unless the water bodies in the bottomland are large enough and connected to the main water body (e.g., the white rectangle in Fig. 3 c2). Figure 3 c3 shows a lake in 2014 that was merged from two separate lakes present in 2005 or the 1960s. Such cases are common in the northeast of Central Tibet, since most of the lakes have been expanding over the past 50 years. In the data set, if a new merged lake was formed from two or more separate lakes, the new lake as a whole was named after the largest of the original lakes. We also defined two specific types of lakes: new-born lakes and dead lakes. For example, if a lake existed at a certain location in the 2005 image while in the 1960s the same location was identified as land or a non-lake water body, this lake was defined as a new-born lake. Similarly, if a lake was found at a certain location in the 1960s while in the 2005 image there was no lake at the same location, this lake was defined as a dead lake. Following the above definitions, all the images for 2005 and 2014 were examined one by one to determine the two types of lakes. For questionable lakes such as ephemeral lakes or salt lakes with salt crusts, we checked and compared their state on both wet-season and dry-season images. If the bottomlands with seasonally covered water or the salt crusts were located outside the water surface boundaries on both the wet- and dry-season images, we did not consider them components of the lake. Otherwise, if they were located inside the water surface boundaries on wet-season images but outside on the dry-season ones, we took the median lines of the water surface boundaries on the wet/dry-season images as the lake water boundaries.

Data Records

The data set is available in three folders. The first folder, 'Data_Information_File', contains detailed information on the lakes in each sub-dataset, the data collector, the image(s) used for water body extraction, citations, etc. The second folder, 'Data_Value_File', contains two subfolders, 'shp_1-10' and 'shp_10-', which store the shapefiles of the 1-10 km² and ≥10 km² lakes, respectively. The third folder, 'Supplement', contains the boundary files and validation sampling shapefiles. An overall statistical table of numbers and areas in this data set is also included in the 'Supplement' folder. The shapefiles of the three sub-datasets can be linked to the lake information via the ID/NAME_CH/NAME_EN columns. The 1960s and 2005 data used in this study have been published in the literature 22,28,37 . The 2014 data have not been published. The data set can be accessed at http://dx.doi.org/10.6084/m9.figshare.3145369 (Data Citation 1).
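As an illustration of how the released files can be used, the following minimal sketch (not part of the data set itself) reads two of the shapefiles with geopandas, links one of them to its information table via the ID column, and screens for candidate new-born and dead lakes; all file names are hypothetical placeholders, and only the ID/NAME_CH/NAME_EN columns are taken from the description above:

    import geopandas as gpd
    import pandas as pd

    # Hypothetical file names; the ID column is the link key described in the text.
    lakes_1960s = gpd.read_file("Data_Value_File/shp_10-/TP_lakes_1960s.shp")
    lakes_2005 = gpd.read_file("Data_Value_File/shp_10-/TP_lakes_2005.shp")
    info_2005 = pd.read_csv("Data_Information_File/lake_info_2005.csv")

    # Link the shapefile attribute table to the lake information file via ID.
    lakes_2005 = lakes_2005.merge(info_2005, on="ID", how="left")

    # A lake present in 2005 but absent from the 1960s sub-dataset is a candidate
    # new-born lake; a 1960s lake absent from 2005 is a candidate dead lake. The final
    # status in the released data set was decided by visual inspection of the images,
    # as described above.
    ids_1960s, ids_2005 = set(lakes_1960s["ID"]), set(lakes_2005["ID"])
    newborn_candidates = ids_2005 - ids_1960s
    dead_candidates = ids_1960s - ids_2005
    print(len(newborn_candidates), len(dead_candidates))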
Table 1 shows the data labels and descriptions for the shapefiles in detail. Some lakes have two or more Chinese or English names. The aliases of these lakes are recorded after their common names in brackets, e.g., Yazi Lake (Woniu Lake). Lakes that do not have names are recorded as 'Noname'. The 'Noname' lakes are generally small and are found in the 'shp_1-10' folder. An attribute column called 'IS_NEWBORN' records whether a lake is a new-born lake or not; the value 0 indicates that the lake is not a new-born lake. The NOTES_CH column holds statements for specific cases in Chinese, e.g., two lakes merged into one single lake due to drastic expansion, particularly in the Inner TP basin, and the NOTES_EN column holds the corresponding statements in English. More sub-datasets are required to study the detailed change characteristics from the 1960s to 2005.

Technical Validation

Quality control and validation of the data set

For the 2005 and 2014 sub-datasets, after completing the first round of extracting the water body boundaries of all lakes, three of the authors of this paper (Wei Wan, Zhongying Han, and Yuan Yuan) cross-checked the initial results basin by basin to ensure that there were no missing or erroneous lakes. We organized four graduate students to examine the attribute tables of the data set to ensure validity and integrity. We paid particular attention to identifying the new-born and dead lakes. We also examined the ≥10 km² and 1-10 km² shapefiles for the same year together to avoid duplicate records. For the 1960s sub-dataset, we could not examine the underlying data, since the historical surveying and mapping work is unrepeatable. Instead, we examined the 1960s sub-dataset by comparison with the two remote sensing-derived sub-datasets to ensure the consistency of the lake attributes, e.g., the ID, name, and basin. To achieve a robust and quantitative validation of the area and perimeter estimates from GF-1 WFV images, we carried out a two-step comparison. First, we compared the GF-1 WFV-derived values (WFV for short) to the results derived from the Landsat 8 Operational Land Imager (OLI) images (OLI for short), since WFV and OLI have the same level of resolution, i.e., 16 and 30 m. Second, we compared both the WFV and OLI results with the results derived from the GF-1 PMS images (PMS for short). Here the PMS results were treated as the reference data. To achieve a rational sampling number while considering the workload, we divided the lakes by area into 6 categories: ≥1,000 km², 500-1,000 km², 100-500 km², 50-100 km², 10-50 km², and 1-10 km². Approximately 5% of the number of lakes in each category was selected as samples, i.e., 1, 1, 3, 3, 13, and 33. Names and attributes of the sampled lakes can be found in the 'Supplement' folder of the data set. For validation, we collected thirteen Landsat 8 OLI images and fifty-nine GF-1 2 m/8 m PMS images acquired during the wet season in 2014-2015. The raw panchromatic/multispectral PMS images were first ortho-rectified individually and then processed to create the final 2 m pan-sharpened reference images. The sampled lakes were digitized from the OLI and PMS images at 1:25,000 and 1:2,500 scales, respectively. We used two morphometric indices mentioned in literature 20, the Shoreline Development Index (SDI) and the Miller thickness index (MI), to describe the morphometry of the lakes.
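For reference, the SDI can be computed directly from a lake's area and perimeter. The short sketch below uses the standard definition, SDI = P / (2√(πA)), which equals 1 for a circle and increases for more convoluted shorelines; the Miller thickness index is defined in the cited literature and is not reproduced here, and the numerical values in the example are hypothetical:

    import math

    def shoreline_development_index(area: float, perimeter: float) -> float:
        # SDI = P / (2 * sqrt(pi * A)); area and perimeter must be in consistent
        # projected units (e.g., km^2 and km).
        return perimeter / (2.0 * math.sqrt(math.pi * area))

    # Hypothetical example values (area in km^2, perimeter in km):
    print(shoreline_development_index(area=2300.49, perimeter=380.0))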
Table 3 shows the statistics of the measured parameters (area, perimeter) and the calculated parameters (SDI and MI) for WFV, OLI, and PMS, respectively. In general, the mean, minimum, maximum, and standard deviation of the three data sets are at the same level. The PMS-derived perimeters are consistently slightly higher than the other two, since higher-resolution images contain more detail of the lake boundaries. To evaluate the matching of lake boundaries, we calculated relative deviations (RD) in area and perimeter between the WFV and OLI results and the respective PMS results. Over all the sampled lakes, the deviation in estimated area was RD = 0.012 (median = 0; StDev = 0.044) for WFV and RD = −0.014 (median = −0.012; StDev = 0.052) for OLI. Similarly, the deviation in estimated perimeter was RD = −0.065 (median = −0.074; StDev = 0.069) for WFV and RD = −0.082 (median = −0.07; StDev = 0.06) for OLI. Figure 4 shows histograms and Gaussian fits of the RDs of area and perimeter for the sampled WFV and OLI results. Note that the Gaussian fits to the RD distributions for both area and perimeter reached good R-squared values. Based on this error analysis, the data sources and methods used to create this data set appear to be reliable and robust. It is worth mentioning that, since automatic methods are more efficient than manual interpretation, it would be worthwhile to compare the two approaches in future work.

Comparison with other data sets

After validating the extraction accuracy of the data set developed in this study, we further compared our data set to two other publicly released data sets, i.e., the global-scale GLWD 23 and a regional-scale data set created by Yao et al. 38 . The GLWD was produced using multiple data sources gathered from the 1990s. Levels 1 and 2 of the GLWD data were used for comparison. All the GLWD lakes were first extracted using the TP boundary and then regrouped into two categories: ≥10 km² and 1-10 km². In total there are 1,131 GLWD lakes with an aggregated area of 38,153 km². This is of the same order of magnitude as the data sets in this study. Figure 5 provides an overview of the latitudinal, longitudinal, and basin-wise lake distributions according to the different data sets. Number and area values were aggregated in steps of 1° and 2° for the latitudinal and longitudinal distributions, respectively. In general, the data sets in this study show results consistent with the GLWD. The discrepancies between the two data sets are reasonable, since they reflect numbers and areas at different time periods. The most striking result of comparing the two data sets, however, is the difference in geolocation for 1-10 km² lakes, e.g., lakes distributed between 80°-90°E and around 35°N (i.e., the northwest of the Inner TP basin). For this region, we overlaid the shapefiles of the GLWD and of the data sets developed in this study with the 1990s (Landsat 4-5 TM) and 2014 (GF-1) remote sensing images, and found that some of the GLWD lakes do not appear on the 1990s images, and some small lakes are missing. We checked the lakes in our data sets one by one with reference to the GLWD data to ensure that there were no missing lakes. Despite the small issues discussed above, we believe that both the data sets developed in this study and the GLWD are of good quality. The GLWD data could, to some extent, be a good addition to fill the time gaps in the data set developed in this study.
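Returning to the relative-deviation validation described above, the following minimal sketch shows the RD computation and summary statistics, including the type of Gaussian fit used for Figure 4; the stand-in values are hypothetical and do not reproduce the actual sampled-lake measurements:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical stand-ins for sampled-lake areas (km^2): GF-1 WFV estimates and
    # GF-1 PMS reference values.
    area_wfv = np.array([12.4, 55.1, 103.7, 540.0, 1020.0])
    area_pms = np.array([12.3, 55.9, 102.9, 538.5, 1015.2])

    # Relative deviation: RD = (estimate - reference) / reference.
    rd = (area_wfv - area_pms) / area_pms
    print("mean RD:", rd.mean())
    print("median RD:", np.median(rd))
    print("std RD:", rd.std(ddof=1))

    # Gaussian fit to the RD distribution (location and scale), as used for the
    # histograms in Figure 4.
    mu_fit, sigma_fit = norm.fit(rd)
    print("Gaussian fit:", mu_fit, sigma_fit)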
Yao's data set was created over the Hoh Xil region using Landsat TM/ETM+ images acquired in 2000. Note that the extraction methods and rules are consistent between Yao's data set and ours. The 44 lakes included in both data sets were selected and used for comparison. Figure 6 shows the area of the target lakes in 2000 (Yao, in black), 2005 (this study, in blue), and 2014 (this study, in red), respectively. Note that, for the area of each lake, the 2000 (Yao) data and the 2005 data are basically consistent (R² = 0.99). This is reasonable because changes in lakes should not be prominent over a 3-5 year period. Some images used for Yao's results are from 2001, 2002, and even 2003, making the comparison even more convincing. The validation and comparison once again imply that, for lake monitoring, images from various satellite sensors, i.e., CBERS-1 CCD, GF-1 WFV, and Landsat TM/ETM+, can generate consistent and comparable results.

Assessment of trends in lake changes over the last decades

Figure 7 shows the changing rates of lakes in the TP over the last decade (2005-2014). Blue and red solid circles represent increasing and decreasing rates for individual lakes, respectively. Previous studies have revealed that lakes in some TP regions showed consistently expanding or shrinking trends during certain periods. For example, literature 13,39 , using Landsat TM/ETM+ data, suggests that the area of lakes in the inner plateau expanded at a rapid rate between the 1990s and 2009/2010 (~27%). Literature 10,18,40 , using ICESat/GLAS altimetry data, suggests that the water level of lakes in the inner plateau showed a significantly increasing trend between 2003 and 2009. Literature 10,13 reveals that lakes in the Brahmaputra basin showed a decreasing trend in both area and water level. Literature 41 reports related spatial patterns within the Inner TP basin. In this study, the north and northeast of the basin show a more rapid growth rate (average rates for Inner C, D, E, and F are 15.63, 15.13, 12.58, and 12.38%, respectively), while the south-eastern part shows a relatively slow growth rate (average rates for Inner A and B are 3.19 and 4.79%, respectively). This demonstrates a consistent and continued trend relative to the findings of the published studies. For the changing rates of lakes in the Brahmaputra basin, it is clear that lakes in this basin show a decreasing rate (−2.53%) in recent years. This is highly consistent with the above-mentioned published studies. It is worth mentioning that analyzing the rates of lake change using only two time intervals, as in this study, can provide only a general conclusion. To investigate a particular lake or basin-scale water balance, data acquired at more time intervals and effective automatic methods are required.
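As an illustration of the change-rate analysis summarized in Figure 7, the following minimal sketch computes per-lake relative area changes between the 2005 and 2014 sub-datasets and aggregates them by basin; the column names and numbers are hypothetical placeholders rather than values from the released attribute tables:

    import pandas as pd

    # Hypothetical attribute tables (areas in km^2).
    a2005 = pd.DataFrame({"ID": [1, 2, 3],
                          "BASIN": ["Inner C", "Inner A", "Brahmaputra"],
                          "AREA": [100.0, 50.0, 20.0]})
    a2014 = pd.DataFrame({"ID": [1, 2, 3],
                          "AREA": [115.0, 52.0, 19.5]})

    merged = a2005.merge(a2014, on="ID", suffixes=("_2005", "_2014"))
    # Relative change rate (%) between the two time series.
    merged["change_rate_pct"] = 100.0 * (merged["AREA_2014"] - merged["AREA_2005"]) / merged["AREA_2005"]
    print(merged[["ID", "BASIN", "change_rate_pct"]])
    print(merged.groupby("BASIN")["change_rate_pct"].mean())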
Semiparametric Inference of the Complier Average Causal Effect with Nonignorable Missing Outcomes

Noncompliance and missing data often occur in randomized trials, which complicates the inference of causal effects. When both noncompliance and missing data are present, previous papers proposed moment and maximum likelihood estimators for binary and normally distributed continuous outcomes under the latent ignorable missing data mechanism. However, the latent ignorable missing data mechanism may be violated in practice, because the missing data mechanism may depend directly on the missing outcome itself. Under noncompliance and an outcome-dependent nonignorable missing data mechanism, previous studies showed the identifiability of the complier average causal effect for discrete outcomes. In this paper, we study the semiparametric identifiability and estimation of the complier average causal effect in randomized clinical trials with both all-or-none noncompliance and outcome-dependent nonignorably missing continuous outcomes, and propose a two-step maximum likelihood estimator in order to eliminate the infinite-dimensional nuisance parameter. Our method does not need to specify a parametric form for the missing data mechanism. We also evaluate the finite sample properties of our method via extensive simulation studies and sensitivity analysis, with an application to a double-blinded psychiatric clinical trial.

Introduction

The noncompliance problem has attracted a lot of attention in the literature. Efron and Feldman (1991) studied the noncompliance problem before the principal stratification framework (Frangakis and Rubin, 2002) was proposed. In the presence of noncompliance, Balke and Pearl (1997) proposed large-sample bounds of the ACEs for binary outcomes using the linear programming method. Angrist et al. (1996) discussed the identifiability of the causal effect using the instrumental variable method. Imbens and Rubin (1997) proposed a Bayesian method to estimate the complier average causal effect (CACE). When some outcomes are missing, the identifiability and estimation of CACE are more complicated, and different types of missing data mechanisms have sizable impacts on the identifiability and estimation of CACE. Frangakis and Rubin (1999) established the identifiability and proposed a moment estimator of CACE under the latent ignorable (LI) missing data mechanism. Under the LI missing data mechanism, Zhou and Li (2006) and O'Malley and Normand (2005) proposed Expectation-Maximization (EM) algorithms (Dempster et al., 1977) to find the maximum likelihood estimators (MLEs) of CACE for binary and normally distributed outcomes, respectively. Barnard et al. (2003) proposed a Bayesian approach to estimate CACE with bivariate outcomes and covariate adjustment. Taylor and Zhou (2011) proposed a multiple imputation method to estimate CACE for clustered encouragement design studies. However, the LI assumption may be implausible in some clinical studies in which the missing data mechanism may depend on the missing outcome. Chen et al. (2009) and Imai (2009) discussed the identifiability of CACE for discrete outcomes under the outcome-dependent nonignorable (ODN) missing data mechanism. To the best of our knowledge, there are no published papers in the literature studying the identifiability of CACE for continuous outcomes under the ODN assumption.
In this paper, we show that CACE is semiparametrically identifiable under some regularity conditions, and propose estimation methods for CACE with continuous outcomes under the ODN assumption. For our semiparametric method, we need only assume that the distribution of the outcomes belongs to the exponential family, without specifying a parametric form for the missing data mechanism. This paper proceeds as follows. In Section 2, we introduce the notation and assumptions used in this paper and define the parameter of interest. In Section 3, we show the semiparametric identifiability and propose a two-step maximum likelihood estimator (TSMLE). In Section 4, we use several simulation studies to illustrate the finite sample properties of our proposed estimators and consider sensitivity analysis to assess the robustness of our estimation strategy. In Section 5, we analyze a double-blinded randomized clinical trial using the methods proposed in this paper. We conclude with a discussion and provide all proofs in the Appendices.

Notation and Assumptions

We consider a randomized trial with a continuous outcome. For the i-th subject, let Z_i denote the randomized treatment assignment (1 for treatment and 0 for control). Let D_i denote the treatment received (1 for treatment and 0 for control). When Z_i ≠ D_i, there is noncompliance. Let Y_i denote the outcome variable. Let R_i denote the missing data indicator of Y_i, i.e., R_i = 1 if Y_i is observed and R_i = 0 if Y_i is missing. First, we need to make the following fundamental assumption.

Assumption 1 (Stable unit treatment value assumption, SUTVA): There is no interference between units, which means that the potential outcomes of one individual do not depend on the treatment status of other individuals (Rubin, 1980), and there is only one version of the potential outcome for a given treatment (Rubin, 1986).

Except in the dependent case of infectious diseases (Hudgens and Halloran, 2008), the SUTVA assumption is reasonable in many settings. Under the SUTVA assumption, we define D_i(z), Y_i(z), and R_i(z) as the potential treatment received, the potential outcome, and the potential missing data indicator for subject i if he/she were assigned to treatment z. These are potential outcomes because only one element of each pair can be observed, namely the one corresponding to the assignment actually received: the observed treatment received, outcome, and missing data indicator are D_i = D_i(Z_i), Y_i = Y_i(Z_i), and R_i = R_i(Z_i). Under the principal stratification framework (Angrist et al., 1996; Frangakis and Rubin, 2002), we let U_i be the compliance status of subject i, defined by the pair (D_i(1), D_i(0)): U_i = a if (D_i(1), D_i(0)) = (1, 1), U_i = c if (1, 0), U_i = d if (0, 1), and U_i = n if (0, 0), where a, c, d, and n represent "always-taker", "complier", "defier", and "never-taker", respectively. Here U_i is an unobserved variable, because we can observe only D_i(1) or D_i(0) for subject i, but not both. The CACE of Z on Y is the parameter of interest, defined as CACE(Z → Y) = E{Y(1) − Y(0) | U = c}. CACE is a subgroup causal effect for the compliers, whose compliance status is only incompletely observed. Next, we give some sufficient conditions on the latent variables that make CACE(Z → Y) identifiable in the presence of noncompliance and nonignorable missing outcomes.

Assumption 2 (Randomization): The treatment assignment Z is completely randomized. Randomization means that Z is independent of {D(1), D(0), Y(1), Y(0), R(1), R(0)}, and we define ξ = P{Z = 1 | D(1), D(0), Y(1), Y(0), R(1), R(0)} = P(Z = 1).

Assumption 3 (Monotonicity): D_i(1) ≥ D_i(0) for all subjects i. The monotonicity of D_i(z) implies that there are no defiers. Define ω_u = P(U = u) for u = a, c, d, n; the monotonicity assumption implies ω_d = 0.
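The setup above can be made concrete with a small simulation. The sketch below is illustrative only, with hypothetical strata proportions and outcome distributions: it draws principal strata under monotonicity, assigns Z at random, and evaluates the CACE as the complier-only contrast E{Y(1) − Y(0) | U = c}. The difficulty addressed in this paper is that U is latent and Y is subject to nonignorable missingness.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    # Compliance strata: always-taker (a), complier (c), never-taker (n); no defiers.
    u = rng.choice(["a", "c", "n"], size=n, p=[0.2, 0.5, 0.3])
    z = rng.binomial(1, 0.5, size=n)                       # randomized assignment
    d = np.where(u == "a", 1, np.where(u == "n", 0, z))    # D(z) under monotonicity

    # Potential outcomes: exclusion restriction for a and n, a shift of 2.0 for compliers.
    mu0 = {"a": 5.0, "c": 4.0, "n": 3.0}
    y0 = np.array([mu0[s] for s in u]) + rng.normal(0, 1, n)
    y1 = y0 + np.where(u == "c", 2.0, 0.0)
    y = np.where(z == 1, y1, y0)

    # With full compliance information the CACE is just the complier contrast; the
    # estimation problem arises because u is not observed and y can be missing.
    true_cace = (y1 - y0)[u == "c"].mean()
    oracle_est = y[(u == "c") & (z == 1)].mean() - y[(u == "c") & (z == 0)].mean()
    print(true_cace, oracle_est)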
Assumption 3 is plausible when the treatment assignment has a nonnegative effect on the treatment received for each subject, and it holds automatically when the treatment is not available to subjects in the control arm, so that D_i(0) = 0 for all subjects. The monotonicity assumption implies a positive ACE of Z on D. However, under general circumstances, Assumption 3 is not fully testable, since only one of D_i(1) and D_i(0) can be observed.

Assumption 5 (Compound exclusion restrictions): For never-takers and always-takers, we assume P{Y(1), R(1) | U = u} = P{Y(0), R(0) | U = u} for u = a and n. The traditional exclusion restriction assumes P{Y(1) | U = u} = P{Y(0) | U = u} for u = a and n. Frangakis and Rubin (1999) extended it to the compound exclusion restrictions, imposing a similar assumption on the joint vector of the outcome and the missing data indicator. Assumption 5 is reasonable in a double-blinded clinical trial, because the patients do not know the treatment assigned to them and thus Z has no "direct effect" on the outcome and the missing data indicator. However, when the missing data indicator depends on the treatment assigned, the compound exclusion restrictions may be violated. When Z is randomized, Assumption 5 is equivalent to P(Y, R | Z = 1, U = u) = P(Y, R | Z = 0, U = u) for u = a and n.

Assumption 6 (Outcome-dependent nonignorable missing data): For all y; z = 0, 1; d = 0, 1; and u ∈ {a, c, n}, we assume P{R(z) = 1 | Y(z) = y, D(z) = d, U = u} = ρ(y) for some unknown function ρ(·). (1) When Z is randomized, equation (1) becomes P(R = 1 | Y = y, Z = z, D = d, U = u) = ρ(y). Therefore Assumption 2 and Assumption 6 imply that ρ(y) = P(R = 1 | Y = y). In previous papers (Frangakis and Rubin, 1999; O'Malley and Normand, 2005; Zhou and Li, 2006), the LI assumption is used for modeling missing data, which means that the potential outcomes and the associated potential nonresponse indicators are independent within each principal stratum, that is, Y(z) ⊥ R(z) | U for z = 0, 1. Under the ODN missing data mechanism, the missing data indicator depends on the possibly missing outcome Y, which may be more reasonable than the LI missing data assumption in some applications. For example, some patients may have a higher probability of leaving the trial if their health outcomes are not good, and they may be more likely to stay in the trial otherwise. We illustrate the LI and ODN missing data mechanisms using the graphical models in Figure 1. Note that the arrows from Z to R are absent because of the compound exclusion restriction assumption.

Semiparametric Identifiability and Estimation

In this section, we first discuss the difficulty of nonparametric identifiability without assuming a parametric form for either the outcome distribution or the missing data mechanism. If both the distribution of the outcome Y and the missing data mechanism ρ(y) are left unspecified, the model is essentially not identifiable without further assumptions. We then propose a semiparametric method, specifying only the distribution of Y without assuming any parametric form for the missing data mechanism. We show the identifiability and propose a TSMLE of CACE(Z → Y) under the assumption that the distribution of the outcome variable Y belongs to the exponential family.

Semiparametric Identifiability

Under the SUTVA, randomization, and monotonicity assumptions, ξ = P(Z = 1), ω_a = P(D = 1 | Z = 0), ω_n = P(D = 0 | Z = 1), and ω_c = 1 − ω_a − ω_n. These parameters can be identified directly from the observed data. Next we focus on the identification of the parameters of Y.

Assumption 7: The conditional density of the outcome variable Y belongs to the following exponential family: f(y | Z = z, U = u) = c(θ_zu) h(y) exp{Σ_k p_k(θ_zu) T_k(y)}, where c(·), h(·), p_k(·), and T_k(·) are known functions, and θ = {θ_zu : z = 0, 1; u = c, a, n} are unknown parameters. We denote f(y | Z = z, U = u) simply as f_zu hereinafter.
The parametric assumption on the outcome is untestable in general, since the missing data mechanism may depend arbitrarily on the outcome. For a binary outcome, however, Small and Cheng (2009) proposed a goodness-of-fit test for the model under the ODN missing data mechanism. When the randomization assumption holds, the CACE is the difference between the expectations of the conditional densities of Y, that is, CACE(Z → Y) = E{Y(1) | U = c} − E{Y(0) | U = c} = ∫ y f_1c(y) dy − ∫ y f_0c(y) dy. Hence if the parameters of f_zu(y) are identified, the CACE is also identified. The exponential family defined by Assumption 7 includes many common distributions, such as normal distributions N(μ_zu, σ²), exponential distributions Exp(λ_zu) with mean parameter 1/λ_zu, Gamma distributions Gamma(α_zu, λ) with shape parameter α_zu and rate parameter λ, and log-normal distributions Lognormal(μ_zu, σ²), for which the CACEs are CACE_nor = μ_1c − μ_0c, CACE_exp = 1/λ_1c − 1/λ_0c, CACE_gam = α_1c/λ − α_0c/λ, and CACE_log = exp{μ_1c + σ²/2} − exp{μ_0c + σ²/2}, respectively. Next, Theorem 1 shows the identifiability of the parameters θ. The proof of Theorem 1 is provided in Appendix A. Assumption 5 implies θ_1n = θ_0n and θ_1a = θ_0a, which can be written simply as θ_n and θ_a, respectively. If there exists a one-to-one mapping from the parameter set θ to the vector η defined in the theorem, then θ is identifiable and so is CACE. The one-to-one mapping condition seems complicated, but it is reasonable and holds for many widely used distributions, such as homoskedastic normal distributions, exponential distributions, etc. We verify the one-to-one mapping condition for normal and exponential distributions in Appendix C and Appendix D. Other distributions such as heteroskedastic normal distributions, Gamma distributions, and log-normal distributions can be verified similarly. However, counterexamples do exist, and we provide one in Appendix A.

TSMLE of CACE

Because we do not specify a parametric form for the missing data mechanism ρ(y), the joint distribution of (Z, U, D, Y, R) is not specified completely. Thus the MLEs of the parameters are hard to obtain, since the likelihood depends on the infinite-dimensional parameter ρ(y), as shown in Appendix B. In this subsection, we propose a two-step likelihood method to estimate the parameters, which can be viewed as an example of the two-step maximum likelihood studied by Murphy and Topel (2002). In the first step, α, the vector of parameters governing the observed (Z, D) data (i.e., ξ and the strata proportions ω_u), is estimated by maximum likelihood. In the second step, we propose a conditional likelihood method to estimate the parameter set θ, based on the conditional probability of (Z, D) given Y and R = 1. The proposed conditional likelihood function does not depend on the nuisance parameter ρ(y): because ρ(y) enters each joint probability P(Z = z, D = d, Y = y, R = 1) only as a common multiplicative factor, it cancels from P(Z = z, D = d | Y = y, R = 1), and equations (5) to (7) therefore do not involve ρ(y). The left-hand sides of equations (5) to (7) consist of P(Z = z, D = d | Y = y, R = 1) and ξ, with the latter identified in the first step. The right-hand sides of equations (5) to (7) consist of the parameters of interest. Therefore we can estimate θ through a likelihood method; since the right-hand sides do not depend on ρ(y), we do not need to specify the form of ρ(y). Let p_zd(θ, α; y) denote P(Z = z, D = d | Y = y, R = 1). Since (Z, D) given (Y = y, R = 1) follows a multinomial distribution with four categories, the conditional log-likelihood function of (Z, D) can be written as l_2(θ, α) = Σ_{i: R_i = 1} log p_{Z_i D_i}(θ, α; Y_i). From the proof of Theorem 1, the parameter θ can be identified from the second likelihood function (9) after identifying α from the first likelihood function (4). Therefore, by plugging the first-step estimate of α into l_2 and maximizing over θ, we obtain the TSMLE of θ.
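To make the second step concrete, the following sketch implements the conditional likelihood idea for a homoskedastic normal outcome model. It is an illustrative reconstruction rather than the authors' code: the first-step quantities ξ and (ω_a, ω_c, ω_n) are treated as already estimated, and ρ(y) never has to be specified because it cancels from P(Z = z, D = d | Y = y, R = 1).

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def conditional_neg_loglik(params, y, z, d, xi, om_a, om_c, om_n):
        # params: means for (Z=1, complier), (Z=0, complier), always-taker, never-taker,
        # and log of the common standard deviation.
        mu_1c, mu_0c, mu_a, mu_n, log_sigma = params
        s = np.exp(log_sigma)
        f_1c, f_0c = norm.pdf(y, mu_1c, s), norm.pdf(y, mu_0c, s)
        f_a, f_n = norm.pdf(y, mu_a, s), norm.pdf(y, mu_n, s)
        # Joint probabilities P(Z=z, D=d, Y=y, R=1) up to the common factor rho(y):
        p11 = xi * (om_a * f_a + om_c * f_1c)          # Z=1, D=1: always-takers + compliers
        p10 = xi * om_n * f_n                          # Z=1, D=0: never-takers
        p01 = (1 - xi) * om_a * f_a                    # Z=0, D=1: always-takers
        p00 = (1 - xi) * (om_n * f_n + om_c * f_0c)    # Z=0, D=0: never-takers + compliers
        denom = p11 + p10 + p01 + p00                  # rho(y) cancels in num/denom
        num = np.where(z == 1, np.where(d == 1, p11, p10), np.where(d == 1, p01, p00))
        return -np.sum(np.log(num / denom))

    def tsmle_cace(y, z, d, xi, om_a, om_c, om_n):
        # y, z, d: observed (R = 1) units only.
        start = np.array([y.mean() + 1, y.mean() - 1, y.mean(), y.mean(), np.log(y.std())])
        fit = minimize(conditional_neg_loglik, start,
                       args=(y, z, d, xi, om_a, om_c, om_n), method="Nelder-Mead")
        mu_1c, mu_0c = fit.x[0], fit.x[1]
        return mu_1c - mu_0c   # CACE for the homoskedastic normal model

In practice, ξ and the ω_u would be estimated from the observed (Z, D) data (e.g., the sample proportion of Z = 1 for ξ, the proportion of D = 1 among Z = 0 for ω_a, and the proportion of D = 0 among Z = 1 for ω_n), and the bootstrap can be wrapped around both steps to obtain standard errors, as noted below.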
In practice, we can use the bootstrap method to approximate the sampling variance of the estimator of CACE.

Simulation Studies and Sensitivity Analysis

We report simulation studies and sensitivity analyses in order to evaluate the finite sample properties of the estimation methods proposed in this paper. In Tables 2-4, the columns labeled "bias", "Std. dev.", "95% CP", and "95% CI" give the average bias, standard deviation, 95% coverage proportion, and average 95% confidence interval, respectively. Finally, we compare our methods with the MLE proposed by O'Malley and Normand (2005) under the LI missing data mechanism ("LI" in Table 4). We repeat each simulation 10,000 times with a sample size of 4,000 in each case. The data generating processes are the same as for "homo normal", but the missing data mechanisms are LI. Denote γ_du = P(R = 1 | D = d, U = u), and choose (γ_1c, γ_0c, γ_0n, γ_1a) = (0.8, 0.75, 0.7, 0.9), (0.9, 0.7, 0.8, 0.7), (0.7, 0.6, 0.6, 0.8), and (0.6, 0.7, 0.9, 0.7) for "LI1" to "LI4", respectively, as shown in rows 1-4 and 5-8 of Table 4. Since the missing data mechanisms are LI, the "LI" method exhibits very small biases. Although the assumptions required by the "ODN" methods do not hold, their biases are not very large except under missing mechanism LI4. The last case, LI4, has the largest variability among the γ_du's and thus the largest bias for estimating the CACE, since its missing data mechanism has the "strongest" dependence on D and U but not on Y. Next we generate data under the ODN assumption and compare the methods under both the ODN and LI assumptions. Let ρ(y; δ) = I(y ≤ 2) × (0.9 − δ) + I(y ≥ 7) × (0.9 − 2δ) + I(2 < y < 7) × 0.9, where 0 < δ < 0.9. As δ increases, the relationship between Y and R becomes stronger. The data are generated from the same joint distribution as "homo normal" except for the different ρ(y; δ). The results are shown in Figure 2. The method under the ODN missing data mechanism has small bias and a promising coverage property irrespective of δ, whereas the method under the LI missing data mechanism has larger bias and poorer coverage as δ increases.

Application

We use the new methods proposed in this paper to re-analyze a psychiatric clinical trial. It is a double-blinded randomized study comparing the relative effects of clozapine and haloperidol in adults with refractory schizophrenia at fifteen Veterans Affairs medical centers. Clozapine has been found to be more efficacious than standard drugs in patients with refractory schizophrenia, yet it is associated with potentially fatal agranulocytosis. One objective of this trial was to evaluate the clinical effect of the two antipsychotic medications. The dataset has been analyzed in Rosenheck et al. (1997), Levy et al. (2004), and O'Malley and Normand (2005). Some summary statistics of the data are given in Table 5. More details about the trial can be found in Rosenheck et al. (1997) and O'Malley and Normand (2005). In the treatment arm, 203 patients were randomized to clozapine; in the control arm, 218 patients were randomized to haloperidol. The outcome of interest is the Positive and Negative Syndrome Scale score (PANSS), with higher values indicating more severe symptoms. The baseline PANSS is nearly balanced between the two groups. The missing outcome proportions are clearly different in the clozapine group (about 40/203 ≈ 0.20) and the haloperidol group (about 59/218 ≈ 0.27). Hence it is possible that the outcomes are not missing at random.
The primary reasons for dropout in the clozapine group were side effects or non-drug-related reasons. The reasons for discontinuing haloperidol were lack of efficacy or worsening of symptoms. Therefore, the missing data mechanism may well depend on the missing outcome, and we think that the ODN assumption is more reasonable in this case. The estimates of CACE obtained by the different methods are shown in Table 6. In Table 6, "homo" and "hetero" in parentheses after "ODN" correspond to the homoskedastic and heteroskedastic model assumptions, respectively, and "LI" corresponds to the MLE proposed by O'Malley and Normand (2005). The columns of Table 6 give the methods, point estimates, standard errors, and 95% and 90% confidence intervals, respectively. The bootstrap method is used to compute the standard errors and confidence intervals for all methods. From Table 6, we can see the estimates of CACE under the homoskedastic and heteroskedastic assumptions using the data from the psychiatric clinical trial. When we have prior knowledge that the missing data mechanism depends only on the treatment received and the compliance status, the method under the LI missing mechanism will provide the more credible conclusion. However, when we have prior knowledge that the missing data mechanism depends directly on the outcome, we recommend our methods under the ODN missing data mechanism. The newly proposed methods can be used as alternatives to the predominant methods assuming the LI missing mechanism in sensitivity analyses.

Most existing methods (Frangakis and Rubin, 1999; Barnard et al., 2003; O'Malley and Normand, 2005; Zhou and Li, 2006) rely on the LI assumption in order to identify CACE, but the LI assumption may not be reasonable when the missing data mechanism depends on the outcome. Under the ODN missing data mechanism, Chen et al. (2009) and Imai (2009) showed the identifiability and proposed the moment estimator and the MLE of CACE for discrete outcomes. But there were no results for continuous outcomes under both noncompliance and the ODN missing data mechanism. As a generalization of Chen et al. (2009) and Imai (2009), we study the semiparametric identifiability of CACE and propose estimation methods for continuous outcomes under the ODN missing data mechanism. The ODN assumption allows the missing data mechanism to depend on the outcome. However, the missing data processes in practical problems may be more complicated, and they may depend on other variables such as Z, U, and D. For example, a missing data mechanism depending on both the compliance status and the outcome may be reasonable in some real studies. Small and Cheng (2009) proposed a saturated model for P(R = 1 | Z, U, Y), and the models under LI and ODN are special cases of their model. However, their model is generally not identifiable without restrictions on the parameters. It is worthwhile to study the identifiability of CACE under all possible restrictions of P(R = 1 | Z, U, Y) and to perform sensitivity analyses for models lacking identifiability. We consider only cross-sectional data in this paper, and generalizing our methods to longitudinal data is a topic for future research. Since a_i and b_i can be identified from generalized linear models, we can identify all the parameters from the above equations and obtain the corresponding results. Therefore, we can identify CACE = μ_1c − μ_0c.
Structural brain abnormalities in children and young adults with severe chronic kidney disease

Background: The pathophysiology of neurological dysfunction in severe chronic kidney disease (CKD) in children and young adults is largely unknown. We aimed to investigate brain volumes and white matter integrity in this population and explore brain structure under different treatment modalities.
Methods: This cross-sectional study includes 24 patients with severe CKD (eGFR < 30) aged 8-30 years (median = 18.5, range = 9.1-30.5) on different therapy modalities (pre-dialysis, n = 7; dialysis, n = 7; transplanted, n = 10) and 21 healthy controls matched for age, sex, and parental educational level. Neuroimaging targeted brain volume using volumetric analysis on T1 scans and white matter integrity with tract-based spatial statistics and voxel-wise regression on diffusion tensor imaging (DTI) data.
Results: CKD patients had lower white matter integrity in a widespread cluster of primarily distal white matter tracts compared to healthy controls. Furthermore, CKD patients had smaller volume of the nucleus accumbens relative to healthy controls, while no evidence was found for abnormal volumes of gray and white matter or other subcortical structures. Longer time since successful transplantation was related to lower white matter integrity. Exploratory analyses comparing treatment subgroups suggest lower white matter integrity and smaller volume of the nucleus accumbens in dialysis and transplanted patients relative to healthy controls.
Conclusions: Young CKD patients seem at risk for widespread disruption of white matter integrity and, to some extent, smaller subcortical volume (i.e., nucleus accumbens). Especially patients on dialysis therapy and patients who received a kidney transplant may be at risk for disruption of white matter integrity and smaller volume of the nucleus accumbens.

Graphical abstract
Supplementary Information: The online version contains supplementary material available at 10.1007/s00467-021-05276-5.

Studies on the pathophysiology of neurological dysfunction in adults with severe CKD reported smaller brain volumes on magnetic resonance imaging (MRI) scans, indicative of brain atrophy [15][16][17][18][19][20][21][22]. Diffusion tensor imaging (DTI) is an advanced MRI method that is particularly sensitive to the microstructure of white matter tracts [23]. Studies using DTI showed disrupted white matter integrity in adult patients with severe CKD [16,20,21,[24][25][26]. Comparisons of treatment subgroups in adults further indicate that dialysis patients are particularly at risk of brain atrophy and reduced white matter integrity, while the anomalies in transplanted patients are less pronounced [15-22, 27, 28]. Pre-dialysis patients appear to be at the lowest risk of brain abnormalities [28]. Longitudinal studies in transplanted adult patients yield inconsistent findings, with some studies showing improvement and others showing further deterioration of MRI parameters after transplantation [9,27,29,30]. Higher uremic toxin levels and longer dialysis duration were related to more severe brain atrophy and disruption of white matter integrity in adults [18,21,22,24,26,31]. Taken together, previous studies indicate that CKD and kidney replacement therapy are related to structural abnormalities in the adult brain. As rapid brain development takes place well into early adulthood [32], children and young adults may be particularly sensitive to the detrimental effects of CKD on the developing brain.
Indeed, the limited available studies in young CKD patients indicate that CKD may affect normal brain development [9,[33][34][35]. The exact mechanisms by which CKD impacts the developing brain remain unclear, however, and studies comparing patients on conservative therapy, on dialysis, or living with a functional kidney graft are lacking in particular. The current MRI study aims to explore the impact of (1) CKD on brain structure (as quantified by brain volumetry and tract-based spatial statistics on DTI as a measure of white matter microstructural integrity) and (2) different treatment modalities on brain structure. Based on the available literature in adults, it was expected that children and young adults with severe CKD would have smaller brain volumes and lower white matter integrity compared to matched controls, with patients on dialysis therapy particularly at risk.

Participants

We included 24 patients with severe CKD aged 8.0-30.9 years and 21 healthy controls matched for age, sex, and parental educational level. CKD patients were recruited from the Amsterdam University Medical Centers (n = 21); Erasmus Medical Centre, Netherlands (n = 1); and the University Hospital Antwerp, Belgium (n = 2). Inclusion criteria for the CKD group were (1) CKD stages 4-5 on conservative therapy, on peritoneal or hemodialysis, or having received a kidney transplant at least two years prior to enrollment (to ensure stable kidney function), and (2) aged between 8 years (in line with national ethical guidelines) and 30 years. Healthy controls were recruited through participating patients (friends or acquaintances, not siblings) or through local schools and sports clubs. For both patients and controls, exclusion criteria were (1) previously established severe intellectual impairment with overt learning disability; (2) insufficient mastery of the Dutch language; (3) primary sensory disorder (hearing or vision impairments); (4) established skull or brain abnormalities not related to CKD; or (5) co-existing disease with primary or secondary central nervous system involvement interfering with the impact of CKD.

Treatment subgroups

Three treatment subgroups of CKD patients were distinguished: (1) a pre-dialysis group (n = 7) with current estimated glomerular filtration rate (eGFR) < 30 ml/min/1.73 m² on conservative treatment at the time of assessment; (2) a group on chronic hemodialysis or peritoneal dialysis (total n = 7, hemodialysis n = 2, peritoneal dialysis n = 5); and (3) a transplanted group (n = 10) of patients with a functioning kidney graft for at least 2 years and eGFR > 30 ml/min/1.73 m² [36]. CKD patients who had previously undergone kidney transplantation but had an eGFR < 30 at the time of assessment were allocated to either the pre-dialysis or the dialysis group according to their current treatment mode.

Socio-demographic and CKD clinical parameters

Socio-demographic parameters (i.e., age, sex, parental educational level) were collected via an online portal [37], using a self-developed, custom-made inventory. Parental educational level was divided into three categories: (1) low education (primary education, lower vocational education, lower and middle general secondary education); (2) middle education (middle vocational education, higher secondary education, pre-university education); and (3) high education (higher vocational education, university) [38].
The following clinical parameters were extracted from each patient's medical file: age at severe CKD diagnosis (when eGFR first dropped below 30 ml/min/1.73 m²), primary disease, history of relevant comorbidities (extreme prematurity [< 32 weeks of gestational age], malignant hypertension [extremely high blood pressure resulting in organ damage], and convulsions), current eGFR, creatinine and urea blood levels obtained closest to the date of study participation (range of lab measurements: −51 days to +4 days relative to the MRI scan), duration of severe CKD (ratio of the time frame between the moment that eGFR dropped below 30 ml/min/1.73 m² and the time of assessment [months] to calendar age [months], expressed as % of life), type of dialysis received during lifetime (hemodialysis, peritoneal dialysis, or both), dialysis duration (ratio of dialysis duration [months] to calendar age [months], i.e., % of life), type of transplantation (pre-emptive or non-pre-emptive), and time since successful transplantation (ratio of the period since transplantation during which eGFR > 30 ml/min/1.73 m² [months] to calendar age [months], i.e., % of life). eGFR was calculated using the Schwartz formula for patients aged < 18 years [39], and the abbreviated Modification of Diet in Renal Disease formula was used for patients aged > 18 years [40]. Due to large fluctuations in eGFR prior to and after dialysis, the eGFR of patients receiving dialysis was conservatively set at 10 [41].

MRI acquisition and pre-processing

MRI scans were acquired on a 3.0 T Philips Achieva scanner using a 32-channel head coil. T1-weighted and spin-echo diffusion-weighted images using 128 diffusion gradient directions were acquired. Pre-processing of the T1 data produced normalized brain volumes of gray matter, white matter, and bilateral subcortical structures (i.e., thalamus, caudate nucleus, putamen, pallidum, hippocampus, amygdala, and nucleus accumbens). Pre-processing of the diffusion-weighted images produced maps of fractional anisotropy (FA) and mean diffusivity (MD) as the primary measures of white matter integrity (higher FA and lower MD values are consistent with higher integrity [23]). Secondary measures were axial diffusivity (AD) and radial diffusivity (RD), for which a pattern of lower AD and higher RD is consistent with axonal degeneration and/or demyelination [23]. Further details on the MRI acquisition and pre-processing are provided in Supplement 1.

Procedure

The study protocol was approved by the Medical Ethics Committee of the Amsterdam UMC (NL61708.018.17), and all procedures were performed according to the Declaration of Helsinki. Eligible patients were first approached by their treating physician. Eligible controls were approached by the researcher and received a flyer. Those who responded positively were contacted again, either in person or by phone, for additional information and received a comprehensive information letter. After 2 weeks, potential candidates were re-contacted by telephone to answer remaining questions. After obtaining verbal consent to participate, written informed consent was obtained from legal guardians (for children aged < 16 years) and/or children and young adults aged ≥ 12 years. One week prior to the MRI scan, participants and/or parents of participants < 18 years old completed the online questionnaires. The 30-min MRI scan took place at the Amsterdam Medical Centre.
If children expressed that they wanted to become familiar with scanning procedures, a simulation scanner was available and a mock procedure was performed before actual scanning. Socio-demographic and clinical characteristics Statistical analyses were performed using SPSS 26.0 (IBM Corp., 2019). Independent and dependent variables were tested for normality and screened for outliers (± 3 interquartile ranges below/above the lower or upper quartile), which were rescaled using winsorizing (Field, 2009). All groups (CKD group, healthy control group, and treatment subgroups) were compared with each other by age, sex, and parental educational level and clinical parameters using analysis of variance (ANOVA). Analysis of possible confounders Matching between the total CKD and the healthy control group by age, sex, and parental educational level during recruitment controlled for possible confounding effects of socio-demographic factors in these analyses [32,42]. With regard to treatment subgroup comparisons and analyses of clinical parameters, we explored the association between socio-demographic parameters and all outcome measures using correlation analyses (age), t tests (sex), and ANOVAs (parental educational level). Socio-demographic parameters that showed a significant relationship with a particular outcome measure were added as covariates to treatment subgroup comparisons and analyses of clinical parameters on these specific outcomes. Brain volumes in CKD groups and healthy control group Regarding brain volume, group differences (CKD group, healthy control group) were assessed and treatment subgroup differences (pre-dialysis group, dialysis group, transplanted group, healthy control group) were explored using ANOVA. Main effects of treatment subgroup were followed by planned contrasts comparing each treatment subgroup to the healthy control group. Group differences in brain volumes between non-pre-emptively and pre-emptively transplanted patients were explored using t tests. The relationship between clinical parameters (age at severe CKD diagnosis, current eGFR, severe CKD duration, dialysis duration, and time since successful transplantation) was investigated using multivariate linear regressions with backward elimination (criterion for removal: p > 0.10) on brain volumes for which a significant effect of treatment subgroup was found, in order to reduce amount of comparisons. White matter integrity in CKD groups and healthy control group Statistical analyses of DTI maps were performed using randomize [43]. Group differences and treatment subgroup differences were evaluated using voxel-wise comparisons of the skeletonized FA and MD maps. Group differences between the non-pre-emptive and pre-emptive group were also explored. The relation between clinical parameters and white matter integrity was investigated using voxel-wise regression on FA and MD maps. Only in case a significant cluster was identified for FA or MD maps, the origin of the impact of CKD was further investigated by comparing the mean AD and RD extracted from the cluster affected by CKD. 
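To make the outlier-handling step concrete, the following sketch implements one plausible reading of the winsorizing rule described above: values lying more than 3 interquartile ranges below the lower quartile or above the upper quartile are rescaled to that boundary. The exact procedure applied in SPSS may differ in detail, so this is illustrative rather than the authors' code.

```python
import numpy as np

def winsorize_iqr(values, k=3.0):
    """Clip values lying more than k interquartile ranges below Q1 or above Q3,
    mirroring the outlier rule described in the text (k = 3 by default)."""
    x = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return np.clip(x, lower, upper)

scores = [12.1, 13.4, 11.8, 12.9, 55.0]   # 55.0 is an extreme outlier
print(winsorize_iqr(scores))
```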
Finally, to identify the white matter tracts contributing to the neuropathology of CKD, masks of (bilateral) white matter tracts were created for the following tracts: genu, body and splenium of the corpus callosum (CC), corticospinal tract (CST), anterior thalamic radiation (ATR), superior and inferior longitudinal fasciculus (SLF and ILF), inferior frontal occipital fasciculus (IFOF), forceps major and minor (FMa and FMi), cingulate and hippocampal parts of the cingulum bundle (CB), and uncinate fasciculus (UF). Overlap between each white matter tract and the cluster affected by CKD was used to assess (1) which white matter tracts contributed to the impact of CKD (i.e., percentage overlap of each white matter tract with the total cluster affected by CKD) and (2) to what extent each white matter tract was affected by CKD (i.e., percentage overlap of the affected cluster with each complete white matter tract). All statistical testing was two-sided and alpha was set at 0.05. Cohen's d effect sizes are reported where appropriate and were interpreted as small (d < 0.5), medium (0.5 ≤ d < 0.8), or large (d ≥ 0.8) [44]. Socio-demographic and clinical characteristics Socio-demographic and clinical characteristics of the sample are shown in Table 1. Comparisons between the healthy control group, the CKD group, and the treatment subgroups revealed no significant differences in any of the socio-demographic parameters. Regarding clinical parameters, age at severe CKD diagnosis was significantly higher in the dialysis group than in the transplanted group (p = 0.015, d = 1.41). As expected, treatment subgroups differed in terms of eGFR and blood urea level: the transplanted group had higher eGFR and lower blood urea levels than both the pre-dialysis and dialysis group (ps < 0.001, ds > 2.16), while the pre-dialysis group had lower blood urea levels than the dialysis group (p = 0.029, d = 1.20). As expected, time since successful transplantation was longer in the transplanted group than in the pre-dialysis and dialysis group (p < 0.001, d = 2.07 and p < 0.001, d = 2.09, respectively). Other comparisons on clinical parameters did not reveal significant differences. Table 2 provides the results of the volumetric analysis. As compared to the healthy control group, the CKD group had smaller volume of the nucleus accumbens (p = 0.005, d = −0.87). No further significant differences between the CKD group and healthy control group were found for volumes of the gray matter, white matter, or subcortical structures (ps > 0.072). Exploratory analyses comparing treatment subgroups showed a significant main effect of treatment type for volume of the nucleus accumbens (p = 0.022). No significant main effect of treatment type was found for volumes of gray and white matter or other subcortical structures (ps > 0.222). Subsequent analyses revealed that, as compared to the healthy control group, both the dialysis and transplanted groups had smaller nucleus accumbens volume (p = 0.037, d = −0.94; Table 3).
Table 1 Demographic and clinical parameters in the CKD, pre-dialysis, dialysis, transplanted, and healthy control group. Values are displayed as median (range), unless otherwise indicated. Abbreviations: CKD = chronic kidney disease; D = dialysis group; eGFR = estimated glomerular filtration rate; GA = gestational age; PD = pre-dialysis group; Tx = transplanted group. a 1.0 = low education, 2.0 = middle education, 3.0 = high education. Primary diseases: 1: urethral valves (n = 6), dysplasia (n = 1); 2: malignant hypertension (n = 2); 3: due to asphyxia (n = 1), due to septicemia (n = 2); 4: primary focal segmental glomerulosclerosis (FSGS) (n = 1), anti-neutrophil cytoplasmic autoantibodies (ANCA) vasculitis (n = 1); 5: branchiootorenal (BOR) syndrome (n = 1), NPHP1 mutation (n = 1), autosomal dominant polycystic kidney disease (ADPKD) (n = 1), Alport's syndrome (n = 1), inherited FSGS due to INF2 mutation (n = 2), Pax-2 mutation (n = 1); 6: tubulointerstitial nephritis (n = 1), unknown cause (n = 2).
Multiple regression analysis in the total CKD group did not reveal a significant association between clinical parameters and the subcortical structure with an observed effect of treatment subgroup (i.e., nucleus accumbens). White matter integrity in CKD groups and healthy control group Voxel-wise group comparisons revealed that the CKD group had lower FA and higher MD than the healthy control group in a large cluster of white matter tracts (p < 0.001, d = −2.10; p = 0.018, d = 0.74, respectively), as displayed in Fig. 1a. Subsequent analyses showed that within this affected cluster, the CKD group had higher RD (p < 0.001, d = 1.52) and lower AD (p = 0.002, d = −1.01) as compared to the healthy control group. When considering individual white matter tracts, the IFOF, ATR, and SLF had the highest contribution to the cluster affected by CKD, and the CC, CB, FMa, and FMi had the least or no contribution. The IFOF, UF, ATR, and CST were most extensively affected by CKD, and the CC and CB were not or barely affected (Table 4). Exploratory voxel-wise group comparisons of the treatment subgroups on FA and MD maps showed no difference between the pre-dialysis group and healthy control group, while the dialysis group had lower FA in a large cluster of white matter tracts compared to the healthy control group (p < 0.001, d = −1.92) (Fig. 1b). The transplanted group had both lower FA and higher MD in a large cluster of white matter tracts (p < 0.001, d = −2.40; p = 0.034, d = 0.81, respectively). Follow-up analyses on AD and RD maps showed that within the cluster of affected white matter, both the dialysis and transplanted group had higher RD (p = 0.005, d = 1.30; p < 0.001, d = 1.68, respectively) and lower AD (p = 0.003, d = −1.52; p = 0.002, d = −1.27, respectively) compared to the healthy control group. No significant differences were found between the non-pre-emptive and pre-emptive transplantation group. Voxel-wise regression analysis revealed that longer time since successful transplantation was significantly related to lower FA (β = −0.518, p = 0.011), while no relations were found with MD, RD, and AD (Fig. 2). No other significant relations between clinical parameters and white matter integrity were revealed. Discussion The results of this exploratory MRI study suggest that CKD patients aged 8 to 30 years are at risk for widespread abnormality of DTI parameters in the cerebral white matter. These findings are consistent with the hypothesis that CKD results in disruption of white matter integrity indicative of axonal damage and/or demyelination. White matter abnormalities may contribute to the presence of neurocognitive impairments that are considered prominent in the clinical features of CKD [2][3][4][5][6][7]. 
No evidence was found for smaller brain volume of the gray and white matter, although there were indications for smaller volume of a subcortical structure (i.e., nucleus accumbens). Our findings further suggest that longer time since successful transplantation may be related to more severe disruption of white matter integrity. Additional exploratory comparisons between treatment subgroups further suggest that dialysis and transplanted patients may both be vulnerable for disruption of white matter integrity and to some extent smaller subcortical volumes (i.e., in the nucleus accumbens), with no evident differences between the two groups. In contrast, no evidence was found for structural brain abnormalities in pre-dialysis patients with severe CKD. Findings from DTI analyses suggest that our patients have widespread disruption of white matter integrity. The observed effect of severe CKD on white matter integrity is in line with previous studies showing a negative impact of CKD on the adult brain [16,[20][21][22][24][25][26] and the only existing DTI study in young CKD patients that found relatively modest abnormality of white matter integrity [33]. Our study extends the existing literature by showing that the negative impact of CKD on white matter tracts in young patients is widespread, where the larger distal white matter tracts (i.e., IFOF, ATR, SLF, CST, and UF) seem more prominently involved than medial white matter tracts (i.e., CC, CB, FMA, and FMi). Additionally, findings from volumetric analysis further indicate that young patients with severe CKD may have smaller volume of the nucleus accumbens, while no evidence was found for smaller brain volume of gray and white matter or other subcortical structures. These findings contribute to already inconsistent literature on the impact of CKD on brain volumes of young patients [34,35]. The nucleus accumbens is a subcortical structure involved in behavioral arousal and the regulation of slowwave sleep [45]. This may suggest that abnormal development of this structure plays a role in the clinical presentation of CKD in young patients, as sleep is often disturbed in these patients [46]. The observed effect on the nucleus accumbens contrasts with the single previous study on subcortical volumes in young patients, reporting no effects of CKD on subcortical volumes [34]. This discrepancy may be explained by higher CKD severity in our study sample (eGFR < 30, while also including dialysis patients). However, our results are in line with a study in older dialysis patients, which also reported smaller volume of the nucleus accumbens [16]. Taken together, these findings may support the idea that subcortical structures may be more vulnerable in patients with more severe stages of CKD. To conclude, our combined results of DTI and volumetric analyses indicate that abnormal white matter integrity is prominently implicated in the effect of CKD on the young brain. In addition, there is some evidence indicating a negative effect on subcortical structures, more specifically the nucleus accumbens. Exploratory analyses comparing treatment subgroups indicate that dialysis patients have structural brain abnormalities (i.e., disruption of white matter integrity and smaller volume of the nucleus accumbens) relative to unaffected peers, while no evidence was found for brain abnormalities in pre-dialysis patients. 
This is in line with previous studies in children and young adults with CKD on conservative therapy [33,34] and adults on dialysis therapy [3,9,16,[25][26][27]31]. No associations were found between dialysis duration and severity of the observed brain abnormalities. Although other studies in adults have reported associations between dialysis duration and the volume of gray and white matter [18,27,28,31], the results from this study suggest that the potential impact of dialysis on brain structure may not linearly increase over time in young patients with CKD. In line with absence of evidence for brain abnormalities in our patients on pre-dialysis therapy, no significant associations were found between eGFR and brain structure. This may indicate that eGFR poorly reflects the concentrations of the wide range of protein-bound uremic toxins that may have a negative impact on the brain [47]. This is the first study to investigate brain structure in young patients on dialysis, presenting evidence suggesting the presence of abnormal white matter integrity and smaller subcortical volume (i.e., nucleus accumbens). Although positive effects of transplantation on white matter integrity could be anticipated from adult studies [27,29,30], findings from our regression analyses suggest that longer time post-transplantation during the patients' lifetime may be related to more pronounced abnormalities in white matter integrity. To our knowledge, this is the first study to expose this relationship in young CKD patients. Our explorative comparisons of treatment subgroups fit with this finding and suggest that transplanted patients may have widespread disruption of white matter integrity and smaller volume of the nucleus accumbens relative to unaffected peers. These findings align with two recent studies in children and young adults [33,34]. We found no evidence for striking differences between the pre-emptively transplanted and non-pre-emptively transplanted patients, although this analysis was limited by very small sample sizes. Considering our observations suggesting a role for dialysis therapy in the presence of brain abnormalities, a history of exposure to dialysis may also play a role in the brain abnormalities observed in the transplanted group. However, our findings do suggest that effects of severe CKD on brain structure may not be reversible after transplantation in young patients and may even worsen over time. The findings of our study suggest that CKD and kidney replacement therapy may impact on axonal integrity. This could imply that white matter abnormalities may become more pronounced over time due to derailed development of white matter tracts after the initial impact of CKD and/ or kidney replacement therapy. Moreover, post-transplant infections and neurotoxic/microvascular damage by immune-suppressive maintenance therapy may also contribute to detrimental effects on brain structure after transplantation [11,13,48]. The notion that transplanted patients with a normal eGFR, including pre-emptively transplanted patients, and dialysis patients both appeared to have brain abnormalities gives rise to the thought that ischemia induced by impaired cerebral perfusion may play a more important role in the impact of severe CKD on brain structure than uremic toxins. Hypertension, small vessel disease, and impaired cerebral perfusion indeed commonly occur in CKD [5,14,49]. 
This hypothesis fits with the specific structural brain abnormalities revealed in our study, as widespread disruption of white matter integrity has previously been described in adult patients with hypertension, which is in turn associated with impaired cerebral perfusion [50]. This speculation is in line with a recent review, concluding that CKD-related hypertension may be more important as a risk factor for the influence of pediatric CKD on the brain than initially thought [5]. Future studies using a prospective longitudinal design could further clarify the factors that impact on the developing brain, such as duration of CKD, the impact of kidney replacement therapy, course of uncontrolled hypertension, and neurotoxic and vasoactive immuno-suppressive therapy. Obduction or neurophysiological studies may provide further insights in the neuropathology underlying the observed brain abnormalities in patients with CKD. Future research in larger cohorts could take into account the potential roles of co-morbidities and clinical complications of CKD (e.g., premature birth, epilepsy, prolonged severe acidosis). Likewise, longitudinal studies may contribute to better delineation between direct effects of CKD on the brain and secondary effects that may manifest due to derailed brain development. Furthermore, the relevance of structural brain abnormalities for brain function remains to be investigated, for example, in relation to neurocognitive and adaptive functioning in young CKD patients. Strengths and limitations First, we acknowledge our small sample size. Severe CKD is a rare disease in young patients and much effort was done to establish a collaboration with (inter)national child nephrology centers in order to reach as many Dutchspeaking patients as possible. Very cautious interpretation of our findings is necessary, especially with regard to the additional exploratory treatment subgroup analyses. We encourage further investigation of the neuropathology in dialysis and transplanted patients using prospective, long-term longitudinal designs with multiple repeated measurements to increase rigor of the observations and to follow the course of CKD throughout disease and treatment stages. A second limitation is the heterogeneity of our sample in terms of socio-demographic and illness characteristics, which is also partly due to low prevalence of severe CKD in children and young adults. Careful matching of the healthy control group by age, sex, and parental educational level partly accounted for potential group differences and confounding analyses showed that socio-demographic factors did not account for reported group differences. This study also has several strengths, involving the focus on young patients with CKD, the use of a single MRI scanner in an international patient cohort (as multiple scanning sites introduce noise), and the use of advanced quantitative analyses for a comprehensive investigation of brain structure. Conclusion and future directions This study suggests that young patients with severe CKD are at risk of structural brain abnormalities, as detected on MRI by showing widespread abnormality of DTI parameters in the white matter. We further found some evidence suggesting smaller volume of subcortical structures (i.e., nucleus accumbens). Especially patients on dialysis therapy and patients who receive a kidney transplant may be at risk for widespread disruption of white matter integrity and smaller volume of the nucleus accumbens. 
This study suggests that brain abnormalities in young patients with severe CKD may be partly irreversible, possibly even after successful transplantation. Prospective longitudinal studies should determine the effects of kidney transplantation on the developing brain, which may be less favorable than seen in adult patients. Author contribution SL, MK, MSS, FJB, KJO, and JWG participated in research design. SL conducted the research and performed data analyses. SL, MK, JO, KJO, and JWG were involved in the data analysis plan. All authors participated in interpretation of outcomes and in the writing of the paper. Funding This study was partly funded by the Dutch Kidney Foundation. Conflict of interest The authors declare no competing interests. Fig. 2 (panel labels: time since successful transplantation, L, FA, MD) Illustration of voxel-wise TBSS comparisons of FA and MD maps, using threshold-free cluster enhancement correction, showing a significant negative correlation between time since successful transplantation and FA values in CKD patients. The image is shown at the following coordinates: (x = 34, y = −10, z = 26) and displays the whole brain skeleton (at FA > 0.3, in green), overlaid on the standard MNI152 1 mm T1 brain. Significant group differences are "thickened" towards the full width of the white matter tract to increase visualization. Abbreviations: FA = fractional anisotropy; MD = mean diffusivity; L = left. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-11-21T06:16:33.171Z
2021-11-20T00:00:00.000
{ "year": 2021, "sha1": "fa742f965cfbd70d0423407391c12733cf4d678a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00467-021-05276-5.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "eae182e3a36a4c779f7097c73fdcc7263916024d", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
258833352
pes2o/s2orc
v3-fos-license
AD-MERCS: Modeling Normality and Abnormality in Unsupervised Anomaly Detection Most anomaly detection systems try to model normal behavior and assume anomalies deviate from it in diverse ways. However, there may be patterns in the anomalies as well. Ideally, an anomaly detection system can exploit patterns in both normal and anomalous behavior. In this paper, we present AD-MERCS, an unsupervised approach to anomaly detection that explicitly aims at doing both. AD-MERCS identifies multiple subspaces of the instance space within which patterns exist, and identifies conditions (possibly in other subspaces) that characterize instances that deviate from these patterns. Experiments show that this modeling of both normality and abnormality makes the anomaly detector performant on a wide range of anomaly types. Moreover, by identifying patterns and conditions in (low-dimensional) subspaces, the anomaly detector can provide simple explanations of why something is considered an anomaly. These explanations can be both negative (deviation from some pattern) and positive (meeting some condition that is typical for anomalies). Introduction Anomaly detection concerns the identification of "abnormal" instances in a data set. What is considered abnormal depends, of course, on the application context; there is no single definition of the term. An instance may be abnormal because it is a global outlier (i.e., it has values for certain attributes that deviate strongly from typical values), a local outlier (values are atypical among instances that are otherwise similar to this one), because it breaks a certain pattern in the data (consider e.g. the sequence "abababaababab"), because it creates a pattern where none is expected (e.g. "vlajcoinjahjooooooookajdcna"), and for many other reasons. As a result, many different approaches to anomaly detection exist, all with different strengths and weaknesses. [Fig. 1 caption, fragment: these EVs break the typical range-capacity pattern, but no individual range or capacity is unrealistic in isolation. The green cross is an accidental inlier: many other EVs made in this factory on this date are anomalous, raising suspicions about this particular EV.] In this paper, we investigate an entirely novel approach to anomaly detection that has a number of unique properties. The approach returns models that can both identify and explain anomalies, and these anomalies can be in the form of local outliers, global outliers, but also accidental inliers: instances for which there is reason to believe they are anomalies, even though they occur in a high-density region. More specifically, the approach identifies low-dimensional subspaces in which certain patterns exist. An instance may be anomalous because it deviates from such a pattern, or - and this is novel - precisely because it adheres to the pattern. Figure 1 illustrates these different anomaly types on a toy dataset. The new approach is based on so-called multi-directional ensembles of classification and regression trees (MERCS) [19]. A MERCS model is an ensemble of decision trees, but differs from classical ensembles in that the trees it contains do not all try to predict the same target attribute, as in standard supervised learning settings. Rather, any attribute can play the role of target variable in any given tree. A MERCS model thus contains a set of predictive trees, each of which expresses a pattern that governs the co-occurrence of values of any attributes. 
By the nature of decision trees, each such pattern typically involves relatively few attributes, and as such can be seen as identifying a low-dimensional subspace within which the discovered pattern is visible. By using standard tree-learning methods, the method identifies subspaces in which informative patterns are present. Such a MERCS model could in principle be used to find anomalies by checking which instances violate the discovered patterns. The violated pattern can serve as an explanation of why the instance is seen as an anomaly. But because we have an ensemble of such patterns, which are to some extent independent views on what it means to be normal, more is possible. An instance can be considered more likely to be anomalous when it violates many different patterns. Now, when a single leaf of some decision tree contains many such instances, it is reasonable to assume that the leaf defines a set of conditions that are only fulfilled by anomalous instances; we call this an anomalous context. Instances in this leaf that were not yet (convincingly) identified as anomalies can now be assumed to be anomalous. Thus, an interaction is created between different subspaces, or "views" of the data, where two types of evidence can be exchanged and can reinforce each other: deviation from a normal pattern in one subspace, and belonging to an anomalous context in another subspace. The above describes the basic intuition behind the proposed method, which is called AD-MERCS (Anomaly Detection using MERCS models). Besides this, AD-MERCS implements a number of other ideas, more on a technical level, which are explained in the technical sections of this paper. In the remainder of this paper, we briefly describe the state of the art in anomaly detection, present the details of the novel AD-MERCS approach, and present an empirical evaluation. The empirical evaluation is qualitative, showing that AD-MERCS can indeed explain anomalies both in terms of normal patterns and in terms of anomalous contexts, and quantitative, showing that AD-MERCS performs well over a wide range of anomaly detection problems. At the same time, the quantitative evaluation reveals some issues with current benchmarks that appear to have gone unnoticed till now. Related Work The detection of local and global outliers has been extensively studied in the literature. A value for a given attribute is called a global outlier if it falls outside of the spectrum of typical values. It is called a local outlier if it falls outside the spectrum of values typically observed among similar instances. The set of similar instances is sometimes called a (local) context, and methods differ in how they define that context. The above definition assumes a given attribute of interest. Often, there is no single attribute of interest, and an instance is called an anomaly as soon as it has one or more attributes with outlier values. Identifying the attributes involved in the anomaly then becomes a task in itself. For local outliers, these attributes include not only the outlier attribute but also the attributes used for computing the local context. We refer to this set of attributes as the relevant subspace. In what follows, we provide a brief overview of the anomaly detection literature and discuss how each approach quantifies the anomalousness of an instance, discovers contexts, and finds relevant subspaces. Nearest neighbor approaches such as kNN [9] and LOF [4] flag an instance as anomalous if it is far away from its nearest neighbors. 
kNN uses an absolute threshold for this, whereas LOF compares the distance to typical distances among other neighbors. These methods implicitly identify relevant contexts (whatever is near constitutes the context) but not subspaces, as the distance metric predetermines the relevance of each dimension. Subspace-based methods (HBOS, iForest and HiCS) detect anomalies in multiple subspaces and aggregate the results into a final anomaly score. HBOS [12] constructs a univariate histogram for each attribute. An instance's anomaly score is the inverse product of the height of the bins it belongs to. Because HBOS uses univariate histograms, it detects global anomalies only. iForest [14], or isolation forest, is an ensemble of isolation trees, which isolate each instance in its own leaf by random splits. iForest assumes anomalies are easier to isolate and therefore have a shorter average path-length from root to leaf, across trees. As the splits are random, iForest uses highly randomized subspaces and contexts, and lacks interpretability. HiCS [13], or High-Contrast Subspaces, actively looks for subspaces S where, for each attribute A i ∈ S, the marginal P (A i ) differs from its conditional distribution P (A i |S \ A i ). Anomaly scores are then computed by running LOF in each subspace, and taking the average of those LOF-scores. This combination benefits from HiCS' subspaces and LOF's contexts. Residual-based methods such as ALSO [15] convert the unsupervised ADproblem into a supervised learning problem. 1 ALSO learns for each attribute a model that predicts it from the other attributes. It computes an instance's anomaly score by calculating for each attribute the difference between its observed and predicted value (called a residual ), dividing that by the attribute's standard deviation, and aggregating this into one score. Briefly summarized, an instance is flagged as anomalous if it violates many functional dependencies among attributes of normal cases. Clearly, the choice of predictive model type (decision tree, k-NN, . . .) determines the extent to which the anomaly detector identifies a context and a subspace, and to what extent it is interpretable. Residual-based methods strongly rely on the assumption that normal instances are characterized by higher predictability of their attributes. However, it is perfectly possible that many attributes are inherently unpredictable (e.g., a coin flip), or that some attributes are more predictable among anomalies than among normal cases. As will become clear later on, residual-based methods can fail badly when their assumption is violated. Positioning of the proposed method. Like residual-based methods, the proposed method AD-MERCS finds relevant subspaces and contexts by learning a set of predictors. However, AD-MERCS explicitly avoids any supposed equivalence between predictability and normality. First, to quantify the anomalousness of an instance, AD-MERCS uses continuous density estimations instead of residuals. This density-based mechanism enables AD-MERCS to overcome a main drawback in residual-based methods: AD-MERCS also captures "unpredictable" non-functional dependencies. Second, as mentioned in the introduction, AD-MERCS recognizes that modeling abnormal behavior may help detect more anomalies, including "accidental inliers", such as the green cross in Fig. 1. All AD methods discussed till now label individual instances: a conclusion about one instance does not affect the conclusion about another. 
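To make the residual-based scheme just described more concrete, the sketch below trains one per-attribute predictor on the remaining attributes and averages standardized absolute residuals per instance. It is a simplified stand-in for ALSO, not the authors' implementation: ALSO additionally weights each model by its predictive quality, and the choice of decision trees and of mean aggregation here is an assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def also_style_scores(X, max_depth=4):
    """Residual-based anomaly scores: for each attribute, predict it from the
    others, standardize the residuals, and average them per instance."""
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    residuals = np.zeros((n, m))
    for target in range(m):
        inputs = [a for a in range(m) if a != target]
        model = DecisionTreeRegressor(max_depth=max_depth).fit(X[:, inputs], X[:, target])
        pred = model.predict(X[:, inputs])
        std = X[:, target].std() or 1.0
        residuals[:, target] = np.abs(X[:, target] - pred) / std
    return residuals.mean(axis=1)   # higher means more anomalous

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))
data[:, 1] = 2 * data[:, 0] + 0.1 * rng.normal(size=200)   # a learnable pattern
data[0, 1] = 10.0                                           # break the pattern
print(also_style_scores(data)[:3])
```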
Identifying accidental inliers, however, requires recognizing commonalities among anomalies, meaning that conclusions about a group of instances will affect the conclusions about its individual members. This is somewhat similar to the guilty-by-association principle in graph-based anomaly detectors [2], where nodes may be called anomalous just because they have many anomalous neighbors. The graph structure here provides a complementary view to the node labels. AD-MERCS essentially uses the different subspaces created by the trees in the ensemble as complementary views to achieve a similar effect in the context of tabular data. AD-MERCS Formally, AD-MERCS tackles the following problem: given an M-dimensional dataset D, calculate an anomaly score δ(x_i) for each instance x_i. Intuitively, δ should be higher for instances that in practice would more likely be considered anomalous. In this paper, that means instances that are deviating, or that fulfill some condition that is typical for deviating instances. To explain how AD-MERCS works, we start from the basic MERCS approach, and then describe the adaptations made. MERCS Given a dataset D as described above, MERCS learns an ensemble of decision trees. For each attribute of the dataset, we learn one tree to predict that target attribute. Trees are learned in standard, top-down fashion: attributes are selected for splits one at a time, based on their informativeness for the target attribute, and the resulting tree represents a function from a subset of the available input attributes to the target. Thus, each tree T_i corresponds to a subspace S_i that contains its input and target attributes, and this subspace is chosen so that the information in it is maximally predictive for the target; that is, a maximally strong "pattern" is detected. As a result, each leaf in a decision tree can be used as a context (a group of similar instances) for anomaly detection: to decide whether a particular instance is anomalous, it can be compared with the instances with which it shares a leaf, as instances within a leaf are expected to adhere to the same pattern between input and target attributes. We refer to [19] for more details on MERCS. Now, to do anomaly detection with a MERCS model we could use its decision trees in exactly the same way as ALSO uses its predictive models: aggregate the normalized residuals of all predictions made for a given instance. However, this assumes that predictability and normality are equivalent, and we argued earlier that this assumption is problematic in two ways. First, unpredictable yet normal behavior (e.g. a coin flip) is wrongly seen as anomalous. Second, predictable yet abnormal behavior escapes detection, as any predictable behavior leads to low residuals. [Fig. 2 caption: Left: density-based scoring correctly considers 0.5 as anomalous (likelihood of 0); the density estimation κ is shown in blue and the likelihood function ω used for scoring in orange. Right: because the tree predicts 0.5 for any instance classified in this leaf, residual-based scoring considers instances with a target value of 0.5 perfectly normal in this leaf (the residual is 0), while this value can reasonably be assumed anomalous as it has never been observed before.] The contributions of this work are essentially solutions to these two issues, and in the remainder of this section we explain how they work: 
in Section 3.2, we introduce a density-based scoring mechanism so AD-MERCS can handle unpredictable behavior; in Section 3.3, we ensure that AD-MERCS recognizes contexts where abnormal behavior is the norm as anomalous contexts. MERCS with Density Estimation The first contribution of this work introduces a density-based scoring mechanism in AD-MERCS. As explained before, residual-based methods do not work well under all circumstances. For instance, suppose a leaf contains instances with target values 1.52, 1.55, 1.57, 2.81. The value 2.81 is an outlier, and indeed, if the mean of the leaf is used as a prediction, we will find that 2.81 has a greater residual than the other values. In contrast, consider a leaf with values 1.4, 1.5, 1.6, 3.4, 3.5, 3.6. The mean value for this leaf is 2.5, but there are clearly two clusters here. If an instance with value 2.5 gets sorted into this leaf, it can reasonably be considered an outlier (it does not fit the clusters), yet it has a zero residual. Clearly, the basic idea behind residual-based predictions does not work well here (Fig. 2). In AD-MERCS, instead of using residuals, each tree in the ensemble scores the instances based on the likelihood of its target value as indicated by a histogram (if the target is nominal) or some local probability density (if the target is numeric). More specifically, in each leaf, AD-MERCS estimates the density κ of the target attribute using Gaussian kernel density estimation [17], selecting the kernel bandwidth using the approach of Botev et al. [3]. Inside a leaf, lower values of κ indicate less likely target attribute values. However, there are two reasons we cannot do anomaly detection using κ values directly. First, distinguishing high values of κ is pointless: all those indicate "normality" anyway. Second, κ values are incomparable between leaves: densities are normalized in terms of their area, not height, so their values depend on the local range of the target variable. As a solution, the κ values are translated into likelihood values ω using the following equation: where threshold τ j (ρ) is exceeded by ρ% of the instances in leaf j. In other words, the (100 − ρ)% lowest κ values are linearly mapped to the interval [0, 1], the rest is mapped to 1. The closer to 0 an instance's ω value is, the more likely it is anomalous. Fig. 2 illustrates this density-based scoring on a bimodal distribution of target values. So far, we assumed this likelihood estimation to take place in each leaf; however, a more thorough analysis shows room for further improvement. We use decision trees to find low-impurity leafs, but after a split, the impurity of a child node may still exceed the impurity of its parent. This happens if the impurity decrease from the parent to one child node is large enough to compensate for the impurity increase from the parent to the other child node. The pattern that holds in the parent is then actually stronger than the pattern that holds in the high-impurity child node. Therefore, after learning the full tree, if a leaf l j has a higher impurity than one of its ancestors, the instances classified in that leaf are scored using the likelihood function of the lowest-impurity ancestor of l j . This ensures that we always use the best pattern available to score a particular instance. Using the procedure above, each tree in the ensemble assigns its own anomaly score to each individual instance. 
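The equation itself is not reproduced in this text, but the verbal description pins it down: κ values below the leaf threshold τ_j(ρ) are mapped linearly to [0, 1], and the rest to 1. A minimal sketch of this per-leaf scoring is given below, using scipy's Gaussian KDE with a hand-picked bandwidth for the toy example rather than the Botev bandwidth selector used by the authors.

```python
import numpy as np
from scipy.stats import gaussian_kde

def leaf_likelihood_fn(target_values, rho=20.0, bw_method=None):
    """Per-leaf likelihood omega as described above: kappa is a Gaussian KDE of
    the leaf's target values, tau is the kappa value exceeded by rho% of the
    leaf's instances, kappa below tau maps linearly to [0, 1), the rest to 1.
    (The authors select the bandwidth with Botev's method; scipy's default or a
    user-supplied factor is used here.)"""
    vals = np.asarray(target_values, dtype=float)
    kappa = gaussian_kde(vals, bw_method=bw_method)
    tau = np.percentile(kappa(vals), 100.0 - rho)   # exceeded by rho% of the leaf
    return lambda x: float(min(kappa([x])[0] / tau, 1.0))

# Bimodal leaf from the running example: 2.5 lies between the two clusters.
omega = leaf_likelihood_fn([1.4, 1.5, 1.6, 3.4, 3.5, 3.6], bw_method=0.15)
print(round(omega(1.5), 2), round(omega(2.5), 2))   # high likelihood vs. near 0
```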
To aggregate these scores, we interpret the anomaly scores 1−ω j (x) as probabilities p i and aggregate them using a noisy-or gate [11] with inhibition probability 1 − γ for each input: Intuitively, using an or gate reflects that an instance is considered an anomaly if it looks anomalous in at least one subspace. Inhibition reduces the trust we put in every single estimate: a higher inhibition probability (a lower γ) causes the aggregation to require more evidence, from all subspaces combined, before an instance gets assigned a high anomaly score. This procedure yields an anomaly detector that can model non-functional dependencies among normal cases, but does not yet explicitly model anomalies. In the following subsection, we describe how AD-MERCS detects anomalous contexts (i.e. contexts where abnormal behavior is the norm) and how this influences the anomaly scores. AD-MERCS To address the flawed assumption that abnormal behavior is necessarily unpredictable, the second contribution of this work ensures that AD-MERCS can identify anomalous contexts. By doing so, AD-MERCS can also exploit patterns in abnormal behavior, and detect more anomalies. Assume for a moment that we know which instances are anomalous. If a leaf exclusively contains anomalous instances, it is reasonable to assume that the conditions defining the leaf define an anomalous context; in fact, this is standard in supervised anomaly detection where new instances are considered anomalous if they end up in such a leaf. We want AD-MERCS to have a similar capacity to flag an instance as anomaly simply because it belongs to an anomalous context. This is not trivial because AD-MERCS works in an unsupervised manner, but the fact that a single AD-MERCS model contains many trees makes an iterative approach possible. In the first iteration, we compute an anomaly score δ i for each training instance x i , using the approach explained above. Based on these scores, we can now assign an anomaly value to each context (i.e. leaf) of each tree: a context with many high-scoring instances is more likely to be an anomalous context. Next, these context scores can be used to adapt the anomaly scores of individual instances: when an instance appears abnormal in a normal context or belongs to an anomalous context, its score is raised. On the basis of the new scores, context scores are recomputed, and so on, until convergence or a stopping criterion is reached. Note that the whole procedure does not require retraining the ensemble, and the tree structures remained unchanged during the process. Specifically, after obtaining the ω j functions that indicate how anomalous any instance is in context c j , AD-MERCS updates an array of δ i values (one per instance x i in the training set) and λ j values (one per context c j ), using the following update rules: The v ij values are context-specific estimates for the anomalousness of an instance x i that take into account both the (current estimate of the) abnormality of context c j , and its own abnormality in that context. They are essentially a weighted mean of 1 (if the context is anomalous) and 1 − ω j (x i ) (if it is normal). These estimates are aggregated into a global estimate δ i for the abnormality of x i , as explained before. The δ i values of all the examples in a context are next aggregated into a single λ j value for that context, using what is essentially a noisy-and: a context is considered anomalous when it covers only anomalous instances. 
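The update-rule equations are likewise not reproduced in this text, so the sketch below is only one plausible realization consistent with the verbal description: v_ij as a λ_j-weighted mean of 1 and 1 − ω_j(x_i), δ_i as a noisy-or over trees with inhibition probability 1 − γ_δ, and λ_j as a noisy-and over the instances in context c_j with parameter γ_λ. The concrete functional forms are assumptions, not the authors' exact rules.

```python
import numpy as np

def admercs_style_iteration(one_minus_omega, leaf_of, gamma_delta=0.2, gamma_lambda=0.2, n_iter=10):
    """Interleaved instance/context scoring, as a plausible realization of the
    description above (not the authors' exact update rules).
    one_minus_omega: (n_instances, n_trees) array holding 1 - omega_j(x_i).
    leaf_of: (n_instances, n_trees) integer array with each instance's leaf id per tree."""
    n, t = one_minus_omega.shape
    lam = {(j, l): 0.0 for j in range(t) for l in np.unique(leaf_of[:, j])}  # context scores
    delta = np.zeros(n)
    for _ in range(n_iter):
        v = np.empty((n, t))
        for j in range(t):
            lam_j = np.array([lam[(j, l)] for l in leaf_of[:, j]])
            # v_ij: weighted mean of 1 (anomalous context) and 1 - omega (normal context)
            v[:, j] = lam_j + (1.0 - lam_j) * one_minus_omega[:, j]
        # delta_i: noisy-or over trees, with inhibition probability 1 - gamma_delta per input
        delta = 1.0 - np.prod(1.0 - gamma_delta * v, axis=1)
        # lambda_j: noisy-and over a context's members (all must look anomalous)
        for j in range(t):
            for l in np.unique(leaf_of[:, j]):
                members = delta[leaf_of[:, j] == l]
                lam[(j, l)] = float(np.prod(1.0 - gamma_lambda * (1.0 - members)))
    return delta, lam

rng = np.random.default_rng(1)
scores = rng.random((100, 5))                  # stand-in for 1 - omega_j(x_i)
leaves = rng.integers(0, 4, size=(100, 5))     # stand-in leaf assignments
delta, contexts = admercs_style_iteration(scores, leaves)
print(delta[:3])
```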
The γ δ and γ λ parameters provide some control over how strong the combined evidence should be. Algorithm 1 summarizes the AD-MERCS method in its entirety. It consists of three phases. First, all trees are learned and their leaves are returned as contexts; next, for each leaf the density is estimated (using the leaf itself or its ancestor with the lowest impurity); finally, the interleaved computation of instance scores and context scores is performed. Explanations Finally, AD-MERCS does not only detect anomalies but also explains them on a level that domain experts understand. For each instance x i flagged as anomalous, AD-MERCS knows exactly which trees were responsible and offers concise explanations of how these trees arrived at this conclusion. Similarly, for each anomalous context, AD-MERCS immediately provides an interpretable description of this group as a whole. This greatly facilitates the extraction of actionable knowledge from the AD-process. Concretely, if a tree T assigns an individual instance x i a high v ij value, that instance is deemed anomalous by that tree. Note that this can be due to either x i 's abnormal behavior within a normal context, or simply because it belongs to an anomalous context. We discuss these two scenarios separately. First, if x i belongs to a normal context c j , it apparently violates the pattern that holds in the node n used to score instances in c j . In this case, the explanation consists of two parts: an interpretable description of node n, which can be obtained by traversing the tree; and the normal target values, as captured by the likelihood function ω j , together with instance x i 's atypical target value. Second, if x i belongs to an anomalous context c j , all we really need as explanation is an interpretable description of this context: in this scenario, the context description characterizes the anomalies directly. Illustration We illustrate AD-MERCS' explanations using the Zoo dataset [8], which describes different animals. The most anomalous animals according to AD-MERCS are scorpion, platypus and seasnake. Fig. 3 shows how AD-MERCS explains why the scorpion is anomalous. Several reasons are mentioned. Perhaps the most appealing one is that scorpions, being invertebrates, have a tail (invertebrates typically do not have a tail). Not having a backbone defines the context here, "tail" is the attribute with the atypical value. Further, animals that do not lay eggs typically have teeth; scorpions do not lay eggs, yet lack teeth: another reason for considering them anomalous. The platypus and seasnake are considered anomalous because animals typically give milk or lay eggs (i.e. one or the other, not both); a platypus does both, while a seasnake does neither. AD-MERCS also identifies an anomalous context of flying hairy animals with members: honeybee, housefly, wasp, moth, vampire bat and fruitbat. The combination of "hair" and "airborne" attributes both being true is apparently rare in the dataset, and covers a highly diverse range of animals, implying that each animal on its own is somewhat atypical among its peers for having this combination. This anomalous context is the highest-scoring in this dataset, but its anomaly score is still relatively low. Experimental Evaluation Our experiments answer the following research questions: We ask Q1 to ensure that, generally speaking, AD-MERCS is a competitive allround anomaly detector. Then, Q2 systematically verifies specific, desirable properties that we tried to embed in AD-MERCS. 
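Before turning to the experiments, here is a small sketch of the kind of interpretable context description mentioned in the Explanations section above: traversing a scikit-learn tree from the root to an instance's leaf and collecting the split conditions along the way. The function and variable names are ours, and this is illustrative rather than the authors' code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def describe_context(tree, feature_names, x):
    """Return the conjunction of split conditions on the path from the root to
    the leaf into which instance x is sorted, i.e. an interpretable description
    of that context (sketch only)."""
    t = tree.tree_
    node, conditions = 0, []
    while t.children_left[node] != -1:           # -1 marks a leaf in sklearn
        feat, thr = t.feature[node], t.threshold[node]
        if x[feat] <= thr:
            conditions.append(f"{feature_names[feat]} <= {thr:.2f}")
            node = t.children_left[node]
        else:
            conditions.append(f"{feature_names[feat]} > {thr:.2f}")
            node = t.children_right[node]
    return " AND ".join(conditions)

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = X[:, 0] + (X[:, 1] > 0.5)                    # target depends on two attributes
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(describe_context(tree, ["a0", "a1", "a2"], X[0]))
```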
All experiments follow the same structure. We state a hypothesis, describe datasets and discuss the limitations of that particular setup. Then, we report and discuss results and draw conclusions. Each of the following subsections is devoted to one of the experiments. First, in subsection 4.1, we test general anomaly detection performance. Second, in subsections 4.2 and 4.3, we test the subspace and context aspects of AD-MERCS in isolation, followed by subsection 4.4, where we consider subspaces and contexts simultaneously. Finally, in subsection 4.5, we test AD-MERCS' capability to detect accidental inliers. Common to all these experiments are the algorithms, the evaluation metrics and the hyperparameter tuning. Additionally, in the supplementary material, we provide characteristics of all datasets, the parameter grids and selected parameters of our grid search, the dataset-by-dataset performance of each algorithm, and parameter sensitivity plots for AD-MERCS. Evaluation Metrics. We measure performance by two common metrics in AD [1]: the area under the receiver operating characteristic curve (AUC) and the average precision (AP). Per experiment, we report an algorithm's average performance and rank, along with a critical distance D_crit determined by a Nemenyi post-hoc test [7] with significance level p = 0.05. A lower rank for algorithm A than for algorithm B means that A outperforms B. A difference between two ranks is significant if it exceeds the critical distance D_crit. If there are multiple versions (due to subsampling) of the same dataset, performance is averaged across versions prior to calculation of the ranks. Hyperparameter Tuning. Per algorithm and experiment, we determine one set of hyperparameters using a grid search. We select the hyperparameters that yield the highest average AUC across a representative sample of the datasets of an experiment (to keep the execution time under control; see footnote 4). General Performance To test whether AD-MERCS is competitive with state-of-the-art AD algorithms (Q1), we use 23 real-world datasets from the AD literature: the Campos benchmark (Campos et al. [5]; see footnote 5). The main limitation of this experiment is that the characteristics of the anomalies in these datasets are unknown and beyond our control [10,18]. Therefore, this experiment does not allow us to draw any definitive conclusions with regard to an algorithm's capability to detect anomalies in subspaces, contexts, etc. Footnote 3: We use LOF, kNN and HBOS from pyod [20]; iForest from scikit-learn [16] and HiCS from ELKI (github.com/elki-project/elki). The ALSO implementation is our own, using scikit-learn decision trees. Footnote 4: For Campos and CamposHD, the first subsample version of each dataset [5]; for the HiCS benchmark and Synth-C&S, the first dataset of each dimensionality; and for Synth-C and Synth-I, we subsample 10 out of the 30 datasets. Footnote 5: For each dataset we use the normalized versions without duplicates with a 5% contamination. For datasets where a version with 5% contamination is absent, we perform our own subsampling. [Table 1: average AUC, AP, and average rank per algorithm for each benchmark.] Results. The leftmost column of Table 1 summarizes our results. Top performers on the Campos benchmark are kNN, iForest and LOF, closely followed by HBOS, AD-MERCS and HiCS. Note that, among these methods, no significant performance difference exists. ALSO, however, does record a significantly lower performance. 
Additionally, HBOS' impressive performance on this experiment indicates that most of the anomalies in the Campos benchmark are, in fact, global outliers. Conclusion. Generally speaking, AD-MERCS is a competitive anomaly detector, since it performs at par with the state-of-the art. HBOS' surprising performance is something to keep in mind when interpreting the results of this experiment; global outliers are comparatively easy to detect, and consequently, this benchmark offers a quite limited perspective on the true capabilities of an anomaly detector. Anomaly Detection with Subspaces In this experiment, we aim to show that AD-MERCS can find the right subspace(s) to do anomaly detection (Q2.1). First, to test robustness to uninformative dimensions, we built CamposHD, a version of the Campos benchmark with synthetic irrelevant dimensions: for a dataset with n attributes, we add 4n uninformative attributes uniformly sampled from [0, 1]. 6 Second, the HiCS benchmark (Keller et al. [13]) tests an algorithm's capability to find useful subspaces for AD: it consists of seven synthetic high-dimensional datasets with dense clusters in low-dimensional subspaces and anomalies that fall outside of these clusters. Both collections of datasets are useful to investigate the subspace aspect of the AD-problem in particular. Results. The second and third column of Table 1 summarize our results. LOF and kNN struggle when uninformative dimensions and subspaces come into play: their performance drops between Campos and CamposHD, and they are amongst the worst performers on HiCS. iForest' random subspaces are somewhat robust to uninformative dimensions, but are inadequate to detect anomalies scattered across multiple subspaces. HBOS (which only detects global anomalies) becomes the top performer on CamposHD, its performance is almost unchanged between Campos and CamposHD meaning that the algorithm is robust to the uninformative dimensions. However, HBOS struggles to detect HiCS' anomalies hidden in non-trivial subspaces. HiCS, being an algorithm designed for subspace-AD, records strong performances on both experiments. ALSO becomes a much more attractive option with subspaces coming into play. Finally, AD-MERCS claims the top spot on the HiCS benchmark and lies within D crit of the top performers (HBOS in terms of AUC and HiCS in terms of AP) on CamposHD. Conclusion. AD-MERCS performs at-par with the top algorithms on CamposHD and is the top performer on the HiCS benchmark, which indicates that it handles subspaces effectively. Anomaly Detection with Contexts To test whether AD-MERCS can detect local outliers by identifying the right context(s) to do anomaly detection (Q2.2), we construct 30 simple 2D patterns with obvious anomalies (two example patterns are shown in Fig. 4). The patterns and anomalies are chosen such that anomalies are invisible in the marginal distribution of any single attribute, which means that choosing proper contexts is necessary for their detection. We call this synthetic benchmark Synth-C. Proficiency on this benchmark indicates an algorithm's capability to identify appropriate contexts for anomaly detection. Results. The fourth column of Table 1 summarizes our results. When it comes to finding proper contexts for these simple 2D datasets, kNN performs best but AD-MERCS and LOF are within critical distance of kNN. HBOS consistently ranks last on this experiment, which is of course due to the fact that individual marginals carry no information here. 
ALSO appears to underperform too; this happens because it cannot capture the non-functional dependencies that are present in some of the datasets. Conclusion. AD-MERCS is able to identify adequate contexts to spot the anomalies on a collection of 2D benchmark datasets. Anomaly Detection with Subspaces and Contexts To test whether AD-MERCS simultaneously finds good subspace(s) and the context(s) (Q2.1 & Q2.2), we introduce Synth-C&S, a collection of 30 benchmark datasets where successful AD requires both context and subspaces. Each dataset here contains 5 relevant 2D subspaces, each subspace containing a simple 2D pattern with obvious anomalies (exactly like those used in Synth-C), and a varying number of irrelevant dimensions randomly sampled from [0, 1]. Results. The fifth column of Table 1 summarizes our results. When the detection of anomalies requires both finding relevant subspaces as well as finding adequate contexts, AD-MERCS is consistently the top-ranked method. Within D crit , we encounter ALSO. HiCS' performance is still reasonable and captures third place. All other algorithm struggle under the conditions imposed by this experiment, as illustrated by a large performance gap. Conclusion. If successful AD requires both finding relevant subspaces as well as use of proper contexts, AD-MERCS is consistently the top-ranked algorithm. Anomaly Detection with Accidental Inliers To actually verify that AD-MERCS is able to detect accidental inliers (Q2.3), we introduce Synth-I, a collection of 30 datasets where some anomalies can only be detected by realizing they belong to an anomalous context. Each dataset contains two subspaces: first, a simple 2D pattern with clear local outliers (again, exactly like those used in Synth-C) and second, a 2D subspace with a few clusters including at least one anomalous cluster : a cluster where the majority of instances are local outliers in the first 2D subspace. However, a minority of instances in the anomalous cluster will be perfectly normal in the first subspace; they are anomalous because, in the second subspace, they resemble other anomalies. Results. The last column of Table 1 summarizes our results. If one wants to detect instances similar to known anomalies, AD-MERCS' detection of anomalous contexts works: AD-MERCS is the best performer here. Conclusion. AD-MERCS is able to detect accidental inliers. Summary We conclude that AD-MERCS is a competitive anomaly detector in general, regardless of what kind of anomalies are present (Q1). More importantly, AD-MERCS finds and exploits relevant subspaces (Q2.1) and contexts (Q2.2) and is especially effective when both are needed simultaneously. Finally, AD-MERCS can detect accidental inliers (Q2.3). Conclusions In this paper, we presented AD-MERCS, a novel anomaly detection approach that is unique in that it exploits both normal and anomalous patterns between attributes to detect and explain anomalies. Due to this property, it can detect accidental inliers, a type of anomalies not supported by the current state-ofthe-art, and it can provide both positive and negative explanations for why something is an anomaly. We have shown experimentally that AD-MERCS is competitive with the state of the art on known anomaly detection benchmarks and a top performer on high-dimensional datasets because AD-MERCS effectively exploits subspaces and contexts. 
Finally, we identified an overly strong focus on global outliers in existing benchmark datasets and contributed a novel set of intuitive and interpretable benchmarks with local outliers.
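To make the benchmark constructions described above more concrete, here is a minimal sketch of how CamposHD-style uninformative dimensions and a Synth-C-style 2D pattern could be generated. This is not the authors' released code: the sine-shaped dependency, the noise levels, and the dataset sizes are illustrative assumptions; only the "add 4n uniform attributes" rule and the "anomalies invisible in any single marginal" idea come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_uninformative_dims(X):
    """CamposHD-style augmentation: for a dataset with n attributes,
    append 4n uninformative attributes drawn uniformly from [0, 1]."""
    m, n = X.shape
    noise = rng.uniform(0.0, 1.0, size=(m, 4 * n))
    return np.hstack([X, noise])

def synth_c_pattern(n_normal=500, n_anomalies=10):
    """Synth-C-style 2D pattern: normal points follow a dependency between
    the two attributes (a sine curve here, as an illustrative choice);
    anomalies get y values that are typical marginally but wrong for their x,
    so they only become visible when both attributes are viewed together."""
    x = rng.uniform(0.0, 1.0, size=n_normal)
    y = 0.5 + 0.2 * np.sin(2 * np.pi * x) + rng.normal(0.0, 0.02, size=n_normal)
    normal = np.column_stack([x, y])

    xa = rng.uniform(0.0, 1.0, size=n_anomalies)
    x_other = rng.uniform(0.0, 1.0, size=n_anomalies)
    ya = 0.5 + 0.2 * np.sin(2 * np.pi * x_other)  # off the curve with high probability
    anomalies = np.column_stack([xa, ya])

    X = np.vstack([normal, anomalies])
    labels = np.concatenate([np.zeros(n_normal), np.ones(n_anomalies)])
    return X, labels

X, labels = synth_c_pattern()
X_hd = add_uninformative_dims(X)   # 2 informative + 8 uninformative attributes
print(X.shape, X_hd.shape)         # (510, 2) (510, 10)
```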
2023-05-23T01:16:30.905Z
2023-05-22T00:00:00.000
{ "year": 2023, "sha1": "3811640fc9b29d2af607069131d7b878c23b4225", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3811640fc9b29d2af607069131d7b878c23b4225", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
265250936
pes2o/s2orc
v3-fos-license
Exploring the Impact of English Language Proficiency on Business Communication Effectiveness: A Comprehensive Research Analysis : In today's globalized environment, the significance of English cannot be underestimated. It stands as the most widely utilized universal language worldwide and is the primary language in international business. For Bangladesh seeking to navigate the challenges and opportunities of the free market era, mastering global languages, particularly English, is imperative. Proficiency in English equips them for effective communication within the SAARC as well as BIMSTEC Economic Community and EU countries. In the business realm, English acts as the international language, ensuring seamless communication and facilitating successful cooperation among diverse stakeholders. Effective communication is indispensable in business operations; without it, collaboration between producers, distributors, and consumers falters, leading to decreased entrepreneurial effectiveness. Mastering English empowers us to engage confidently with international business partners, fostering connections with people worldwide. Bangladesh falls within the Outer Circle category.In the context of Bangladesh, where English is not officially the second language, its importance remains paramount.Kachru (1997) In addition to facilitating socioeconomic activities and operating electronic devices, English has played a crucial role in fostering international relations.According to Bryson (2009), over 300 million people worldwide speak English, and the rest are seemingly attempting to learn it.Language serves as a key factor in establishing connections with individuals across the globe, making it easier for us to build relationships with diverse cultures.The language itself can serve as a foundation for starting businesses, as effective communication is at the core of successful ventures in the global arena.Communication serves as the process through which individuals, groups, organizations, and society create and utilize information to establish connections with their environment and others.Even when spoken language is not understood, communication can still occur through gestures, such as smiling, shaking heads, and shrugging.This form of communication encompasses various aspects like notifications, announcements, and promotions, all vital in our increasingly interconnected and multilingual world.In the contemporary era of globalization, the economic landscape has intensified business competition, both domestically and globally.Communicators need to develop intercultural skills, editorial expertise, and public speaking abilities, all crucial at the international level, as emphasized by Lemana, Rosa, and Juwardi (2017).Miina (2014) highlights that poor English language skills can impede effective communication, leading to misinterpretation, frustration, and barriers among employees.Mastery of all four English language skills-reading, speaking, listening, and writing-is essential.English holds a paramount position, inseparable from information and communication skills, especially in the realm of business.Typically, individuals working for international companies, including employers, employees, and secretaries, are expected to possess effective communication skills and conduct all correspondence and documentation in English.Proficiency in English is essential for them to seamlessly continue their learning and confidently present themselves during interviews.Talking on the phone with representatives of foreign 
companies, negotiating, and expressing themselves in English should not pose a challenge.Moreover, the ability to organize trips and create documents to facilitate productive business conversations enhances the professionalism of individuals who have mastered and maintain proficiency in English for business communication.As businesses expand, the pressure to find more effective ways of communicating with workers and the outside world intensifies.Companies operating on a global scale are obligated to excel in English proficiency, leading them to recruit employees fluent in the language.English serves as the primary language in international business, scientific research, and academic publications, with over 80% of academic journals written exclusively in English (Van Weijen, 2012).Effective communication is crucial as it involves the responsibility of providing information, and misunderstandings with communication counterparts are not uncommon.In the business context, communication plays a vital role and is one of the fundamental aspects of running a successful company.This communication can take various forms, including verbal and non-verbal methods, encompassing opinions, ideas, and information.It can occur on a personal or interpersonal level, highlighting its importance in the business environment.Effective communication is paramount in the realm of business.Without clear communication channels between producers, distributors, and consumers, the efficiency of entrepreneurial activities is compromised.Therefore, understanding how to apply English in the business context is crucial.In the domain of language arts within business, narration plays a significant role.Narration is essential to establish a connection between consumers and the products being offered, fostering a sense of inner attachment and relationship.Additionally, the choice of words in business communication holds immense importance, as each word carries a distinct meaning.Entrepreneurs should also consider catering to the language preferences of consumers.For instance, in countries like India and Singapore, businesses adapt to the market they enter by using languages such as English, Hindi, or Chinese to facilitate effective communication with their target audience.This adaptability ensures a stronger connection between businesses and consumers, enhancing the overall success of entrepreneurial ventures.However, about 47.87% of people in Bangladesh spoke English with the reason that it opened up more international opportunities.Introducing English as a corporate language is not an easy task, since using English brings both advantages and disadvantages for companies.English proficiency can significantly enhance the perceived value of a product.The focus of this study was to explore the importance of English in the business context and the impact of English as a communication tool on achieving business goals. 
Discussion 2.1 Communication Business Many people recognized the importance of understanding communication phenomena in order to understand business symptoms.If we viewed business and communication as both a social process, we would come to the conclusion that communication was business and, conversely, business is communication.It means, at the symptom level, communication and business are integrated symptoms.It could not be separated.Business and communication both started their activities by carrying out the production process.More details could be explained as followed: a.In the realm of communication, the output is referred to as information.Information encompasses data, facts, and knowledge that is conveyed from one entity to another.On the other hand, in the context of business, the products generated are goods and services.Businesses create tangible goods and intangible services to meet the needs and demands of consumers. However, it's worth noting that there are instances where information and goods/services intersect.In specific contexts, information itself can be considered a product, especially in media and publishing industries.For example, newspapers, magazines, television programs, and online content deliver information to consumers, essentially making information a commodity.In this way, information becomes a product in the business sense, blurring the line between pure information and traditional goods/services.b.Then, business and communications conveyed the product to other parties.In communication, the other party could be called the communicator, audience, destination, etc. Meanwhile in business activities other parties were often referred to as consumers, clients, buyers, and so on.Communication and business interactions evoke specific reactions and encounter distinct obstacles.Commercial exchanges serve as a means to establish partnerships, leverage intellectual resources, and facilitate the exchange of ideas.Whether it involves a product, service, or organization, the objective is to create value for the business in operation.Business communication requires a profound understanding of both internal and external aspects of the business.Internally, communication encompasses elements such as the company's vision, strategy, plans, corporate culture, values, fundamental principles, employee motivation, and ideas.According to Lathifah (2007), companies are not only competing within domestic markets but also on the international stage, necessitating effective communication to foster strong relationships between companies.External communication, on the other hand, involves areas such as branding, marketing, advertising, customer relations, public relations, media relations, and business negotiations.Regardless of the form it takes, these communication efforts share a common goal: to create business value.By understanding and addressing both internal and external communication needs, businesses can enhance their operations, build meaningful relationships, and thrive in the competitive global market.The effectiveness of communication within an organization hinges on the consensus among the individuals engaged in communication activities.Several factors influence this consensus, including the clarity of the message, the manner in which the message is delivered, the behavior exhibited during communication, and the communication situation, including the specific place and time of the interaction.Organizational communication typically employs a combination of communication 
methods, including verbal, written, and broadcast channels. This diverse approach makes it easier and clearer to convey and retrieve information, ensuring that messages are comprehensible and accessible to all involved parties. By carefully considering these factors and utilizing various communication methods, organizations can foster understanding, collaboration, and successful outcomes in their internal interactions. The Importance of English in Business In this millennial era, English is one of the languages we should master. By 2015, one organization reported that the average employee's English language score had increased since its English initiative began (Harvard Business Review, 2015). A recent EU study found that 94% of upper-secondary school European students are learning English as a foreign language (Eurostat, 2013). To a certain extent, English has become the global language of business all over the world, and in certain industries, such as shipping and aviation, English has become the standard official language. In order to keep up with the times, English is needed in many fields, especially in business. If you are proficient in English, you can promote your brand overseas without problems. Basically, if you are interested in developing your business, you should be proficient in English as a communication tool between you and your customers, so that your business is easier to operate, because by understanding English you are able to market around the world. In addition, people working in a company also need English to meet business needs. This means that English is important to everyone, whether or not they come from a country where English is an official language. Graph 1. Percentage of employers that said English is important (Cambridge English, 2016). Despite the low rankings of these countries and regions, at least 50% of employers still cited English as important to their organization/company. Companies usually need employees who are good at marketing, and in this case marketing often involves marketing between countries or multinational companies, so good English is required. More and more multinational companies use English as the common corporate language, such as Airbus, Daimler-Chrysler, Nokia, Renault, Samsung, and Microsoft Beijing (Harvard Business Review, 2015). Some companies even send employees to school for this, free of charge, with the cost borne directly by the company. There are several important roles for English in business: a. In the current job market, proficiency in English has become a mandatory requirement for employment in various companies. This prerequisite is especially crucial in the era of globalization, where companies strive to stay competitive and up-to-date with the changing times. Employees fluent in English have access to greater job opportunities. According to Mooij and Keegan (1994), in industries like advertising, mastering English is deemed highly important, with companies often refusing to hire individuals who do not speak English. When applying for jobs, many companies prefer candidates with English proficiency. This preference arises from the need to communicate not only with fellow nationals but also with international colleagues and clients in the workplace. Naturally, individuals with higher levels of English proficiency have better chances of securing positions in reputable and competitive companies. Graph 2. Percentage of employers that said English is significant (Cambridge English, 2016).
We can see from the graph that English is significant to the industry/company. In countries and regions where English is not an official language, the industries least likely to cite English as important (fewer than two-thirds of employers saying English is important to their organization) were: 1) Construction and Property, 2) Recruitment and HR Services, and 3) Retail. As per Blair and Jeanson (1995), having a high proficiency in the English language, particularly in oral communication skills, is invaluable for resolving various workplace issues. This applies not only on a national level but also internationally, as in the case of Bangladesh entering the BRICS Economic Community, opening up broader business opportunities. Becoming a business entrepreneur has become more accessible due to the rise of online businesses. With online platforms, entrepreneurs can expand their market reach, selling products to a wider audience, both domestically and internationally. This global connectivity emphasizes the importance of strong English language skills, enabling effective communication and participation in the international business arena. b. Making us ready to be a successful businessman According to Wachter and Maiworm (2011), the use of English in higher education is also increasing. For example, English-medium undergraduate and master's degree programmes in Continental Europe have more than tripled over the last seven years. It is no wonder that so many people today can use English well. Some people keep learning English year after year; this keeps them from falling behind in life, whether in education or in the business world. Picture 1. People learning English worldwide (British Council). Understanding English can bring many benefits to anyone who intends to become a businessman. As businessmen who want to start a business, we of course have to equip ourselves with communication skills. By mastering English, it is easier for us to communicate with anyone. Especially on the internet, there are a lot of creative ideas written in English, and we understand these creative ideas more easily with English. After that, we can try to apply these ideas to our own ventures. As we know, unique or interesting ideas can make our business grow fast because they attract the interest of many people, which in turn helps the business grow. Every businessman wants a thriving business, and once the business has really grown, it requires further promotion to introduce it to the international market. By mastering English, we can carry out such promotion without confusion; that is, we can use English itself to promote our business. That way, introducing our business to anyone poses no obstacle once we have mastered English. Not only that, we can also cooperate with other businessmen abroad and need not be at a loss when meeting a businessman from abroad. Even though we could employ translators, mastering English is much more important because it can help prevent us from being scammed. c.
Helps strengthen relationships with business relations Running a successful business requires establishing connections and nurturing relationships to facilitate its growth.Enrolling in business courses and mastering English can significantly enhance our ability to communicate with business partners from different countries, offering substantial benefits.Proficiency in English enables us to confidently engage with international business associates, making it easier to introduce our business to partners abroad.Moreover, English proficiency empowers us to participate in discussions with individuals from diverse parts of the world.For instance, when seeking input related to our business, being able to communicate effectively in English allows us to engage with a wide range of experts and stakeholders, enriching our understanding and improving our business strategies.Mastering English opens up opportunities to seek input on various forums, even those frequented by foreigners.Engaging with a diverse audience allows us to gather valuable insights and opinions from people worldwide.By leveraging our English language skills, we can actively exchange ideas and perspectives with a broad spectrum of individuals.This exchange not only enriches our understanding but also helps us discover opinions that can significantly contribute to the advancement of our business.Engaging in discussions and seeking input from a global community enhances our business strategies, fostering innovation and growth. The effect of English as a communication tool on business goals In our contemporary globalized business landscape, an increasing number of local companies in Bangladesh are venturing into the international market.Simultaneously, international companies are expanding their presence in the local market.Consequently, the use of English as the language of business is becoming increasingly imperative.This necessity becomes evident in cases where negotiations falter due to misunderstandings with potential foreign partners, project timelines are disrupted due to communication breakdowns with clients from other countries, job applications at foreign companies are rejected due to insufficient English proficiency, and opportunities to collaborate with international-grade companies are missed due to the unavailability of English-speaking workers.This trend is not limited to Bangladesh; it is a global phenomenon.For example, in China, the English language learning market is experiencing significant growth, with an annual growth rate of approximately 20%, primarily driven by school-aged learners (Technavio, 2016).This growth underscores the increasing recognition of the importance of English proficiency in the global job market and business interactions, emphasizing the need for individuals and organizations to invest in English language skills to remain competitive and successful in the international arena.Using foreign languages in daily communication has significant impacts.According to Williams and Chaston (2004), individuals who excel in English and can speak multiple foreign languages have a notable advantage in the market.People who regularly communicate in foreign languages tend to appear more intelligent, a fact supported by scientific research.Engaging in communication using foreign languages not only enhances language skills but also sharpens decision-making abilities.Bilingual individuals, in particular, exhibit a heightened awareness and a broader perspective, allowing them to understand their environment 
better.In the context of business, this proficiency in foreign languages is immensely valuable, especially when communicating with international partners.The ability to speak multiple languages not only facilitates effective communication but also demonstrates a high level of adaptability and cultural understanding, which are crucial attributes in global business interactions.In countries and regions where English was not the official language, large enterprises, particularly those with more than 2,500 employees, were more likely to emphasize the importance of English proficiency.However, it is noteworthy that there was surprisingly little variation across organizations of different sizes.Regardless of the size, at least two-thirds of employers in various organizations considered English to be important, highlighting its significance in the global business landscape.This uniform emphasis on English proficiency suggests its universal importance, irrespective of the organization's scale, emphasizing the need for employees to possess strong English language skills to thrive in diverse professional environments.In today's modern world, marked by challenges and fierce competition, individuals are advised to possess not only a high level of education but also specific skills.Among the most crucial skills in this era is proficiency in English.As elaborated earlier, English is a global language.To stay ahead of the general populace, individuals need to master English comprehensively, honing skills in reading, speaking, listening, and writing.This mastery of English not only enhances communication abilities but also opens doors to various opportunities in both personal and professional spheres, ensuring individuals are well-equipped to navigate the complexities of the contemporary global landscape. Graph 4. The English language skill employers said was most important for their organization language status comparison (Cambridge English, 2016). Based on the most important skills, reading and speaking were identified as crucial abilities.In countries and territories where English was not an official language, reading was the most important skill sought by employers.Conversely, in English-speaking countries and regions where English was an official or de facto official language, speaking was deemed most essential for employers.According to Jones and Alexander (2000), English serves as the primary medium of communication for business professionals across various countries.This communication in English may involve speakers from diverse language backgrounds, such as Swedish and German, Japanese and Italian.English enables individuals, including those whose native language is not English, to communicate effectively with native English speakers, underscoring its significance in international business interactions.Certainly, using English can boost self-confidence as proficiency in a foreign language enhances one's self-assurance.Confident individuals tend to appear more attractive to others, making social interactions, including meeting new people and making friends, less daunting.Engaging with diverse individuals broadens one's perspective and enriches life experiences.Effective communication and openness to others facilitate the process of making new friends.Developing such social skills not only fosters personal relationships but also paves the way for successful business interactions, allowing individuals to achieve optimal outcomes in their professional endeavors. 
Conclusion In the contemporary globalized era, effective communication has gained paramount importance. With the rapid pace of business and technological advancements, international opportunities for exchanges have expanded significantly. English, being a major business language, plays a crucial role in business development. Mastering English is essential as it opens doors to promising career prospects. English proficiency is a valuable asset in the global workforce, and most cross-border exchanges are conducted in English. International companies typically expect their employees to be fluent in English, making it indispensable for business and professional needs. In today's interconnected world, English remains the language of business across various sectors, including manufacturing, services, information technology, and the internet. Proficiency in English offers several advantages, such as increased job opportunities, preparation for successful entrepreneurship, and stronger relationships with business partners. Being proficient in English not only enables smooth business operations but also instills confidence in communicating with partners and clients. When establishing a business, having sufficient capital and adhering to regulations is crucial, and English proficiency is increasingly becoming one of these essential requirements. Mastering English opens up numerous opportunities for business development. In essence, being proficient in English is a necessity in the modern workplace. Multinational companies prioritize candidates with strong English language skills, and using English for business purposes yields positive outcomes for companies, making it a valuable asset that benefits both individuals and organizations. Graph 3. Percentage of employers that said English is significant for their organization, by organization size (Cambridge English, 2016).
2023-11-17T16:11:35.451Z
2023-11-13T00:00:00.000
{ "year": 2023, "sha1": "ca5db2c20c926d9d3e792257e3b1aa71e7fa81fe", "oa_license": "CCBYSA", "oa_url": "https://www.ijfmr.com/papers/2023/6/8809.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "1b4a0d6a3655ca28c665584ec9f7fdeace612c7a", "s2fieldsofstudy": [ "Business", "Linguistics", "Education" ], "extfieldsofstudy": [] }
210921067
pes2o/s2orc
v3-fos-license
The differential Galois group of the maximal prounipotent extension is free Let $F$ be a characteristic zero differential field with algebraically closed constant field, and consider the compositum $F_u$ of all Picard--Vessiot extensions of $F$ with unipotent differential Galois group. We prove that the group of $F$ differential automorphisms of $F_u$ is a free prounipotent group. Introduction Throughout, F denotes a characteristic zero differential field with derivation D and algebraically closed field of constants C. The compositum F u of all Picard-Vessiot extensions of F with unipotent differential Galois group is a (generally infinite) differential Galois extension of F whose (pro)unipotent differential Galois group we denote by U Π(F ). We show that this group is free prounipotent. In fact, what we will show is that U Π(F ) is projective. In [5,Prop. 2.8,p.86] it is shown that projective prounipotent groups are free. The converse is also true, as will be shown in Section 3 below. Recall that a proalgebraic group P is projective in the category of proalgebraic groups if for every surjective homomorphism α : A → B of proalgebraic groups and for every homomorphism f : P → B of proalgebraic groups there is a homomorphism φ : P → A of proalgebraic groups such that f = α • φ [2,Definition 8,p. 29]. (Note: the definition in [2] said "epimorphism" instead of "surjective". It is clear from the context that "surjective" was meant. In the category of (pro)algebraic groups epimorphisms are not necessarily surjective, so that the definition of projective using epimorphism is far more restrictive than that using surjective.) A prounipotent group U is projective in the category of prounipotent groups provided it satisfies the above definition where A and B are restricted to be prounipotent. By [5] (see below), to test the projectivity, and hence freeness, of a prounipotent group U it suffices to consider the case of α's where both A and B are unipotent and the kernel of α is isomorphic to G a . We can, moreover, assume f is surjective. By the preceding, to see that the prounipotent group U Π(F ) is projective, we need to show that for any surjection α : A → B of unipotent groups with kernel K isomorphic to G a and any surjective homomorphism f : U Π(F ) → B there is a homomorphism φ : U Π(F ) → A such that f = α • φ. If α has a splitting, namely if there is a β : B → A such that α • β = id B , then we can take φ = β • f . Hence we can concentrate on the case that α is not split. In the non-split case, if there is a φ it must be surjective. In other words, to see that U Π(F ) is projective we must show that for a non-split homomorphism of unipotent groups α : A → B with kernel K isomorphic to G a and surjection f : U Π(F ) → B there is a surjection φ : U Π(F ) → A such that f = α • φ. We can of course assume that B = A/K. By Galois theory, a surjection U Π(F ) → B means we have a Picard-Vessiot extension E B of F with differential Galois group B, and a surjection U Π(F ) → A means we have a Picard-Vessiot extension E A of F with differential Galois group A. Thus the existence of φ amounts to starting with a Picard-Vessiot extension E B of F with Galois group A = B/K and finding a Picard-Vessiot extension E A of F with Galois group A which contains E B such that E B = (E A ) K . In Galois theory this is known as the embedding problem. 
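For readability, the lifting problem just described can be collected into a single display. This is only a restatement of the definitions above, with G(E/F) denoting the differential Galois group as used later in the paper; no new assumptions are introduced.

```latex
% The embedding problem for U\Pi(F): given the exact row and the surjection f,
% find \varphi with f = \alpha \circ \varphi.
\[
1 \longrightarrow K \cong \mathbb{G}_a \longrightarrow A
  \xrightarrow{\;\alpha\;} B \longrightarrow 1,
\qquad
f \colon U\Pi(F) \twoheadrightarrow B,
\qquad
\exists\,?\;\; \varphi \colon U\Pi(F) \longrightarrow A
\ \text{ with }\ \alpha \circ \varphi = f .
\]
% Field-theoretic reformulation: starting from a Picard--Vessiot extension
% E_B of F with G(E_B/F) \cong B = A/K, find a Picard--Vessiot extension
% E_A \supseteq E_B of F such that
\[
G(E_A/F) \;\cong\; A
\qquad\text{and}\qquad
E_B \;=\; (E_A)^{K}.
\]
```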
Thus proving that U Π(F ) is projective amounts to a solution of the embedding problem for extensions of unipotent groups by G a ; this is the content of our first main result, Theorem 3 below. As noted, this implies that U Π(F ) is free prounipotent. This is refined in our second main result, Theorem 4, were it is shown that U Π(F ) is free prounipotent on a set of cardinality equal to the C vector space dimension of F/D(F ). The group Π(F ) of F differential automorphisms of the compositum of all Picard-Vessiot extensions of F is a proalgebraic group whose maximal prounipotent quotient is U Π(F ). If Π(F ) is projective (a very strong property: this implies all embedding problems over F are solvable) then so is U Π(F ). Bachmayr, Harbater, Hartmann, and Wibmer [1] have shown that Π(F ) is free, and hence projective, in some cases. A preliminary version of this work was originally presented at the conference "Galois Groups and Brauer Groups" held in honor of Jack Sonn. Embedding Problem We retain the notation from the introduction: F denotes a characteristic zero differential field with algebraically closed field of constants C. Its derivation is denoted D F , with the subscript sometimes omitted. As noted in the introduction, to prove that U Π(F ) is projective we need to solve an embedding problem which starts with a Picard-Vessiot extension E of F with unipotent differential Galois group B and a non-split unipotent extension A of B by G a . In this context the Picard-Vessiot ring of E is shown in Proposition 1 to be isomorphic to F [B] (hence a polynomial ring over F and a fortiori a UFD) and the surjection A → B is split as varieties. We are going to show in Theorem 1 that this embedding problem has a solution when the hypotheses are weakened to only require that B is a proalgebraic group such that F [B] is a UFD with a B invariant derivation extending D F such that the quotient field F (B) has no constants except C. This makes F (B) a possibly infinite Picard-Vessiot extension of F with differential Galois group B. We further require that the differential ring has no non-trivial principal differential ideals. Then we solve the embedding problem when A → B is a non-split extension of proalgebraic groups with kernel isomorphic to G a which is split as a surjection of provarieties. We fix the following notation for the group G a : The action of G a on C[G a ] (left action on functions from right translation action on the group) is then given by We also introduce some notational conventions for extensions by G a which are split as varieties: be a central extension of (pro)algebraic groups over C which splits as varieties. Denote the map G → G by g → g. Denote the variety section G → G by ψ so that ψ(g) = g. Then φ(g) := ψ(g) can be regarded as a function on G Taking G a to be a subgroup of G and using the conventions of Notation 1, we define the function y We call this the y − φ representation of elements of G. Then With these conventions, we have the following solution of some Embedding Problems for central extensions of G a which are split as varieties but not as groups. The result has a statement about factorality and units in both its hypotheses and conclusions; this is to enable the result to be used inductively. Note that, in the notation of the statement of Theorem 1, F (G) ⊃ F and F (G) ⊃ F have no new constants, so they are Picard-Vessiot extensions with groups G and G, respectively, and F (G) ⊂ F (G), solving the associated embedding problem. 
and q|D(q) then q is a unit be a central extension of C groups which splits as varieties but not as algebraic groups over F . Then there is a derivation on F [G] extending D which commutes with the G action and is such that F (G) has no new constants. Moreover, F [G] is a unique factorization domain and if 0 = q ∈ F [G] and q|D(q) then q is a unit. Proof. Using the conventions of Notations 1 and 2, and the fact that G a is central,we compare the product of y−φ representations with the y−φ representation of the product: Then we combine to define the function α on G × G, which, since φ(g) = ψ(g), can also be viewed as a function on G × G: We can then use α to describe the action of G on y: The second of these equations implies that α( Since D needs to extend the derivation on F [G], we need only define it on y. Thus to define D it will suffice to set D(y) = f ∈ F [G] for some appropriate element f . For D to be G equivariant, we want D(h·y) = h·D(y), which by the above means f + D(α(·, h)) = h · f . Note that this is a condition on f . We are going to show that σ is a cocycle. We calculate: σ(hk), using x to stand for the variable argument also symbolized by (·): σ(hk)(x) = D(α(x, hk)) = D(y(x(hk)) − y(x) − y(hk)) = D(y(x(hk)) − y(x)) − D(y(hk)) = D(y(x(hk)) − y(x)) (y is C valued so D(y(hk)) = 0); and , then, every cocycle is a coboundary. It follows that σ = δ(f ) for some f ∈ F [G]. (Note: the results of [4] are for linear algebraic groups; the extensions to proalgebraic groups are straightforward. ) We use the f such that . This is precisely the condition obtained above for the G invariance of D. Thus D extends the derivation of F [G]. Next, we want to show that F (G) has no new constants. We can regard F (G) as the quotient field of F (G)[y]. Since F (G) has no new constants, we claim that F (G) has no new constants provided that f is not a derivative in F (G). This is an elementary direct calculation; for example, see [ which implies that q|D(q). By assumption, this means that q is a unit of F [G] and that f is the derivative of Since splits as a group extension. Since C is algebraically closed, this means that the extension already splits as groups over C. This contradiction means f is not a derivative in the quotient field of F [G]. We conclude that F (G) has no new constants. We also observe that [y] is a unique factorization domain. To complete the proof, suppose 0 = q ∈ F [G] and q|D(q). We can write q as a polynomial in y with coefficients in F [G], say q = n k=0 a k y k with a n = 0. Then D(q) = (D(a k )y k + ka k y k−1 )f has degree at most n. Thus D(q) = bq for some b in F [G]. In particular, D(a n ) = ba n . Since a n |D(a n ), this means a n is a unit. We differentiate q/a n : D( q a n ) = a n D(q) − qD(a n ) a 2 n ) = q a n a n b − D(a n ) a n so q a n |D( q a n ). So we can replace q by q/a n and hence we can assume a n = 1. Then D(q) has degree less than n, so q|D(q) implies that D(q) = 0. Thus q is a constant of F (G), and we know these are in C. In particular, q is a unit of F [G]. This completes the proof of the theorem. Projective Galois Groups We are going to apply Theorem 1 when the group G is (pro)unipotent, and hence so is G. This implies that F [G] is a polynomial ring, and in particular a unique factorization domain, all of whose units are in F . With that application in mind, we make an observation about (infinite) Picard-Vessiot extensions whose differential Galois group is (pro)unipotent. Proposition 1. 
Let E ⊃ F be a (possibly infinite) Picard-Vessiot extension with (pro)unipotent differential Galois group H. Then its Picard-Vessiot ring R is isomorphic to F [H] as a ring and an H module. Before proving the proposition we make some preliminary remarks. Proposition 1 is a prounipotent version of Kolchin's Theorem, [6, Theorem 5.12 p. 67], which says that if the affine algebraic group G is the differential Galois group of the Picard-Vessiot extension K ⊇ F with Picard-Vessiot ring The isomorphism is given explicitly as is action of G on T , and f : T → F is any F algebra homomorphism. (Note that the facts that f exists, and that the proof that h is an isomorphism, use that T is an affine F algebra.) Suppose further that there is an F algebra homomorphism Now suppose K ⊇ F is an infinite Picard-Vessiot extension, G its proaffine differential Galois group, and T its Picard-Vessiot ring. Suppose further that there is an F algebra homomorphism T → F . Then we can define the F algebra G equivariant homomorphism h = (f ⊗1)∆ as above. For any closed normal subgroup is an isomorphism by Kolchin's Theorem applied to the Picard-Vessiot extension K H ⊇ F with affine differential Galois group G/H. Since T is the union of the T H as H ranges over all closed normal subgroups of G, and the restrictions of h to the T H 's are isomorphisms, we conclude that Kolchin's Theorem holds in the infinite Picard-Vessiot case provided there is a map f : T → F . And just as in the affine case (with the same proof), if there is an F algebra . Such an isomorphism is the conclusion of Proposition 1. Thus to prove Proposition 1, it suffices to show the existence an F algebra homomorphism T → F . Conversely, if the proposition holds, there is such a homomorphism, for example given by F [H] → F by evaluation at an element of H. We proceed to the proof, reverting to the notation of the proposition. Then g ⊗ F f defines an F algebra homomorphism p : and that the target is H simple, and hence so is the source. This tells us that the H equivariant F algebra homomorphism γ : If so, we have a contradiction to the minimality of N : N ∩ M is a proper subgroup of N and there is an F algebra homomorphism , and any such is in fact an algebra isomorphism. In particular, t is in its image and hence in T 0 . We conclude that the claim holds, and hence the proposition is proven. Proposition 1 implies that if E ⊃ F is a (possibly infinite) Picard-Vessiot extension with (pro)unipotent differential Galois group G, then its Picard-Vessiot ring F [G] = F ⊗ C[G] satisfies the hypotheses of Theorem 1. To apply the theorem to an extension of G by G a , we need to know that the extension in question is split as varieties (and not split as groups). All extensions of (pro)unipotents by (pro)unipotents are split as varieties: this fact seems to be well known, so we only sketch the proof. be an extension of a prounipotent group H by a prounipotent group K. Then the extension splits as varieties. Proof. When H and K are unipotent, so is G. Both G and H are isomorphic as varieties to their Lie algebras. A right linear inverse to the linear vector space projection Lie(G) → Lie(H) composed with these isomorphisms is a variety section. The same argument works for prounipotent groups, using the complete Lie algebras [5, 1.1 p.78]. 
There are (pro)variety isomorphisms of the groups to the complete Lie algebras, which are as additive groups products of copies of G a , and the surjection between them has a linear inverse because the kernel is a closed subspace. A linear action of a prounipotent group on a one-dimensional vector space is trivial, so a normal subgoup of a prounipotent group isomorphic to G a is central. Now we come to our main application. Theorem 2. Let E ⊃ F be a (possibly infinite) Picard-Vessiot extension with (pro)unipotent differential Galois group G. Let be a extension which does not split as algebraic groups over F . Then there is a Picard-Vessiot extension E 1 ⊃ F with differential Galois group G such that E 1 ⊃ E and such that the restriction map on differential Galois groups is the given map G → G. Proof. By Proposition 1 we may assume E is the quotient field of F [G] and so the latter satisfies the hypotheses of Theorem 1. By Proposition 2 the extension 1 → G a → G → G → 1 also satisfies the hypotheses of the theorem. Then E 1 = F (G) with the derivation of the theorem is the desired Picard-Vessiot extension. Theorem 2 applies of course when G is unipotent, and asserts that a solution of the embedding problem for extensions of unipotent groups by G a always exists. We can now conclude that the differential Galois group of the compositum of the unipotent extensions of F is projective. Theorem 3. Let U Π(F ) be the differential Galois group of the compositum F u of all Picard-Vessiot extensions of F with unipotent differential Galois group. Then U Π(F ) is a projective prounipotent group. Proof. By [5, Theorem 2.4, p.84], it suffices to show that for any unipotent group B and any extension of A of B by G a a homomorphism f : U Π(F ) → B can be lifted to A. Note that A is also unipotent. Let α : A → B be the projection. If f is not surjective, we can replace B with B ′ = f (U Π(F )) and A with A ′ = R u (α −1 (B ′ ) (the unipotent radical of the inverse image) to obtain an extension A ′ ≤ A of B ′ ≤ B by G a and a surjective homomorphism U Π(F ) → B ′ . If this homomorphism can be lifted to A ′ then the same homomorphism lifts f to A. As remarked above, if A ′ is a split extension of B ′ then the splitting produces the lift. Thus we can assume A ′ is a non-split extension of B ′ . We drop the "primes" and revert to the original notation. The surjection f means that we have a Picard-Vessiot extension E B of F with differential Galois group B and a non-split exact sequence 1 → G a → A → B → 1. By Theorem 2 there is a differential Galois extension E A ⊃ F with differential Galois group B (and hence a surjection U Π(F ) → B) and E B ⊃ E A (which implies that the surjection lifts f ). As noted in the introduction, for prounipotent groups projectives are free [ , the free prounipotent group U (I) on the subset I is universal with respect to the property that for unipotent groups U , set maps I → U with all but finitely many elements going to the identity extend uniquely to morphisms U (I) → U . By [5,Lemma 2.3,p. 84], the cardinality of the subset I is the same as the dimension of Hom(U (I), G a ). Thus Theorem 3 implies that U Π(F ) is free prounipotent as we record in Corollary 1. In Theorem 4 below we determine the cardinality of a set on which it is free. Corollary 1. Let F u be the compositum of all the unipotent Picard-Vessiot extensions of F . Then the differential Galois group U Π(F ) of F u over F is free prounipotent. 
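The two facts about free prounipotent groups quoted in the preceding paragraph, the universal property of U(I) and the count of generators via Hom(-, G_a) from [5], can be displayed as follows; this is only a restatement of the cited statements, in notation chosen here for the display.

```latex
% Universal property of the free prounipotent group U(I) on a set I:
\[
\text{for every unipotent } U \text{ and every map } \iota\colon I \to U
\text{ with } \iota(i)=1 \text{ for all but finitely many } i,
\]
\[
\exists!\;\hat{\iota}\colon U(I)\longrightarrow U
\quad\text{with}\quad \hat{\iota}\big|_{I}=\iota,
\qquad\text{and}\qquad
\operatorname{card}(I)\;=\;\dim_{C}\operatorname{Hom}\bigl(U(I),\mathbb{G}_a\bigr).
\]
```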
Theorem 2 shows that if there is a G a extension of a prounipotent differential Galois group over F , then the extension is realized as a differential Galois group. In [5,Theorem 2.9,p.87], it is shown that if a prounipotent group G is not free, then H 2 (G, C) = 0 (and conversely). This H 2 is derived functor cohomology in the category of rational G modules, which by [4] is also Hochschild cohomology (two cocycles modulo two coboundaries). Because of the fact, used above, that extensions of prounipotents by G a split as varieties, Hochschild cohomology corresponds to extensions [3,Proposition 2.3 p.190]. Thus all non-free prounipotent groups have non trivial extensions by G a , and conversely. We need to remark here that non trivial means non split as algebraic groups over C, whereas Theorem 2 refers to non split as algebraic groups over F . Actually the former implies the latter: for if H 2 (G, C) is non trivial, the same is true for extension to any algebraically closed field C over C. This follows from constructing an injective resolution of C as a G module whose terms are sums of the modules C[G]. Then tensoring this resolution over C with C, which is an exact functor, produces a resolution of C whose terms are sums of C[G], hence injective, and taking G invariants and homology commutes with tensoring as well, so that It is possible that F has no unipotent Picard-Vessiot extensions; for example, it may be Picard-Vessiot closed [7]. In this situation the free prounipotent group of Corollary 1 is the free prounipotent group on no generators. It is possible that F has a unique unipotent Picard-Vessiot extension with group G a , which is the free prounipotent group on one generator; this happens for F = C. In both these cases, the number of generators of the free prounipotent group is the dimension of F/D(F ) as a C vector space. As we now show, this happens in general. We set the following notation: Notation 3. Let U Π = U Π(F ) denote the differential Galois group of F u over F . Let {x α ∈ F |α ∈ A} be such that their images in F/D(F ) are a basis over C. Let y α ∈ F u be such that D(y α ) = x α . Let G α be the differential Galois group of F (y α ) over F . Note that G α is isomorphic to G a . For the following lemmas, which describe some of the properties of the x α 's and y α 's, we recall some basic properties of elements of Picard-Vessiot extensions whose derivatives are in the base. Let E ⊇ F be a Picard-Vessiot extension and let z ∈ E with D(z) ∈ F . For any differential automorphisms σ of E over F , D(σ(z) − z) = σ(D(z)) − D(z) = 0, since D(z) ∈ F and σ is trivial on F . So σ(z) − z ∈ C. Suppose z i , 1 ≤ i ≤ n are elements of E with D(z i ) ∈ F for all i. Then the previous observation shows that the subfield E 0 = F (z 1 , . . . , z n ) of E is stable under every differential automorphism of E over F , which makes E 0 ⊇ F a Picard-Vessiot extension. Call its differential Galois group H. Since the z i generate E 0 over F , the map φ : H → C n by τ → (τ (z 1 ) − z 1 , . . . , τ (z n ) − z n ) is injective. It is also an algebraic group homomorphism, so H is a commutative unipotent group. Suppose further that z 1 , . . . , z n are algebraically independent over F . Then for any (c 1 , . . . c n ) ∈ C n we can define an F automorphism of F (z 1 , . . . , z n ) by z i → z i + c i , 1 ≤ i ≤ n. Since D(z i + c i ) = D(z i ), this automorphism is differential and its image under φ is (c 1 , . . . , c n ). Thus φ is surjective and hence φ is an isomorphism. Lemma 1. 
Let E be a Picard-Vessiot extension of F with Galois group H. Suppose z i ∈ E, 1 ≤ i ≤ n are algebraically independent over F such that D(z i ) ∈ F for 1 ≤ i ≤ n ; and Let y ∈ E with D(y) ∈ F . Then there are c 1 , . . . , c n ∈ C and f ∈ F such that Proof. As noted above, φ : H → C n by τ → (τ (z 1 ) − z 1 , . . . , τ (z n ) − z n ) is an isomorphism. For each i, choose σ i ∈ H such that φ(σ) is the n-tuple with 1 in position i and 0 elsewhere. Then the σ i 's generate H as an algebraic group. Let y ∈ E with D(y) ∈ F . Then, as noted above, for each i, σ i (y) − y ∈ C, say σ i (y) = y + c i . Let g = n 1 c j z j . Then σ i (g) = g + c i for each i, so σ i (y − g) = y − g for all i. Since the σ i 's generate H, τ (y − g) = y − g for all τ ∈ H, which implies that y − g = f ∈ F and hence y = f + n 1 c j z j . Lemma 2. The elements y α , α ∈ A, are algebraically independent over F . Proof. Let y 1 , . . . , y m be a subset of the y α 's. For any 1 ≤ k ≤ m the subfield E k = F (y 1 , . . . , y k ) of F u is a Picard-Vessiot extension. Let G k be the differential Galois group of E k over F . If σ ∈ G k then σ → (σ(y 1 ) − y, . . . , σ(y k ) − y k ) defines a homomorphism φ k : G k → G k a . We use induction on m to prove that φ m is an isomorphism and that the elements y i , . . . , y m are algebraically independent over F . The case m = 1 is trivial. So suppose the result holds for m = k−1 and consider the case m = k. Consider E k ⊃ E k−1 . These are Picard-Vessiot extensions of F with differential Galois groups G k and G k−1 respectively, G k−1 is a product of k−1 copies of G a by the induction hypothesis and G k is a subgroup of k copies of G a . Since ψ : G k → G k−1 is onto, if G k is not k copies ψ is an isomorphism and E k = E k−1 . In particular, y k ∈ E k−1 . Since D(y k ) = x k ∈ F , we conclude from Lemma 1 that y k = f + k−1 1 c i y i with c i ∈ C and f ∈ F . Apply D to this equation. We find x k = D(f ) + k−1 1 c i x i , which is a relation of linear dependence of x 1 , . . . , x k modulo D(F ). This contradiction implies that G k is actually a product of k copies of G a . The transcendence degree of E k over F is equal to that of the function field F (G k ), and since we now know this latter is k, we know the transcendence degree of E k = F (y 1 , . . . , y k ) over F is k, so y 1 , . . . , y k are algebraically independent over F . We next consider the subfield F ab u = F ({y α |α ∈ A}) of F u generated over F the y α 's. (The notation will be explained below). For σ ∈ U Π, we know that σ(y α ) − y α ∈ C, so that F ab u is a (possibly infinite) Picard-Vessiot extension of F . Let H be the (pro)algebraic differential Galois group of F ab u over F . Just as in the case with finitely many generators, the map φ : H → A C which sends σ to the tuple whose α entry is σ(y α ) − y α is injective since the y α generate E. Since the y α 's are algebraically independent over F (Lemma 2), given any A tuple of elements of C we can define an element of H as the automorphism which sends y α to y α plus the α th entry of the tuple, which shows that φ is also surjective and thus an isomorphism. Prounipotent group homomorphisms H = A C → G a are continuous in the product topology. We note for use below that Hom(H, G a ) = Hom( A C, G a ) (prounipotent group homomorphisms) is a C vector space of dimension the cardinality of A. Thus H is an abelian prounipotent quotient of U Π. We will see it is the maximum such: Lemma 3. 
Let E ⊆ F u be a (possibly infinite) Picard-Vessiot extension of F inside F u such that the differential Galois group M of E over F is abelian. Then E ⊆ F ab u . Consequently, the differential Galois group of F ab u over F is the maximal abelian quotient U Π ab of U Π. Moreover, Hom(U Π, G a ) = Hom(U Π ab , G a ) is a C vector space of dimension the cardinality of A. Proof. M is a product of copies of G a indexed by some set S. For s ∈ S, let p s : M → C be the projection on the s th factor; so p s ∈ C[M ]. For m ∈ M , m · p s = p s + p s (m). Since C[M ] is a polynomial ring over C in the p s 's, F [M ] is polynomial ring over F in the p s 's, and with the same M action. By Proposition 1 the Picard-Vessiot ring R of E is isomorphic to F [M ] as an F algebra and as a left M module. Let z s ∈ R correspond to p s . We will see that z s ∈ F ab u for all s ∈ S, which will prove the first assertion. So fix s and let z = z s and p = p s . As in Notation 1, M acts on the left on C[M ] by acting on the right on M , so for m ∈ M , m · z = z + p(m) with p(m) ∈ C. Then m · D(z) = D(m · z) = D(z) + D(p(m)) = D(z) for all m, so D(z) ∈ E M = F . Let x = D(z). Then x can be expressed, modulo D(F ), as a C linear combination of x α 's, where the α's in question come from a finite subset F of A. Renumbering, we write x = n k=1 c k x k + D(w), with z ∈ F , where x k is as in Notation 3, with k replacing α. Then D(z − c k y k − w) = 0, where y k is as in Notation 3, with k replacing α, so z = ( c k y k ) + w + c, where c is a constant. In particular, z s = z ∈ F (y 1 , . . . , y n ) so E ⊆ F (y 1 , . . . , y n ) ⊆ F ab u . Thus E ⊆ F ab u . Let K be the kernel of U Π → U Π ab and let L be the kernel of the restriction of U Π to F ab u . Let E be the fixed field of K. Since U Π/K is abelian, by the above E ⊆ F ab u , which means that L ≤ K. On the other hand, since U Π/L is abelian, K ≤ L. Thus K = L and the differential Galois group of F ab u over F is the maximal abelian quotient U Π ab of U Π, proving the second assertion. For the final statement, we note that, in the notation of the discussion preceding the Lemma, the differential Galois group H of F ab u over F has the property that Hom(H, G a ) is a C vector space of dimension the cardinality of A. Lemma 3 along with [5,Prop. 2.8,p.86] imply that the group U Π, which we know to be free prounipotent, is free prounipotent on a set of cardinality that of A. We conclude by recording these results: Theorem 4. Let F u be the compositum of all the unipotent Picard-Vessiot extensions of F . Then the differential Galois group U Π(F ) of F u over F is free prounipotent on a set of cardinality equal to the C vector space dimension of F/D(F ). Proof. The only thing remaining to be observed is the cardinality assertion. We have recalled above that for a free prounipotent group U on a set I the cardinality of I is the C vector space dimension of Hom(U, G a ) [5, Lemma 2.3, p. 84]. For U = U Π, by Lemma 3 Hom(U Π, G a ) has C dimension the cardinality of A. If dim C (F/D(F )) is infinite, then U Π is free prounipotent on infinitely many generators. It follows that any unipotent group U is a homomorphic image of U Π. If K ≤ U Π is the kernel of the surjection U Π → U then E = F K u is a Picard-Vessiot extension of F with G(E/F ) ∼ = H. So we conclude the following about the unipotent Inverse Problem for F : is an infinite dimensional C vector space, then every unipotent algebraic group over C occurs as a differential Galois group over F . 
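Gathering the maps from Notation 3, Lemma 2, and Lemma 3 in one display, where A is the index set of Notation 3, so that the x_alpha form a C-basis of F/D(F) and D(y_alpha) = x_alpha; nothing beyond the statements already proved above is asserted.

```latex
% The differential Galois group H of F_u^{ab} over F, and the generator count:
\[
\varphi\colon H \;\xrightarrow{\;\sim\;}\; \prod_{\alpha\in A} C,
\qquad
\sigma \;\longmapsto\; \bigl(\sigma(y_\alpha)-y_\alpha\bigr)_{\alpha\in A},
\]
\[
\operatorname{Hom}\bigl(U\Pi,\mathbb{G}_a\bigr)
\;=\;\operatorname{Hom}\bigl(U\Pi^{\mathrm{ab}},\mathbb{G}_a\bigr),
\qquad
\dim_{C}\operatorname{Hom}\bigl(U\Pi,\mathbb{G}_a\bigr)
\;=\;\dim_{C}\bigl(F/D(F)\bigr)\;=\;\operatorname{card}(A).
\]
```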
Projective and Free Prounipotent Groups We have used throughout the result [5, Prop. 2.8, p. 86] that projective prounipotent groups are free. A fortiori, a prounipotent group which is projective as a proalgebraic group is free. And it is elementary to see that a free prounipotent group is projective. We record these observations: Proposition 3. Let U be a prounipotent group. Then the following are equivalent: (1) U is a free prounipotent group (2) U is projective as a prounipotent group (3) U is projective as a proalgebraic group. Proof. Let U = U (I) be a free prounipotent group on the set I. In [5], prounipotent groups which are projective as prounipotent groups are said to have the lifting property. By [5,Theorem 2.4,p.84], to verify the lifting property it suffices to show that for any surjection α : A → B of unipotent groups with kernel G a and any morphism f : U (I) → B there is a morphism g : U (I) → A with f = α • g. By [5, Proposition 2.2, p. 83], the set I 0 = {i ∈ I|f (i) = 1} is finite. For each i ∈ I 0 , choose x i ∈ A such that α(x i ) = f (i). By [5, Proposition 2.2, p. 83] again, there is a homomorphism g : U (I) → A such that g(i) = x i for i ∈ I 0 and g(i) = 1 for i / ∈ I 0 . Since α(g(i)) = f (i) for i ∈ I, again by [5, Proposition 2.2, p. 83] f = α • g. Thus U (I) is projective as a prounipotent group. Suppose U is a projective prounipotent group. To show that U is projective as a proalgebraic group, by [2, Proposition 4, p. 30] we need to show that if α : A → B is a surjection of algebraic groups and f : U → B is a morphism then there is a morphism φ : U → A with f = α • φ. Since U is prounipotent, f (U ) is a unipotent subgroup of B. Since α is surjective, its restriction to the unipotent radical R u (α −1 (f (U )) is surjective to f (U ). Since U is assumed projective in the category of prounipotent groups, there is φ 0 : U → R u (α −1 (f (U ) such that f = α • φ 0 i. Then φ 0 composed with the inclusion of R u (α −1 (f (U ) into A is the desired φ. Thus U is projective as a proalgebraic group. Suppose U is projective as a proalgebraic group. Then it is a fortiori projective as a prounipotent group, which means it has the lifting property of [5], and hence, as noted above, by [5, Proposition 2.8, p.86] is free prounipotent. In [5,Theorem 2.4,p. 84], it is shown that the property of a prounipotent group being projective with respect to all short exact sequences of unipotent groups is equivalent to the property of being projective with respect to those sequences which have G a kernel. As a corollary of the proof of Theorem 3 we can slightly strengthen that result. Corollary 3. A prounipotent group U is free if and only if for every non-split surjective homomorphism α : A → B of unipotent groups with kernel K isomorphic to G a and for every surjective homomorphism f : U → B of prounipotent groups there is a surjective homomorphism φ : U → A of prounipotent groups such that f = α • φ. Proof. [5,Theorem 2.4,p. 84] shows that U is projective in the category of prounipotent groups (and hence free) provided there exists a φ for all α and f as in the corollary without the restrictions that α be non-split and f be surjective. Apply the argument in the proof of Theorem 3 with U replacing U Π(F ) to reduce to the cases where α is non-split and f is surjective.
2020-01-28T02:00:40.677Z
2020-01-26T00:00:00.000
{ "year": 2020, "sha1": "3b8e4b0e44cb276bbeda8767ba99b5c1fbf73725", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2001.09506", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b800c6afd51cd2c93807c427a04af78de0b667bd", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
99150252
pes2o/s2orc
v3-fos-license
Bioactive titaminates from molecular layer deposition

Organic–inorganic hybrid materials are an emerging class of materials suitable for deposition by the molecular layer deposition (MLD) technique. Their toolbox is now expanded to include linkers of amino acids, which when combined with titanium form materials that can be termed titaminates, based on the amine and carboxylates present in the amino acids being used as linkers. This is a class of compounds with high potential as bioactive materials, containing essential amino acids and biocompatible titanium. The films have been prepared by combining titanium tetra-isopropoxide (TTIP) with glycine and L-aspartic acid. L-Arginine has also been used, however, without success. Hybrid films of TTIP and succinic acid were also investigated as a comparison to L-aspartic acid due to their structural similarities. All systems show self-limiting growth with a reduction in the growth rate with increasing temperature. The as-deposited films are amorphous, have low surface roughness, exhibit a hydrophilic nature as measured by a goniometer, and bear indications of some porosity towards water. Films based on glycine and L-aspartic acid have been used as substrates for growth of epithelial cells (rat goblet cells) where their proliferation has been monitored. The cell proliferation was significantly increased on these substrates compared to uncoated coverslips.

Introduction

In 1994, the first complexes between titanium and amino acids were reported.1 More precisely, these were titanocene complexes with a selection of α-amino acids. Later, other titanium amino acid complexes have also been suggested, either as new stable structures or as precursors for TiO2 synthesis.2 Despite their potential as biocompatible materials, they have to our knowledge not been reported as film coatings. The possibility for designing surface interactions with the environment is becoming an increasing reality. Controlling surface interactions is one of the key steps in controlling the bioactivity of solid surfaces, both to enhance and to retard surface-cell interactions. To build such structures, it is beneficial to be able to design at the molecular level. This can be achieved through the molecular layer deposition (MLD) technique, where the structure is built through self-limiting reactions between a precursor in the gas phase and active sites on the surface.3 MLD is a special case of the atomic layer deposition (ALD) technique, where the embedded building units are larger molecules, such as amino acids. The precursors are introduced onto a substrate in a sequential manner, separated by purging steps to remove unreacted precursors. By doing this, gas phase reactions are avoided and uniform coverage even on complex three-dimensional substrates can be achieved.[5][6][7][8] Like ALD, MLD requires precursors with sufficient vapor pressure, reactivity and stability against decomposition at sublimation and deposition temperatures.3 This can be challenging when amino acids are applied as precursors, however, still possible, as will be shown in this paper.

ALD was primarily designed for deposition of inorganic compounds in the late 1970s.9 Since then, the range of applicable precursors by this technique has expanded continuously.[11][12] The major motivation for deposition of hybrid materials by MLD has so far been as gas permeation barriers and for its presumed flexibility.13,14 Moreover, it enables modification of optical and electrical properties of deposited films in a gradual manner.15
Titanicones are an example of hybrid materials prepared by the MLD technique, constructed using functional alcohols and titanium precursors. Titanicones have possible applications in fabricating flexible multilayer gas diffusion barriers and thin solar films.13 We here introduce a novel group of organic-inorganic hybrid material that we have termed titaminates, based on the amine and carboxylates present in the amino acids used as linkers.

Design of bioinert and bioactive surfaces is a huge field where ALD/MLD has not yet been extensively applied. In medical applications, especially for bone implants, surfaces that do not cause inflammatory reactions can form stable connective tissue around the implant and facilitate healing processes. The most important factors in designing for biocompatibility are control of the surface wetting properties of the film, in addition to surface charge, type of functional groups, and surface topography.16 Cell-surface interactions are mediated through specific proteins that bind to specific amino acid sequences in an extracellular matrix in our body, such as RGD.16 RGD is a tripeptide composed of the three amino acids L-arginine, glycine and L-aspartic acid (Fig. 1). Because RGD is proven to be highly effective at promoting the attachment of numerous cell types to a plethora of diverse materials, these three amino acids were selected as the organic building blocks for our MLD processes.17 Among inorganic materials, titanium-based coatings are recognized as compounds that support bone forming cells due to protein absorption and platelet activation with subsequent release of growth factors.16 Titanium is considered one of the most biocompatible elements,18 suitable as a basis for MLD growth.

In this study, we expand the MLD toolbox to include essential α-amino acids using titanium as the metal center. We have chosen titanium tetra-isopropoxide (TTIP) as the titanium source, avoiding the corrosive HCl by-product formed when using the more available and reactive TiCl4 precursor. Low-temperature deposition using TTIP is prone to some carbon contamination in the resulting films, but this is not considered to be a problem for the current applications. This work is a contribution towards development and characterization of a novel type of hybrid materials with potential applications as bioactive surfaces. In addition, due to the structural similarity between aspartic acid and succinic acid (Fig. 1), hybrid films of Ti-succinic acid were also deposited for comparison. This should reveal effects of the additional amine group on bonding principles and of steric hindrance between the organic precursor and TTIP on the film growth rate.

The bioactivity of a selection of these hybrid films was studied by measuring the proliferation of an epithelial cell type, rat goblet cells, on glass coverslips with these novel coatings. Goblet cells in the conjunctiva are one of the main cell types that cover the ocular surface, and their main function is to produce mucins that support a stable and functional tear film. Any disturbance in the tear film components can cause ocular surface diseases such as dry eye disease.19 The proliferation rate of rat goblet cells is known to be sensitive to the substrate, and is hence a suitable marker for investigation of the bioactivity of different surfaces.
Experimental The lms were deposited in an F120-Sat reactor (ASM Microchemistry Ltd) using TTIP and glycine, L-arginine, L-aspartic acid and succinic acid as precursors.The different organic precursors are sketched in Fig. 1 together with their names as used in this paper.Nitrogen was used as carrier gas supplied at a total rate of 500 cm 3 min À1 from a Schmidelin-Sirocco-5 N 2 generator with a purity of 99.999% with respect to N 2 and Ar content.The lms were deposited on precleaned single crystal substrates cut from Si(100) wafers. The growth rate was measured as a function of deposition temperature for each TTIP + organic precursor pair.In total, the temperature range of 160-375 C was covered.Thickness and refractive index of the lms were measured by a J. A. Woollam alpha-SE spectroscopic ellipsometer at incident angle of 70 .The lms were assumed transparent and the data were tted using a Cauchy-model. Infrared spectra of the lms was obtained with a Fourier transform infrared (FTIR) transmission spectroscopy using a Bruker VERTEX 80 FTIR spectrometer.The instrument was equipped with a nitrogen purging system.An uncoated Si(100) substrate was used to collect the background. X-ray photoelectron spectroscopy (XPS) was performed using a Thermo Scientic Theta Probe Angle-Resolved XPS system.The energy was charge referenced to adventitious C 1s, C-C peak, at 284.8 eV.The instrument was equipped with a standard Al Ka source (hn ¼ 1486.6 eV), and the analysis chamber pressure was in the order of 10 À8 mbar.Pass energy values of 200 and 50 eV were used for survey spectra and detailed peak scans, respectively.Ti 2p, C 1s and O 1s were captured for all samples and N 1s was captured for L-aspartic acid and glycine. The density of the lms was measured by a Bruker AXS D8 advance lm diffractometer equipped with a LynxEye strip detector.The thin lm diffractometer had a Göbel mirror and a Ge(220) four bounce monochromator for X-ray reectivity (XRR) measurements. Atomic force microscopy (AFM) measurements were performed in noncontact mode using a Park XE70 and contact angle measurements were done by a ramé-hart contact angle goniometer and DROP image analysis program. The growth dynamics were monitored in situ by a quartz microbalance (QCM) using a Maxtek TM400 unit and a homemade crystal holder.The changes in resonance frequency of the crystals were linearly dependent on the mass of deposited lm, Fig. 1 The organic molecules used as precursors in the current study. which was in accordance with the Sauerbrey equation. 20The QCM data were further processes to increase statistics by averaging 16 consecutive cycles.The conversion from frequency to mass per area was performed by using internal standards throughout the deposition campaign of a material with known growth rate and density, as measured by XRR.In this manner, variations in surface area of the QCM crystal due to evolution of texture was calibrated. 
The bioactivity of the surfaces was studied using conjunctival tissue that was removed from both eyes of male Sprague-Dawley rats. All removal of tissue and subsequent manipulations in this study conformed to the guidelines established by the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and was approved by the Schepens Eye Research Institute Animal Care and Use Committee. Pieces of minced conjunctiva (explants) were cultured in RPMI 1640 medium (BioWhittaker, Walkersville, MD) supplemented with 10% fetal bovine serum (FBS) (HyClone Laboratories, Logan, UT), 2 mM glutamine (Lonza, Walkersville, MD), and 100 mg ml⁻¹ penicillin-streptomycin (BioWhittaker). Goblet cells were allowed to grow out from the explant for one week, after which the explant was removed from the culture. The grown goblet cells were then detached by trypsin and sub-cultured for 4 days on amino acid coated substrates in the culture medium described above.

To quantify the amount of viable cells in culture after 4 days, which is an indication of cell proliferation and survival, the alamarBlue® (Invitrogen) colorimetric indicator was used. This is a nontoxic reagent that does not affect cell viability or proliferation. The active component in alamarBlue, the non-fluorescent resazurin, is reduced to the fluorescent resorufin inside viable cells. Thus, this assay measures the amount of live cells in culture. The goblet cells were seeded on the substrates in 24-well plates. The cells were incubated in culture medium with 10% (v/v) alamarBlue solution for 4 hours before the cultures had become completely confluent. After incubation, the culture medium was transferred to 96-well plates, and the amount of fluorescence from resorufin was measured at 560 nm excitation and 590 nm emission wavelength, using a Synergy Mx fluorescence plate reader (BioTek). Three independent experiments were done in triplicate.

Results

The sublimation temperature of the organic precursors was initially tested in a home-built unit where the temperature of a small amount of the precursor was increased at a rate of ca. 2 °C per minute under vacuum, while recording any visual change by a camera and any deposition on a nearby QCM sensor by logging its resonance frequency. The results from this test are given in Table 1 together with the values that proved suitable for the later MLD depositions.

The visual imaging of the precursors showed complete sublimation for all of the precursors except for arginine. Although the QCM analysis showed an abrupt increase in mass at 200 °C for arginine, the precursor melted and gave signs indicating thermal decomposition. Further XPS analysis of films based on arginine showed presence of nitrogen in the film. However, the low thermal stability of this precursor raised questions about whether the nitrogen in the film originated from L-arginine itself or from thermal decomposition products of this precursor. Arginine was therefore not used in the further experiments due to uncertainty related to its stability.

The growth dynamics for MLD of the individual systems were investigated by QCM to obtain information on suitable pulse and purge parameters.
An overview of the growth dynamics for long pulses and purges is given in Fig. 2, and the results from a systematic variation of the pulse and purge times are given in ESI 1–3.† The deposition conditions and the obtained pulse and purge times extracted from this investigation are given in Table 1. All three investigated systems follow near self-limiting growth, where excess precursors and by-products are removed during the following purge step.

The growth rate as a function of deposition temperature of the individual systems (Fig. 3) was further mapped based on the pulsing parameters given in Table 1. Glycine and L-aspartic acid showed relatively constant, albeit somewhat declining, growth rates over a large temperature range. The growth rate of succinic acid was more dependent on the deposition temperature, with a maximum at 180 °C. The evolution in refractive index as measured by ellipsometry, and density as measured by XRR, as a function of deposition temperature is given in Fig. 4a and b. The index of refraction increased on an overall basis nearly linearly with deposition temperature, from 1.65 at 155 °C to 2.15 at 375 °C. For comparison, the index of refraction of anatase TiO2 films deposited at 225 °C is 2.36.

The density of the films based on glycine and L-aspartic acid was relatively low and almost unaffected by the deposition temperature. The density of the films based on succinic acid increased somewhat with temperature (3.2–3.5 g cm⁻³) and was notably higher than what one would expect considering that the density of anatase TiO2 is 3.78 g cm⁻³. The results from the XRR analysis are given in ESI 4.†

By comparing the growth of the different systems as given by the QCM analysis, it is evident that the mass increase during the TTIP pulse is highest for the succinic acid system, being the only system without amines. None of the systems shows a significant mass increase during pulsing of the organic precursor. This is in accordance with a ligand exchange of the rather heavy isopropanol groups with the organic precursors, giving a low net mass increase.

FTIR analysis of the hybrid films provides information on the presence of the organic moieties in the films and the resulting bonding modes, Fig. 5. For the amino acids, a weak asymmetric NH3+ bending band near 1660–1610 cm⁻¹ and a fairly strong symmetric NH3+ bending band near 1550–1485 cm⁻¹ can be detected. The carboxylate group in amino acids absorbs strongly near 1600–1590 cm⁻¹ and weakly near 1400 cm⁻¹, which corresponds respectively to asymmetric and symmetric stretching of COO⁻. In addition, amino acids show a broad NH3+ band in the 3100–2600 cm⁻¹ region. The absorption bands of carboxylate ions are very close to the absorption bands of NH3+ in amino acids.21,22 Carboxylate ions may coordinate to metals in three modes, Fig. 6. The width of the frequency splitting (Δ) between the asymmetric and symmetric stretching bands of the carboxylate ion determines the type of bonding mode. A splitting in the range 50–150 cm⁻¹ is typical for bidentate complexes, for unidentate complexes Δ > 200 cm⁻¹, and bridging complexes have Δ between 130 and 200 cm⁻¹.23
Based on this, the reaction between TTIP and succinic acid appears to be of bidentate type (Δ ≈ 90 cm⁻¹), while the bond type is more uncertain for the amino acid systems since the absorption bands for the asymmetric carboxylate ion and the symmetric and asymmetric NH3+ are very close. This results in a broad absorption band hampering its analysis. Despite this, the overall appearance favours the interpretation of a larger split, supporting bridging complexes. All three systems show a weak absorption band around 1290 cm⁻¹, which corresponds to C-O stretching of carboxylic acid. The broad band in the 3400–2800 cm⁻¹ area is due to the presence of OH⁻ groups and can cover the bands from weaker C-H and NH3+ signals in amino acids.

The OH stretching mode is absent in the FTIR spectra of the L-aspartic acid and succinic acid films and is very weak and broad for the glycine hybrid films. This indicates that almost all OH⁻ groups react with TTIP. The broad absorption band around 3300 cm⁻¹ for the glycine film may be due to the broad absorption band of NH3+, which is close to the OH absorption band area. It can also result from water absorbed in the film during exposure to air.

Surface topography of films deposited on Si(100) in the temperature range of 200–275 °C was measured by AFM (Fig. 7). All three systems exhibit very low surface roughness (RMS of 0.3 nm, 0.2 nm, and 0.2 nm for glycine, L-aspartic acid and succinic acid, respectively) and almost no distinguishable surface features, in spite of the film thicknesses being above 20 nm.

XPS was performed for a qualitative investigation of the chemical state in the hybrid thin films based on aspartic acid, glycine and succinic acid, deposited at 250, 225, and 180 °C, respectively. The Ti 2p core level spectra (Fig. 8) are close to identical for all three sample types. The 459.3 eV binding energy of the Ti 2p3/2 peak corresponds to a Ti-O-type bonding scheme. This is confirmed by the 5.8 eV spin-orbit splitting and the charge transfer shake-up satellite corresponding to the O 2p eg → Ti 3d eg transition.25 No evidence of pure Ti-N bonding is observed, showing that no titanium atoms are coordinated only to nitrogen. The domination of the Ti-O character does not, however, rule out Ti-N bonding.

To observe a possible Ti-N coordination, the N 1s peak was captured for aspartic acid and glycine. The N 1s peak for aspartic acid was sharp and symmetric, at a binding energy corresponding to -NH2 only. This points towards hybrid growth on the carboxylic acid groups of the organic molecule only. This type of bonding scheme is not as probable for glycine, which only has one carboxylic group. The N 1s peak of the glycine hybrid is not as symmetric, and a second component is needed to fit the total recorded peak (Fig. 9). This second component has an energy corresponding well with reported energies of O-Ti-N-type bonding, and points towards at least some degree of O-Ti-N linking in the glycine hybrid films.26

A quantitative study of the composition of the hybrid films proved difficult due to surface species of carbon and oxygen. However, approximate 3:1 and 2:1 Ti:N ratios for the aspartic acid and glycine hybrids, respectively, can be explained by the above-mentioned bonding models. This indicates an excess of Ti as compared to the ideal stoichiometry. Unfortunately, it was not possible to obtain trustworthy measurements of the Ti:O stoichiometry due to oxygen surface species. Attempts at etching the top layer to remove surface species were not successful, as the hybrid films were very prone to selective etching.
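The Δ-splitting rule quoted above translates directly into a small decision helper. The sketch below merely encodes the ranges as stated in the text (50–150 cm⁻¹ bidentate, 130–200 cm⁻¹ bridging, Δ > 200 cm⁻¹ unidentate); because the bidentate and bridging windows overlap, ambiguous splits return both candidates. The band positions in the example are illustrative, chosen to reproduce the ~90 cm⁻¹ split reported for the succinic acid film.

```python
def carboxylate_bonding_mode(nu_asym: float, nu_sym: float) -> list[str]:
    """Suggest the metal-carboxylate bonding mode from the COO- stretching split.

    nu_asym, nu_sym: asymmetric and symmetric COO- stretching positions in cm^-1.
    Returns the candidate bonding modes according to the ranges quoted in the text.
    """
    delta = nu_asym - nu_sym
    modes = []
    if 50 <= delta <= 150:
        modes.append("bidentate")
    if 130 <= delta <= 200:
        modes.append("bridging")
    if delta > 200:
        modes.append("unidentate")
    return modes or ["outside the tabulated ranges"]

# Example with illustrative band positions giving a split of roughly 90 cm^-1
print(carboxylate_bonding_mode(1560.0, 1470.0))   # -> ['bidentate']
```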
An important factor that promotes cell attachment is surface wettability. This affects cell growth by determining what type of protein is adsorbed from solution. The surface wettability depends on surface charge, polarizability and the polarity of surface functional groups. Hydrophobic surfaces adsorb albumin, which is an abundant serum protein that does not promote cell attachment. Hydrophilic surfaces, on the other hand, adsorb cell-promoting proteins like fibronectin, which is a component of the extracellular matrix (ECM).27

The wettability of the surfaces was investigated by measuring the contact angle of water for different films using a goniometer. Hybrid films of glycine and L-aspartic acid were selected for contact angle measurements due to their potential application for cell growth purposes (Fig. 10). Both the glycine- and L-aspartic acid-based films were rather hydrophilic, with contact angles in the 23–33° range for films deposited at temperatures in the range 225–350 °C, and the contact angle did not change significantly with deposition temperature (Fig. 11). This hydrophilicity is highly suitable for cell growth purposes.

The growth dynamics of the individual systems were also investigated using an extra water pulse after the organic precursor. The QCM analysis shows a huge mass increase during the water pulse, followed by a similar mass loss during its purge, Fig. 12. This indicates that these films can have a sponge-like character in the presence of water, providing the basis for their high hydrophilicity. Even though the total mass signal increases when water is included, the overall growth rates are somewhat reduced as compared to Fig. 2. The succinic acid system is relatively unchanged, from 98 to 91 ng cm⁻² per cycle, while the L-aspartic acid and glycine systems change from 72 to 42 ng cm⁻² per cycle and from 65 to 50 ng cm⁻² per cycle, respectively. This may indicate that the relative content of the organic linkers is reduced during growth when water is introduced. Despite this, the present test was only meant to shed light on the stability of the hybrid materials during deposition and on their possible porosity.

The cell proliferation and viability results assayed with alamarBlue show that the amount of viable rat goblet cells on substrates coated with hybrid films of glycine (p < 0.01) or aspartic acid (p < 0.01) and TTIP is significantly higher than on coverslips without any coating (Fig. 13). This indicates that the films containing amino acids are highly biocompatible and suitable as scaffolds for cell growth.

Discussion

The three deposition systems show relatively similar growth dynamics, with a mass increase during pulsing of TTIP, Fig. 2. However, the aspartic acid system deviates somewhat in that the drop in mass during purging is less pronounced and there is an initial decrease in mass during reaction with aspartic acid, where both glycine and succinic acid show an increase in mass. This drop in mass is not reproduced in the short pulse experiments in ESI 2.† It should be noted that the short pulse experiments report the overall growth rate of a complete cycle, whereas Fig. 2 provides the in situ apparent variation in mass.
When comparing the growth of aspartic acid and succinic acid, the only difference is the presence of an additional amino group for the former. It is evident that this additional functionality reduces the overall growth rate as compared to the succinic acid system. This is despite the observation from the XPS analysis that the amino group does not take direct part in the bonding with titanium. Hence, we assume that the additional amino group leads to a lower packing density of aspartic acid on the surface than for succinic acid. This may also be one possible reason as to why we observe a reduction in mass during pulsing of aspartic acid, while succinic acid provides an increase in mass. The overall change in mass during exposure of the organic ligand is a sum of the mass lost from reacted isopropanol ligands and the mass gained from the organic ligand. For aspartic acid, it appears that more than two isopropanol groups must desorb for each aspartic acid reacted to obtain an overall reduction in mass, whereas the number is smaller for both glycine and succinic acid, leaving the possibility for inclusion of unreacted isopropyl groups in the film during growth. This may be one reason for the differences in overall growth rate, and a slow release of excess isopropanol may explain the loss in mass during both purging sequences. The great number of possible reaction paths and the influence of thermal effects make it challenging to conclude on the growth dynamics, and rather highlight the necessity for more thorough investigations.

The growth rates for all the investigated systems decrease with increasing deposition temperature. According to a recent study, the growth rate of TTIP and water is independent of deposition temperature between 190 and 240 °C, its ALD window. The ALD window is in fact a combination of precursor reactions and decomposition rates for TTIP. At elevated temperatures, pyrolytic decomposition of TTIP becomes dominant and increases the achievable growth rate.28 However, as can be seen in Fig. 3, at temperatures higher than 250 °C the growth rate for all three systems decreases with increasing deposition temperature, which indicates that pyrolytic decomposition of TTIP at elevated temperature does not have a significant role in the growth rate. A possible explanation for a reduction in growth rate can be that increased thermal motion of the adsorbed molecules at higher temperatures contributes to steric hindrance of growth.8

The increase in index of refraction with temperature for all systems can be related to an increase in products from pyrolytic decomposition of the TTIP precursor,28 leading to a larger content of TiO2 with increasing temperatures. Alternatively, an increased packing density of the organic ligands may occur with increased deposition temperature. However, the density of the deposited films does not increase at a rate similar to what one would expect based on these suggestions. The density of the succinic acid based films was surprisingly high when compared to the other systems and also compared with anatase TiO2 (3.78 g cm⁻³). We have been unable to explain the origin of this, but have verified that the films do contain carboxylic acids by FTIR and XPS, Fig. 5. In addition, this is also the film with the lowest index of refraction.

Succinic acid has earlier been used in MLD growth together with TMA (trimethyl aluminium) as a precursor pair. This system also shows a reduction in growth rate with increasing temperature, although in two steps:
a higher growth rate of ca. 0.6 nm per cycle until 200 °C and a lower rate of ca. 0.3 nm per cycle above 225 °C,7 proving overall a notably higher growth rate than when TTIP is used. As a comparison, the film density of the Al-succinic acid hybrid material made by Klepper et al. was reported to be below 2.0 g cm⁻³ as measured by XRR, proving a distinctly different nature of these films.

Conclusion

Thin films of organic-inorganic hybrid materials were successfully deposited using glycine, L-aspartic acid, and succinic acid along with TTIP. Attempts have been made using arginine as the organic linker; however, it gave signs of partial decomposition during evaporation. The systems show self-limiting growth with a reduction in growth rate with increasing deposition temperatures. FTIR and XPS analysis prove a hybrid nature of the films and that the organic linkers preferably react through their carboxylic acid functional groups, while some coordination to nitrogen is possible. The films are amorphous and hydrophilic as deposited, showing a notable bioactivity. Rat goblet cells, an epithelial cell type, can be successfully grown on these amino acid based surfaces while retaining high viability. The results confirm that the ALD/MLD technique can be used to build biocompatible substrates via molecular design and opens a field for future design of scaffolds for tissue engineering.

Fig. 2 Mass evolution during growth measured by QCM using TTIP and glycine (at 225 °C), L-aspartic acid (at 250 °C), or succinic acid (at 180 °C). The shaded area represents the statistical variation during 16 cycles.

Fig. 3 Film growth as a function of deposition temperature for TTIP and glycine (red squares), L-aspartic acid (blue triangles) and succinic acid (black circles).

Fig. 4 (a) Refractive index at 632.8 nm as measured by spectroscopic ellipsometry and (b) film density as measured by XRR as a function of substrate temperature for TTIP and glycine (red squares), L-aspartic acid (blue triangles) and succinic acid (black circles).

Fig. 5 FTIR spectra of hybrid films of (a) the amino acid systems with TTIP and (b) succinic acid and TTIP, for 76 nm glycine, 53.9 nm aspartic acid, and 154.6 nm succinic acid films deposited on Si(100) substrates at 225, 275, and 190 °C, respectively.

Fig. 6 Possible bonding modes for carboxylic acids towards metals, here represented using glycine.

Fig. 9 N 1s X-ray photoelectron spectroscopy. Recorded with 50 eV pass energy and a monochromatic Al Kα source.

Fig. 11 Contact angle between water and the Ti-L-aspartic acid and Ti-glycine hybrid films versus deposition temperature.

Fig. 12 Mass evolution during growth measured by QCM using TTIP and glycine (at 225 °C), L-aspartic acid (at 250 °C), or succinic acid (at 180 °C), and water. The shaded area represents the statistical variation during 16 cycles.

Fig. 13 The effect of hybrid films of Ti-glycine and Ti-L-aspartic acid on cell proliferation of rat goblet cells as evaluated with the alamarBlue assay. **P < 0.01, compared to coverslip (n = 3).

Table 1 Deposition conditions for QCM investigation and obtained pulse and purge parameters
2019-04-08T13:12:44.449Z
2017-04-10T00:00:00.000
{ "year": 2017, "sha1": "fa052e148a579e6ca664a953ba79ae9a71d18e42", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra01918a", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3a945f83ef3d619160c2663f9eb313204e04b83d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
244455720
pes2o/s2orc
v3-fos-license
Evaluation of functional outcome of metacarpal fractures managed by operative techniques: a hospital based study

Background: Metacarpal fractures comprise approximately 35.5% of cases in daily emergencies, mostly due to road traffic accidents, falls, and assault. The main goal of treatment is to achieve a strong bony union without any functional disability. The aim of the study was evaluation of the functional outcome of metacarpal fractures managed by operative techniques and comparison of the efficacy of different operative techniques. Methods: A total of 50 patients were managed by various operative techniques using K-wires, screws and plates. Functional outcome was assessed by using the total active movement (TAM) and disabilities of the arm, shoulder and hand (DASH) scoring systems at the 6th week, 3rd month and 6th month. Results: A total of 47 fractures united and three fractures malunited at the time of final assessment. Overall, excellent and good results were achieved in 94% of cases. Conclusions: Our findings suggest that open reduction and internal fixation with K-wire is the most preferable method among the operative techniques studied.

INTRODUCTION

The hand is one of the most important parts of the human body. Its anatomy is complex because it has multiple joints, and different tendons and ligaments are attached to it. 1 Hand fractures account for around 10% of all fractures reporting to the emergency room and outpatient department. Metacarpal fractures comprise between 18-44% of all hand fractures. 2 In most of the cases, metacarpal fractures are work related and commonly seen in the younger age group. 3 Proper management of metacarpal fractures is very important, as any complication can result in functional deformity. These days various fixation techniques are in use, like percutaneous pinning, cerclage wiring, plating, lag screws, tension band wires and external fixators. Out of these, K-wire fixation is a popular choice due to the simplicity of the procedure and the minimal soft tissue interference. 4 The functional outcome and result of the management of hand fractures is of predominant importance, rather than fracture healing being the only isolated goal. The purpose of our prospective study is to evaluate the functional outcome of metacarpal fractures which are managed by different operative techniques like K-wires, plates and screws.

Aims and objectives

Aims and objectives of the study were: to evaluate the functional outcome of metacarpal fractures managed by operative methods; and to compare the efficacy of different operative techniques using K-wires, screws and plates in the management of metacarpal fractures.

METHODS

All the cases of metacarpal fracture admitted under the department of orthopaedics in GMC, Jammu were included in the study. This prospective study was conducted over a period of 12 months, from November 2019 to October 2020. The data were analysed by using the statistical package for the social sciences (SPSS) software program. Written informed consent was taken from all the patients for their inclusion in the study.

Inclusion criteria

Patients with age >18 years, of both genders, and with trauma to the hand less than two weeks old were included in the study.

Exclusion criteria

Patients not giving consent, patients reporting two weeks after trauma to the hand, age <18 years, pathological fractures, and intra-articular fractures were excluded.
Operative management

Indications for operative management include: displaced irreducible fractures, shortening greater than 6 mm, angulation of 30-40 degrees in the small/ring finger or >10 degrees in the middle and index finger, malrotation, segmental fractures, and multiple metacarpal fractures.

Surgical techniques

Techniques like K-wire fixation, plate fixation, and screw fixation were used.

Post-operative care

Post-operative AP, lateral and oblique views were obtained to check the reduction and implant position.

Follow up and rehabilitation protocols

Passive as well as active finger movements were encouraged. Patients were recalled at 6 weeks to check for any kind of complications. Further post-operative visits were scheduled at the 3rd and 6th month. Functional outcome of the patient was evaluated by the total active movement (TAM) and disabilities of the arm, shoulder and hand (DASH) scoring systems. Functional evaluation of outcome was measured by the TAM scoring system at 6 weeks, at 3 months and at 6 months.

TAM scoring system

Union of the fracture (5 points) was assessed clinically as well as radiologically; 5 points are assigned for union and zero points for non-union. The functional status of the involved ray (finger) was assessed on the basis of the TAM of the individual digits. TAM is the sum total of the active flexion range at the metacarpophalangeal, proximal interphalangeal and distal interphalangeal joints in one digit (a minimal worked example of this computation is sketched below). The functional results were graded according to the following modification of the criteria laid down by the American Society for Surgery of the Hand (Table 1).

RESULTS

The present study includes 50 patients managed by different operative methods like K-wires, screws and plates. Out of 50 patients, 40 were males and 10 were females. The age of the patients involved in the study was above 18 years, and the maximum number of patients belonged to the age group 19-30 years (Figure 1). Males dominated the study (80% were male).

Figure 1: Age distribution of study participants.

In 56% of cases the right hand was involved. The most common mechanisms of injury were road traffic accidents and blunt trauma. Closed fractures (40 patients) were more common than open fractures (10 patients). The 5th metacarpal (24 cases) was most commonly involved. The most common fracture pattern was transverse, and the shaft of the metacarpal was the most commonly involved region. Out of 50 patients, 32 were treated with K-wires, 6 were treated with screws and 12 were treated with plates. Final assessment was done at 6 months by assessing the patient clinically as well as radiographically. 47 fractures were united and 3 fractures malunited at the time of assessment. Functional outcome of the patients was evaluated by using the TAM and DASH scoring systems (Tables 2 and 3). None of the patients in this study had any neurovascular injury. Only 3 patients had associated tendon injuries, which were simultaneously repaired. All the patients showed complete radiological union at 6 months. None of the patients in our study had any neurological injury. Superficial infection was seen in only three patients and was managed with antibiotics and dressings.

DISCUSSION

Metacarpal injuries are common and frequently encountered in the orthopedic outpatient department (OPD). These fractures should be treated with proper caution to prevent any kind of problem related to the normal function of the hand. In most of the cases these fractures are managed by conservative methods with plaster casting/slab.
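As a rough illustration of the TAM scoring described in the Methods above (not the study's actual worksheet), the sketch below sums the active flexion at the three joints of a digit to obtain TAM and grades the result as a percentage of the uninjured contralateral digit. The 5-point union bonus follows the text, but the percentage cut-offs are placeholders, since the exact criteria of Table 1 are not reproduced here.

```python
def total_active_movement(mcp_deg: float, pip_deg: float, dip_deg: float) -> float:
    """TAM = sum of active flexion at the MCP, PIP and DIP joints of one digit (degrees)."""
    return mcp_deg + pip_deg + dip_deg

def grade_tam(tam_injured: float, tam_contralateral: float, united: bool) -> tuple[str, int]:
    """Grade the result as a percentage of the normal (contralateral) digit.

    The percentage cut-offs below are illustrative placeholders, not the study's
    Table 1; the 5 points for clinical/radiological union follow the text.
    """
    union_points = 5 if united else 0
    pct = 100.0 * tam_injured / tam_contralateral
    if pct >= 85:
        grade = "excellent"
    elif pct >= 70:
        grade = "good"
    elif pct >= 50:
        grade = "fair"
    else:
        grade = "poor"
    return grade, union_points

# Example: 85 + 100 + 60 = 245 degrees against a 260-degree contralateral digit
tam = total_active_movement(85, 100, 60)
print(tam, grade_tam(tam, 260, united=True))
```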
In 1928, Lambotte first described the details regarding surgical treatment of fractures of the metacarpals and other fractures around the hand. 5 Early treatment options for metacarpal fractures were limited to closed reduction, and the results were usually unsatisfactory. Open reduction and internal fixation by different methods leads to satisfactory results. [6][7][8] The functional outcome of fractures of the small bones of the hand depends on the severity of injury and its management by a proper technique. 9 The functional outcome of the hand is more important than fracture healing alone. 10

Shehadi et al reported that return of up to 100% of total range of motion can be achieved in metacarpal fractures fixed with an external fixator. This mode of fixation is useful in compound metacarpal fractures with bone loss. However, the routine use of an external fixator is discouraged, as loosening of the construct following pin tract infection leads to loss of fixation, and there is difficulty in constructing and applying the fixator. 11 In a study of 21 metacarpal fractures, a J-shaped nail formed from a curved 2.0 mm diameter K-wire bent sharply at the proximal end was found to be useful in neck or transverse shaft fractures of the metacarpals without concomitant injuries such as severe soft tissue damage. 12 In a study of 52 consecutive closed, displaced, extra-articular metacarpal fractures, results of intramedullary nail (IMN) fixation were compared with those of plate screw (PS) fixation. No significant differences in clinical outcomes were found, but the incidence of loss of function, penetration into the metacarpophalangeal joint, and secondary surgery for hardware removal in the operating room was much higher in the IMN group. 13

Earlier, these metacarpal fractures were managed only conservatively, and surgical treatment was done only in unstable fracture cases because of limited treatment options. These days surgical treatment can be done with intramedullary K-wiring, transverse K-wiring, Bouquet techniques, cerclage, mini external fixators, screw fixation (lag principle), fixation with plate and screws, and many more. 14 A number of studies have been done in an effort to provide the optimal treatment option for unstable metacarpal fractures. The principal goal of the treatment is to achieve rigid bony union and to improve hand movement without any kind of stiffness. However, clear cut indications for conservative or surgical treatment of metacarpal fractures are not defined in the literature. Surgery is usually indicated for fractures that cause significant functional disability of the hand or a cosmetic issue. 15

K-wiring is a popular method among orthopaedic surgeons worldwide for metacarpal bone fractures, and using a K-wire has a benefit in that it can be used as a joystick to help in reducing the fracture intra-operatively; however, if the K-wire is not rigid enough it may lead to subsequent loss of reduction and may be complicated by pin tract infection or pin breakage. 16 Metacarpal screw fixation has been commonly used in the past, but it does not produce satisfactory results in long oblique fracture patterns or fractures with gross comminution. 17 Many studies have shown satisfactory results of metacarpal fractures managed by percutaneous K-wiring for fracture fixation. A study reported good functional outcomes for percutaneous pinning with no functional impairment in K-wire treated patients, and our study showed the same results. 21
Another study, by Lee et al, concluded that K-wires facilitate early hand mobilization, correct the deformity, and provide good clinical and radiographic outcomes. 22 As in our study, the functional outcome in terms of radiological union was better with plates and screws, but in terms of hand movement it was much better in K-wire treated patients. Another study, by Kelsch and Ulrich, showed that intramedullary K-wire fixation is generally believed to be the least invasive technique with maximum long-term function. 23

Our study highlights the fact that the average TAM score for metacarpal fractures was 225.4 degrees. Overall functional results in our study, based on the TAM criteria of the American Society for Surgery of the Hand, were excellent in 60%, good in 32%, fair in 4% and poor in only 4% of cases. The evidence of radiological union was better in patients treated with plates and screws as compared to K-wires. When we compared the patients treated with the different methods postoperatively, we found that the range of motion was better in patients treated with K-wires.

Limitations

Our study has a few limitations: the patients were not divided equally between the treatment modalities, the sample size was limited, and there were no specific criteria for selecting the type of surgical technique.

CONCLUSION

We concluded that surgical treatment of unstable metacarpal fractures with screws and plates is a less preferable method as compared to K-wire fixation. Open reduction and internal fixation with K-wire produces stable fixation and allows early mobilization of the hand. It is easily available, cost effective, requires less operative time and, finally, has a good functional outcome.
2021-11-21T16:23:31.510Z
2021-11-19T00:00:00.000
{ "year": 2021, "sha1": "231334cf77f734b6bdb790bb4b796a0d3e0ab962", "oa_license": null, "oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/10328/6862", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "87dcd58979bc8e99384024c0ecdfb517134330d7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
12305387
pes2o/s2orc
v3-fos-license
Ovarian cancer stem cells: Can targeted therapy lead to improved progression-free survival?

Despite significant effort and research funds, epithelial ovarian cancer remains a very deadly disease. There are no effective screening methods that discover early stage disease; the majority of patients are diagnosed with advanced disease. Treatment modalities consist primarily of radical debulking surgery followed by taxane and platinum-based chemotherapy. Newer therapies, including limited targeted agents and intraperitoneal delivery of chemotherapeutic drugs, have improved disease-free intervals but failed to yield long-lasting cures in most patients. Chemotherapeutic resistance, particularly in the recurrent setting, plagues the disease. Targeting the pathways and mechanisms behind the development of chemoresistance in ovarian cancer could lead to significant improvement in patient outcomes. In many malignancies, including blood and other solid tumors, there is a subgroup of tumor cells, separate from the bulk population, called cancer stem cells (CSCs). These CSCs are thought to be the cause of metastasis, recurrence and resistance. However, CSCs have been difficult to identify, isolate, and target. It is felt by many investigators that finding a putative ovarian CSC and a chemotherapeutic agent to target it could be the key to a cure for this deadly disease. This review will focus on recent advances in this arena and discuss some of the controversies surrounding the concept.

INTRODUCTION

It is estimated that over 14000 women in the United States will die with ovarian cancer and more than 22000 women will be newly diagnosed with the disease in 2013 [1]. Women with early stage disease often have vague symptoms such as bloating, back pain, and fatigue, leaving most women undiagnosed until later stages of the disease. Standard treatment of ovarian cancer consists of surgical resection of disease followed by taxane and platinum-based chemotherapy, which yields a partial response rate of greater than 80% and a complete response rate of 40%-60% in patients with advanced disease [2]. Although initial response rates are promising, the recurrence rate is approximately 70% and five-year survival is 45% in patients with advanced disease [3]. While it appears that the majority of ovarian cancer cells are initially chemosensitive, as evidenced by the high initial chemotherapy response rates, the high recurrence rates suggest development of chemoresistance. Some believe that a population of cells is not killed by chemotherapy, or that these cells repopulate after exposure to chemotherapeutic agents. These cells have been called ovarian cancer stem cells (CSCs).

CSCs

It has been theorized that CSCs exist in certain malignancies, particularly the blood cancers and basal-like breast cancer. For the blood cancers, identifying CSCs has been in progress since the first stem cells were identified [4]. In acute myeloid leukemia, CSCs have been proven to be immature, abnormally differentiated cells that have the ability to self-renew [5]. It is felt by some investigators that these CSCs exist to promote tumor growth and metastasize to other organs. They have an increased tumorigenicity and differentiating capacity compared to other cells. The majority of solid tumor cells may not have a differentiation capacity or the ability to develop chemoresistance, but offer support to angiogenesis or signaling pathways.
The CSCs (progenitor cells) are typically a small portion of the tumor and give rise to differentiated progeny that comprise the bulk of tumors (Figure 1), and are capable of unlimited growth [6,7]. CSC markers have been shown to be upregulated in cells growing in tumorspheres compared to single cells, suggesting that CSCs are enriched in this population. In ovarian cancer, this spheroid form of tumor cells is thought to be involved in the dissemination of cancer in the peritoneal cavity. This suggests that CSCs are involved in metastasis intra-abdominally. CSCs are generally thought to have the ability to self-renew, differentiate, and metastasize to form secondary and tertiary tumors [8]. It has been shown that primary treatment with chemotherapeutic agents results in increased drug-resistant CSCs, and this leads to recurrence [9]. Unlike some of the blood cancers, which have known normal stem cells, there is no known normal ovarian stem cell [6]. This obviously complicates the identification of specific ovarian CSCs. The majority of evidence in favor of ovarian CSCs comes from the identification of markers of "stemness" as identified in other malignancies. Still, many researchers are investigating the existence of specific ovarian CSCs.

OVARIAN CSCs

The isolation of ovarian CSCs is fraught with difficulty, like that of many other solid tumors. For isolation to occur, a single-cell suspension must be made from a solid tumor while sustaining viability. While there may be a large volume of tumor or ascites, the actual CSCs are a rare population of that tumor; unlike blood tumors, there is no specific marker for an ovarian CSC. The first model for this process was described by Bapat et al [10] in 2005. They collected ascites from a patient sample and were able to develop 19 immortalized tumor sphere-forming clones. Two of these were passaged into nude mice and grew into tumors that closely resembled the parental tumor. A single transformed clone was able to be isolated that demonstrated increased aggressiveness compared with the parent tumor. This experiment was some of the first evidence to show heterogeneous growth properties of tumor cell subpopulations in ovarian cancer. Also, these tumor cells demonstrated the ability to self-renew by continuing to form tumors even after serial transplantation.

CSC MARKERS

There is no specific ovarian CSC marker, and researchers have relied on markers of "stemness" identified from other malignancies. Some of these proteins used as CSC markers include CD44, CD133, CD117, ALDH1A1, and EpCAM (Table 1). There are many other proteins that have been used as markers of "stemness" but are not as well defined in ovarian cancer. Discovered as a marker for breast development and breast carcinoma, CD44 is a hyaluronate receptor [11] that is involved in cell-cell and cell-matrix interactions and ultimately affects cellular growth, differentiation, and motility [12,13]. Zhang et al [14] found that CD44+/CD117+ cells had increased chemoresistance to taxane and platinum-based chemotherapy as well as the ability to self-propagate. Similarly, Alvero and colleagues showed that CD44+ cells were enriched in ovarian cancer patient ascites and, once isolated and xenografted, gave rise to tumors with both CD44+ and CD44- cells, suggesting they can differentiate and self-renew [15].
Orian-Rousseau described various strategies to target the CD44 receptor, which included binding to hyaluronic acid and osteopontin, a protein involved in interleukin production and overexpressed in ovarian cancer, as well as contributing to receptor tyrosine kinase activation [16]. CD133 is a transmembrane glycoprotein that is expressed in normal hematopoietic and epithelial stem cells, and has also been described as a CSC marker in solid tumors. Ferrandina et al [17] showed that the amount of CD133 positive cells was higher in ovarian carcinoma than in normal ovarian tissue. In 2009, Baba and colleagues reported the ability of CD133+ cancer cells to generate both CD133+ and CD133- cells, similar to what Alvero had seen with CD44+ cells [18]. CD133 has also been shown to be involved in increased tumor formation, increased chemoresistance, and the ability to recapitulate the original heterogeneous tumor [19]. CD117, also known as c-kit or stem cell growth factor receptor, is a proto-oncogene encoded by the KIT gene. It is a type of tyrosine kinase receptor involved in cell signal transduction. It has been shown to be involved in apoptosis, cell differentiation, proliferation, and cell adhesion [20]. CD117 was shown by Kusumbe et al [21] to have high expression in ovarian cancer cells. Interestingly, cells expressing CD117 appear to be highly tumorigenic, as it only takes approximately 10^3 cells to be able to self-renew, differentiate, and regenerate tumor in mouse models [22]. The Wnt/β-catenin pathway, which has been implicated in the development of chemoresistance, is activated by CD117 [23]. ALDH1A1 is a member of the ALDH group of proteins, which contains 19 enzymes that function as cell protectors from carcinogenic aldehydes [24]. Landen et al [25] declared it a putative CSC marker and showed its association with chemoresistance in ovarian carcinoma. Cells that are double positive for CD133 and ALDH1A1 have a greater ability to develop tumors in mouse models as compared to CD133+/ALDH1A1- or ALDH1A1+/CD133- cells [26]. Recently, Shank et al [27] showed that metformin decreased the population of ALDH+ cells in ovarian cancer cell lines as well as decreased the formation of tumor spheres in patient tumors. In vivo, they also showed that metformin would restrict the growth of whole tumor cell line xenografts [27]. EpCAM (CD326) is a transmembrane glycoprotein involved in cell adhesion. EpCAM has been shown to have oncogenic signaling properties which result in cell proliferation and tumor formation [28]. Higher expression of EpCAM has also been seen in metastatic ovarian tumors [29], and it is involved in epithelial to mesenchymal transition leading to metastasis [30]. Another glycoprotein identified as an ovarian CSC marker is CD24, which is a cell membrane glycoprotein involved in cell adhesion. In 2005, the movement of CD24 from the cell membrane to the cytoplasm in borderline ovarian tumors was associated with microinvasion and omental implants as well as shorter survival time in adenocarcinoma of the ovary [31]. Moulla et al [32] also demonstrated that the transition from membrane to cytoplasmic CD24 expression was associated with a more aggressive phenotype in borderline tumors.

CLINICAL SIGNIFICANCE

While it is interesting to utilize proteins to identify CSCs in various tissues, the clinical significance of these markers is still being determined. In 2012, Meng and colleagues reported on CD44+/CD24- cells in ovarian cancer cell line studies and patient ascites samples.
Ovarian cancer cell line studies confirmed that increased numbers of CD44+ cells increased chemoresistance. Patient ascites samples with > 25% CD44+ cells had significantly decreased median progression-free survival (6 mo vs 18 mo, P = 0.01) as well as a higher propensity to recur (83% vs 14%, P = 0.003) [33]. Zhang and colleagues studied 400 ovarian cancer tissue samples for CD133 positivity. They found associations between CD133 positivity and higher grade ovarian tumors, advanced stage disease, and decreased response to chemotherapy. They also found that CD133+ tumors are associated with decreased overall survival (P = 0.007) and a shorter disease free interval (P < 0.001) [34]. In a study by Chau et al [23], three patient samples were evaluated in a xenograft mouse model, and increased chemoresistance was found in patients with CD117+ tumor cells. In 65 ovarian cancer patients with advanced stage disease, greater than 20% of ALDH1A1+ cells correlated with decreased progression-free survival (6 mo vs 14 mo, P = 0.035) [25]. Recently, Zhu et al [35] reported on overexpression of CD24 in epithelial ovarian cancer and found that it was an independent variable associated with a low survival rate, increased metastasis, and decreased survival time.

Recent studies have indicated an enriched population of CSCs in ovarian cancer patients with recurrent carcinoma as compared to patients with primary cancer. Rizzo et al [36] noted an increased percentage of side population cells (generally accepted to be CSCs) in the ascites of patients with first recurrence after platinum-based chemotherapy as compared to ascites of chemo-naive patients. Steg et al [37] reported that samples collected after primary therapy showed higher densities of ALDH1A1, CD44, and CD133 due to the death of the non-stem cells. Stem cell markers were also examined in this study, and 14% of recurrent tumors showed overexpression of these markers compared to primary tumors.

TARGETING OF OVARIAN CSCs

Stem cell markers have been implicated in chemoresistance and recurrence of ovarian cancer; therefore, it is reasonable to evaluate agents that could target these cells. CD44 has been studied in phase I trials in head and neck cancer via an antibody drug conjugate, BIWI 1 [38]. There have also been several monoclonal antibodies designed to target CD44 in squamous cell cancers which could be extrapolated to adenocarcinomas [39]. CD44+ cells have been targeted in an intraperitoneal (IP) mouse model with cisplatin via a conjugate of hyaluronic acid and cisplatin, which was internalized more efficiently by CD44+ cells in ovarian cancer cell lines (A2780 and OV2008). Li and Howell [40] also demonstrated decreased growth in IP inoculated A2780 ovarian cancer cells treated with a hyaluronic acid-cisplatin conjugate when compared to free cisplatin. A hyaluronic acid-paclitaxel (HA-TXL) conjugate to target CD44+ cancer cells has also been studied in an IP mouse model with ovarian cancer cell lines (SKOV3ip1 or HeyA8) and showed significantly reduced tumor weights and nodules [41]. Similarly, CD133 has been targeted by IP administration of an anti-CD133 targeted toxin (dCD133KDEL) in an ovarian cancer cell line (NIH:OVCAR5-luc) in a mouse model, which resulted in a significant decrease in progression of CD133 expressing tumors [42]. Noguera et al [43] evaluated imatinib mesylate, a CD117 specific inhibitor, in low grade recurrent platinum resistant tumors of the ovary in a single site phase II trial. Thirteen patients were enrolled, and 48% of those had c-kit positive tumors. Eleven patients were eligible for evaluation of response, and though well-tolerated, no antitumor activity was seen in these low-grade tumors [43]. An anti-EpCAM monoclonal antibody, catumaxomab, was evaluated in a phase II/III trial in 258 patients with malignant ascites from epithelial cancer, half of which were ovarian carcinomas. When compared to paracentesis alone for treatment of ascites, the addition of catumaxomab increased the median time to next paracentesis (11 d vs 77 d, P < 0.0001). Patients who received catumaxomab also had decreased signs and symptoms of ascites. The safety profile was acceptable [44]. Catumaxomab was evaluated in conjunction with steroid premedication (Catumaxomab Safety Phase IIIb Study with Intraperitoneal Infusion in Patients with Malignant Ascites Due to Epithelial Cancer) as well as in retreatment with IP therapy (SECIMAS), but results from these studies have not yet been posted (www.clinicaltrials.gov). It is also being evaluated in combination with cytotoxic chemotherapy in a phase II trial [ENGOT-ov8] [45].

Another method of targeting CSCs is to target their signaling pathways, which include the Notch, Wnt/β-catenin, TGF-β, and Hedgehog pathways. McAuliffe and colleagues demonstrated this concept with the Notch pathway and platinum resistant ovarian cancer [46]. In particular they looked at Notch3, and showed that it was overexpressed in ovarian CSCs and was correlated with increased platinum resistance. A pan-Notch inhibitor, a gamma-secretase inhibitor (GSI), when used in combination with cisplatin, had a synergistic cytotoxic effect and led to decreased numbers of CSCs (12.8% side population cells in the control, 2.31% with Notch inhibitor alone, and 0.81% with GSI and cisplatin). A Notch ligand, Jagged1, was targeted in taxane-resistant ovarian cancer cell lines by Steg et al [47]. They showed that targeting Jagged1 induced chemosensitivity to docetaxel in vivo and reduced tumor weights. They implicated the Hedgehog pathway in these experiments with Jagged1 by showing that, rather than the chemoresistance being mediated by MDR1 as expected, it was GLI2, a Hedgehog downstream marker, that was downregulated. Another study with Jagged1 found that inhibition of the Wnt/β-catenin signaling pathway reduced its expression [48]. Wnt/β-catenin pathways have previously been demonstrated to produce self-renewal in ovarian cancer and appear to be a driving force behind ovarian cancer progression [49]. The Hedgehog signaling pathway has been implicated in the growth regulation of spheroid-forming cells in ovarian cancer. This was demonstrated by Ray et al [50] in four ovarian cancer cell lines (ES2, TOV112D, OV90, and SKOV3), where spheroid volume was increased up to 46-fold with Hedgehog agonists. Cyclopamine, a Hedgehog inhibitor, was used to prevent further growth of spheroid-forming cells in these cell lines and showed up to a 10-fold reduction in growth in ES2 cells [50]. Multiple groups are actively working to target these signaling pathways in hopes of altering ovarian cancer chemoresistance and recurrence.

CONTROVERSIES

Although there is growing evidence that ovarian CSCs are relevant, there are still many who debate the existence of these cells. At the forefront of this debate remains the fact that a specific ovarian CSC marker has not been identified. None of the markers discovered are exclusively found in ovarian cancer cells. CD133 is recognized as the putative CSC marker for many human solid tumors; however, the signaling pathways that regulate its behavior remain unknown [51]. Some studies presented in this review may be showing that CSCs are more "tumorigenic" based on the ability of preferential or improved grafting. It will give much more credence to the argument if some of the pathways or markers being targeted show significant clinical results.

FUTURE DIRECTIONS

If progression and development of chemoresistance is due to the ovarian CSCs, then specific therapy for CSCs must be developed. In ovarian cancer, the use of monoclonal antibodies to many surface markers for CSCs has proven of some potential value. The most utilized monoclonal antibody, bevacizumab, an anti-vascular endothelial growth factor agent, has been shown to improve progression-free survival in advanced ovarian cancer [52]. Recently, CSCs have been implicated in the hypoxic environment that bevacizumab creates, but this relationship has not yet been well defined [53]. In addition to those mentioned previously, the anti-CD44 antibody A3D8 was shown by Du et al [54] to produce significant apoptosis and arrest of the cell cycle in the S phase in the SKOV3 ovarian cancer cell line, and it may represent a therapeutic option. Patients taking metformin for diabetes have previously been reported to have improved survival, and some groups postulate that this relationship is due to the downregulation of CSC growth. A phase II trial is currently underway to evaluate this relationship (NCT01579812) (www.clinicaltrials.gov). There are over 3000 results when searching for clinical trials related to CSCs on Clinicaltrials.gov. While the majority of these are not specific for ovarian cancer, many are for breast cancer or other solid tumors, which have traditionally led to findings applicable to ovarian cancer.

CONCLUSION

It appears that ovarian CSCs are involved in chemoresistance and likely contribute to an overall poor prognosis in ovarian cancer patients. Researchers continue to study the role of ovarian CSCs and develop targeting agents for specific identification and therapeutic treatment. Clinical trials are ongoing for agents targeting ovarian CSCs, and data from these trials will be important to determine future research directions aimed at improving survival in women with ovarian cancer.
2018-04-03T00:29:28.739Z
2014-09-26T00:00:00.000
{ "year": 2014, "sha1": "ce7afeba15fa275adf5be305f76f11eb50ec1cc1", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4252/wjsc.v6.i4.441", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "d2b4dc0089493689f19308c3fbe0994f2091c0ac", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
240091412
pes2o/s2orc
v3-fos-license
Network Analysis of Insomnia in Chinese Mental Health Professionals During the COVID-19 Pandemic: A Cross-Sectional Study Purpose The coronavirus disease 2019 (COVID-19) pandemic is associated with increased risk of insomnia symptoms (insomnia hereafter) in health-care professionals. Network analysis is a novel approach in linking mechanisms at the symptom level. The aim of this study was to characterize the insomnia network structure in mental health professionals during the COVID-19 pandemic. Patients and Methods A total of 10,516 mental health professionals were recruited from psychiatric hospitals or psychiatric units of general hospitals nationwide between March 15 and March 20, 2020. Insomnia was assessed with the insomnia severity index (ISI). Centrality index (ie, strength) was used to identify symptoms central to the network. The stability of network was examined using a case-dropping bootstrap procedure. The network structures between different genders were also compared. Results The overall network model showed that the item ISI7 (interference with daytime functioning) was the most central symptom in mental health professionals with the highest strength. The network was robust in stability and accuracy tests. The item ISI4 (sleep dissatisfaction) was connected to the two main clusters of insomnia symptoms (ie, the cluster of nocturnal and daytime symptoms). No significant gender network difference was found. Conclusion Interference with daytime functioning was the most central symptom, suggesting that it may be an important treatment outcome measure for insomnia. Appropriate treatments, such as stimulus control techniques, cognitive behavioral therapy and relaxation training, could be developed. Moreover, addressing sleep satisfaction in treatment could simultaneously ameliorate daytime and nocturnal symptoms. Introduction Insomnia, characterized by dissatisfaction with sleep quantity or quality, 1 is a common sleep disorder. 2 Several factors, such as mental disorders, 3-5 emotional stress, 6,7 and working at night or in shifts, 8 are closely associated with insomnia. During the coronavirus disease 2019 (COVID-19) pandemic, health-care workers are confronted with the stress of high infection risk, heavy workload, long shift hours, and limited contact with their families, which can result in mental health issues, including stress, anxiety, depression, and insomnia. [9][10][11] A recent metaanalysis reported that the pooled prevalence of insomnia in health-care workers during the COVID-19 pandemic was 34.32%, 12 which was higher than the general population. 13 To reduce the risk of insomnia and develop effective measures to treat insomnia in this population, it is vital to understand the symptom patterns of insomnia. Network theory is a promising novel approach to studying psychopathology and understanding linking mechanisms at a symptom level, 14 in which the disorder is assumed to be constituted of co-occurring symptoms and that symptoms may trigger and accentuate each other. 15 Unlike traditional approaches, which implicitly reflect the underlying latent factor 16 or simply assume symptoms of disorders to be manifestations of an underlying disease, 17,18 symptoms and their mutual interactions are seen as the disorder itself in the network approach. 19 The network theory of psychopathology is flexible and dynamic and can explain the onset and maintenance of psychiatric symptoms. 
20 Therefore, the perspective of the network approach offers the opportunity to illustrate the structure and functioning of important psychological phenomena 16 such as the structure of insomnia symptoms. This theoretical framework can be statistically investigated via network analysis 21 to visualize the structure of symptoms 22 and to reveal information or key symptoms related to clinical status rather than relying on the summation of total scores from scales. 23 In network analysis, each symptom is viewed as one node and the interaction between two nodes is represented as an edge. The value of each edge stands for the statistical association calculated between two nodes after controlling for other nodes in the network structure. 24 In the context of a strong edge in a network, the activation/deactivation of one symptom may predict the activation/deactivation of another symptom. 25 In addition, the node position in the network represents its relative importance, so the more centrally the node is placed, the more interconnected the node is. Central symptoms are more prone to activate other symptoms; thus, they may play a pivotal role in causing the onset and/or maintenance of the syndrome. 20 In theory, treatments targeting central symptoms are more effective than those targeting peripheral symptoms. 22 Consequently, network analysis is a highly valuable approach for depicting psychological constructs and generating testable hypotheses for further studies. 22,26 Network analysis has been used in diverse clinical conditions among specific populations, such as anxiety and depression in adolescents 27 and other psychiatric samples, 28 depressive symptoms in older people, 20 hopelessness in a community sample, 29 and post-traumatic stress disorder in earthquake survivors. 19 To provide targeted interventions and strategies that facilitate prevention and alleviation of clinically significant distress, it is important to understand the mechanisms that increase the risk for insomnia with sophisticated methods. A recent review 30 on the trends in insomnia research indicated that research using the network approach was warranted. To the best of our knowledge, only two studies 31,32 have examined insomnia using network analysis. One network analysis explored the effect of personality traits on insomnia in the Netherlands, 31 while the other explored the effects induced by cognitive-behavioral therapy for insomnia (CBTI) on symptoms of insomnia and depression throughout the course of treatment. 32 To date, no network analysis on insomnia in mental health professionals has been published. Therefore, we conducted an exploratory study using a network analysis to examine the associations of insomnia symptoms (insomnia hereafter) and identify the most central (influential) symptoms that could serve as intervention targets in mental health professionals during the COVID-19 pandemic. Participants and Procedure This study was based on a secondary analysis of a national mental health survey of mental health professionals in China during the COVID-19 pandemic. 33 To avoid the risk of contagion in population research, 34 an online survey with snowball sampling was adopted, and mental health professionals working in psychiatric hospitals or psychiatric units of general hospitals were invited to participate in the survey on an anonymous and voluntary basis. Inclusion criteria included: (1) 18 years old and above; (2) mental health professionals (eg, doctors, nurses and nursing assistants) working in psychiatric hospitals or units in China during the COVID-19 pandemic; (3) able to understand Chinese and provide electronic written informed consent.
No specific exclusion criteria were used in this study. The study protocol was approved by the Institutional Review Board (IRB) of Beijing Anding Hospital, China (2020 -Keyan (No. 10)) based on the local ethical regulations and the Declaration of Helsinki. 36 All participants provided online written informed consent and the information collected were kept confidential. Measures Basic demographic information and clinical characteristics were collected, including age, gender, marital status, education level, living circumstances, type of hospital (tertiary or non-tertiary), and past experience combating Severe Acute Respiratory Syndrome (SARS) during the outbreak in 2003. Insomnia was measured by the 7-item Chinese version of the Insomnia Severity Index (ISI) questionnaire 37,38 which included 7 domains: 1) severity of sleep onset problem; 2) sleep maintenance problem; 3) early morning wakening problem; 4) sleep dissatisfaction; 5) interference of sleep difficulties with daytime functioning; 6) noticeability of sleep problems by others; and 7) distress caused by the sleep difficulties. Each item scored from 0 (no problem) to 4 (very severe problem); a higher score indicating more severe insomnia. The reliability and validity of the ISI has been well-documented in Chinese populations. [38][39][40] Statistical Analysis Network Estimation The insomnia symptom network was estimated using the R program (Version 4.0.3). 41 Following previous studies, 25,29 item informativeness (ie, standard deviation (SD) of the item) was examined using the describe function in the R-package psych (Version 2.0.12), 42 and item redundancy (ie, < 25% of statistically different correlations) was measured using the goldbricker function in the R-package networktools (Version 1.2.3). 43 An item was regarded as having poor informativeness if its value is 2.5 times lower than the mean level of all items' informativeness in a scale; 29 in this case, this item should be excluded from the network analysis. In network parlance, each symptom is considered as node, and the correlations between two individual symptoms were considered edges. A thicker edge represents a stronger correlation, while the green color edge indicates a positive correlation, and red color indicates a negative correlation. 44 The network structure of insomnia symptoms was estimated using the function estimateNetwork in the R-package bootnet (Version 1.4.3). 24 As recommended previously, 31 polychoric correlations were used to examine the correlations between ISI items, taking the Likert-scale type variables into account. To minimize spurious associations due to sampling error, the Extended Bayesian Information Criterion (EBIC) graphical least absolute shrinkage and selection operator (LASSO) model was adopted. 44 In this model, the LASSO algorithm is used to shrink all edges in the network and set small edges exactly to zero to make the network sparser and easier to interpret. 45 The EBIC model selection approach has been widely used to select related tuning parameters 24 (γ=0.5). To visualize the network, the R-package qgraph (Version 1.6.9) 44 was used. Centrality indices including strength, closeness, and betweenness were calculated to explore the importance of individual symptoms in the network using the R-package bootnet (Version 1.4.3). 24 The default tuning parameter (alpha=1) was used. Following previous studies, 24,29 in tandem with lower stability of betweenness and closeness, this study only focused on the strength index. 
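To make the estimation step concrete, the workflow described above can be sketched in R. This is a minimal illustration only; the data frame name isi (the seven ISI items scored 0-4) is a hypothetical placeholder rather than the authors' actual variable naming:

    library(bootnet)    # estimateNetwork()
    library(qgraph)     # network plotting and centrality plots

    # 'isi' is assumed to be a data frame with one column per ISI item (ISI1..ISI7)
    net <- estimateNetwork(isi,
                           default   = "EBICglasso",  # EBIC-regularized graphical LASSO
                           corMethod = "cor_auto",    # polychoric-type correlations for ordinal items
                           tuning    = 0.5)           # EBIC hyperparameter (gamma)

    plot(net, layout = "spring")                      # nodes = symptoms, edges = partial associations
    centralityPlot(net, include = "Strength")         # strength centrality per symptom

Here tuning = 0.5 corresponds to the EBIC γ = 0.5 reported above, and the strength plot reproduces the centrality index on which this study focuses.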
The node strength refers to the sum of absolute weights of edges (expressed by correlation coefficients) connected to a certain node, indicating how strongly a node was directly connected to other nodes in the network. 46 In addition, predictability was estimated using the R-package mgm (Version 1.2-11), 47 indicating to what extent certain nodes were predicted by all of their neighboring nodes. Network Stability and Accuracy The network stability and accuracy were assessed using the R-package bootnet (Version 1.4.3), 24 which reflected the robustness of the results. Based on a case-dropping subset bootstrap procedure, the correlation stability coefficient (CS-C) was calculated for centrality indices. The value of CS-C referred to the maximum proportion of dropped cases for which a correlation above 0.7 between the centrality indices of the original sample and those of the subset samples was maintained with 95% probability. 24 As recommended previously, 24 a CS-C value above 0.25 is acceptable, and a value above 0.5 is preferable. A nonparametric bootstrap procedure was then used to assess the accuracy of edge weights. Edge accuracy was assessed by 95% confidence intervals (CIs), with a narrower CI indicating a more trustworthy network and a larger CI suggesting poorer accuracy. 24,29 In addition, bootstrapped tests were also conducted based on 95% CIs to evaluate the difference between two edges or between the strengths of two nodes, with zero excluded from the CIs implying a statistically significant difference. 24 Network Comparisons Considering the influence of gender on insomnia based on previous findings, 48 three invariance measures (network structure invariance, global strength invariance, and edge invariance) of the network were examined using the Network Comparison Test (NCT) in the R-package NetworkComparisonTest 2.2.1 49 to compare network measures between females and males. Network structure refers to the maximum difference of pairwise edges between the female and male networks, global strength refers to the absolute sum of all edges of each network, and edge differences refer to the differences of individual edge weights between the female and male networks. Holm-Bonferroni correction for multiple comparisons at the level of individual edges between genders was adopted. The three network comparison tests were conducted based on a similar permutation-based procedure. 49 First, 50% of the participants in the female and male datasets were randomly switched 1000 times to form the permutated datasets, which were considered as the reference distribution under the null hypothesis. Second, the three invariance measures (eg, network structure invariance, global strength invariance, and edge invariance) were calculated in the network structure estimate based on the original dataset (ie, observed statistics). Finally, the observed statistics were compared with the reference distribution created with the permutated procedure. Network Structure Item informativeness was calculated (M ± SD = 0.8 ± 0.1) and no item was found to be poorly informative. Additionally, analyses of item redundancy showed that no item was redundant with any other item in the ISI. Therefore, all the ISI items were included in the network analyses. The network structure of insomnia is shown in Figure 1 and the corresponding correlation matrix is presented in Table S2. All the connections showed positive associations between nodes.
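The robustness checks and the gender comparison described in the Statistical Analysis section can likewise be sketched in R. Again this is only a minimal illustration; isi, net, and the grouping vector gender are hypothetical names carried over from the previous sketch, not the authors' code:

    library(bootnet)
    library(NetworkComparisonTest)

    net <- estimateNetwork(isi, default = "EBICglasso")   # as in the previous sketch

    # Case-dropping bootstrap: how much of the sample can be dropped while the
    # centrality order stays stable (CS-coefficient; >= 0.25 acceptable, >= 0.5 preferred)
    boot_case <- bootnet(net, nBoots = 1000, type = "case", statistics = "strength")
    corStability(boot_case)

    # Nonparametric bootstrap: 95% confidence intervals around the edge weights
    boot_edge <- bootnet(net, nBoots = 1000)
    plot(boot_edge, labels = FALSE, order = "sample")

    # Permutation-based comparison of female and male networks
    # (network structure, global strength, and individual edges)
    nct <- NCT(isi[gender == "female", ], isi[gender == "male", ],
               it = 1000, test.edges = TRUE, p.adjust.methods = "holm")
    summary(nct)   # overview of the three invariance tests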
The edge ISI1 (severity of sleep onset) - ISI2 (sleep maintenance) had the strongest connection, followed by the edge ISI2 (sleep maintenance) - ISI3 (early morning wakening problems), the edge ISI5 (noticeability of sleep problems by others) - ISI6 (distress caused by the sleep difficulties), and the edge ISI6 (distress caused by the sleep difficulties) - ISI7 (interference with daytime functioning). Figure 1 shows two main clusters of insomnia, including the cluster of nocturnal symptoms (ISI1, ISI2, and ISI3) and the cluster of daytime symptoms (ISI5, ISI6, and ISI7), both of which were connected by the item ISI4 (sleep dissatisfaction). In Figure 2, the centrality indices of the insomnia symptom network show that ISI7 (interference with daytime functioning) was the most central symptom in mental health professionals, with the highest strength. The item ISI3 (early morning wakening problems) was placed in the most peripheral area of the whole network, with the lowest centrality indices. The predictability index shows that an average of 65.4% of variance could be potentially accounted for by each node's neighboring nodes (M predictability = 0.654 ± 0.071). ISI7 (interference with daytime functioning, 73.6%) had the highest predictability in the network, as shown in Table S1. Network Stability and Accuracy In terms of network stability (Figure S1), the CS-C of strength calculated by the case-dropping bootstrap procedure was 0.75, which shows that the network remained stable, in that dropping 75% of the sample would not change the primary results (r = 0.7). Regarding the accuracy of the present network, the bootstrapped 95% CIs for the edges were narrow, indicating that the edges are trustworthy (Figure 3). The bootstrapped difference tests revealed that most comparisons among edge weights and node strengths were statistically significant (Figures S2 and S3). Network Comparisons The NCT results did not find significant gender differences in the network global strength or structure (Figures S4-S6). Discussion This network analysis was the first exploratory study that characterized the network structure of insomnia in Chinese mental health professionals during the COVID-19 pandemic. Our results revealed that the symptoms were separated into two clusters connected by ISI4 (sleep dissatisfaction), and ISI7 (interference with daytime functioning) was the most central symptom in this sample. In this study, we found that sleep-related problems (ie, ISI1: severity of sleep onset; ISI2: sleep maintenance; and ISI3: early morning wakening problems), also known as nocturnal symptoms, were grouped into one cluster. On the other hand, the daytime symptoms (ISI5: noticeability of sleep problems by others; ISI6: distress caused by the sleep difficulties; ISI7: interference with daytime functioning) were organized into another cluster. These two clusters both contributed to the node ISI4 (sleep dissatisfaction). Our findings are consistent with previous findings 31 on the association of personality traits with different insomnia characteristics in the general population. These results suggest that the ISI total score may dilute the severity degree of nocturnal and daytime symptoms. 31 Our study findings may have clinical significance, particularly for the treatment of insomnia.
For instance, cognitive behavioral treatment of insomnia (CBT-I) is the most widely used non-drug treatment 50 that encompasses techniques (ie, stimulus control techniques, cognitive therapy techniques, relaxation training) on managing expectations and beliefs in relation to nocturnal sleep improvement and coping strategies for daytime insomnia complaints. 50,51 Hence, CBT-I could address both daytime and nocturnal symptoms. 31 Based on our findings, if interventions are specifically targeted to address ISI4 (sleep dissatisfaction), daytime and nocturnal complaints may be ameliorated concurrently. 31 Recent studies 32,52 have also supported that research regarding the dynamic effects of interventions on specific symptoms of insomnia is further warranted. The item ISI7 (interference with daytime functioning) was the most central (influential) symptom in this study. This survey was conducted immediately after the WHO's declaration of COVID-19 as a global pandemic 53 when Chinese health-care workers were experiencing substantial stress due to the large number of COVID-19 cases and insufficient personal protective equipment. 54,55 The item ISI7 assessed daytime functioning interference (ie, ability to function at work/daily chores) caused by insomnia. Given the moral responsibility 56 and strong sense of mission, health-care workers were likely concerned more about the negative consequences of insomnia problems, such as reduced work efficiency that may jeopardize patients' safety and medical service quality, 57 than their experience of insomnia itself. Moreover, the edges ISI7-ISI5 (ie, interference with daytime functioning-noticeability of sleep problems by others) and ISI7-ISI6 (ie, interference with daytime functioning-distress caused by the sleep difficulties) showed strong connections in this network model, indicating that interventions targeting the two daytime complaints (ISI5 and ISI6) may improve the symptom of "interference with daytime functioning" (ISI7); hence, "interference with daytime functioning" (ISI7) could also be an important treatment outcome measure of insomnia. The average of predictability is 65.4%, indicating that most of the variance (ie, 65.4%) of individual symptoms of insomnia could be accounted for by this network model. The remaining (34.6%) variance could be attributed to other symptoms associated with insomnia, such as depressive and anxiety symptoms, fatigue, and perceived stress. 48,58 Moreover, as recommended previously, 29 the association of centrality index of strength and predictability with the item mean levels was examined, but no associations were found (node strength and item mean level: r s =−0.487, p=0.268; predictability and item mean level: r s =−0.179, p=0.702). This suggests that network analysis could gain insight into the influence of individual symptoms on the model, which is not possible for traditional statistical approach that only used total scale scores; for example, the ISI7 (interference with daytime functioning) had the highest predictability and node strength in the network model, but its mean score was the lowest among all ISI items. The strengths of this study included the large sample size and use of network analysis. Some limitations should be acknowledged. First, this is a cross-sectional study, hence the risk factors, co-morbidities and causality of the symptoms of insomnia could not be established. Second, data were collected based on participants' self-report, and the possibility of recall bias could not be excluded. 
Third, this study focused on mental health professionals; therefore, the findings cannot be generalized to other populations, such as other health-care workers or the general population. Fourth, an online survey with a snowball sampling method was used to avoid the risk of COVID-19 infection; therefore, selection bias was inevitable and representativeness was limited. Fifth, for logistical reasons, certain factors associated with insomnia, such as the different mental health disciplines, the presence and severity of physical diseases, and the influence of insomnia on daily life, were not analyzed. Conclusion The results of this network analysis may have treatment implications. As interference with daytime functioning was the most central symptom in the insomnia model, it may be an important treatment outcome measure of insomnia in mental health professionals. Further, addressing sleep satisfaction in treatment could simultaneously improve daytime and nocturnal symptoms. Ethics Approval The study protocol was approved by the Institutional Review Board (IRB) of Beijing Anding Hospital, China (Approval No.: (2020) Keyan (No. 10)). Funding The study was supported by the National Science and Technology Major Project for investigational new drug. Disclosure The authors report no conflicts of interest in this work.
2021-10-29T15:21:45.060Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "a93d554bf3314c4b4353b590cae34ef5823c694f", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=75309", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ccb5c3a0f662bd47ab906c4982d75651de4c0575", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245858606
pes2o/s2orc
v3-fos-license
Effects of Trichoderma asperellum 6S-2 on Apple Tree Growth and Replanted Soil Microbial Environment Trichoderma asperellum strain 6S-2 with biocontrol effects and potential growth-promoting properties was made into a fungal fertilizer for the prevention of apple replant disease (ARD). 6S-2 fertilizer not only promoted the growth of Malus hupehensis Rehd seedlings in greenhouse and pot experiments, but also increased the branch elongation growth of young apple trees. The soil microbial community structure changed significantly after the application of 6S-2 fertilizer: the relative abundance of Trichoderma increased significantly, the relative abundance of Fusarium (especially the gene copy numbers of four Fusarium species) and Cryptococcus decreased, and the relative abundance of Bacillus and Streptomyces increased. The bacteria/fungi and soil enzyme activities increased significantly after the application of 6S-2 fertilizer. The relative contents of alkenes, ethyl ethers, and citrullines increased in root exudates of M. hupehensis Rehd treated with 6S-2 fertilizer and were positively correlated with the abundance of Trichoderma. The relative contents of aldehydes, nitriles, and naphthalenes decreased, and they were positively correlated with the relative abundance of Fusarium. In addition, levels of ammonium nitrogen (NH4-N), nitrate nitrogen (NO3-N), available phosphorus (AP), available potassium (AK), organic matter (SOM), and pH in rhizosphere soil were also significantly related to changes in the microbial community structure. In summary, the application of 6S-2 fertilizer was effective in alleviating some aspects of ARD by promoting plant growth and optimizing the soil microbial community structure. Introduction Apple replant disease (ARD) is a common occurrence in apple-growing regions [1,2]. Both biotic and abiotic factors can cause ARD, but an imbalance in the soil microbial community structure is considered to play a major role [3,4]. Some studies have shown that Fusarium is one of the key causes of ARD in China [5,6]. The specialized Fusarium proliferatum f. sp. malus domestica MR5 (MW600437.1), which is associated with ARD in China, was recently screened and identified in our laboratory; it has been shown to cause serious damage to the apple root system (in review). With the elimination of broadspectrum chemical fumigants, green and sustainable biological control measures have begun to emerge [4,[7][8][9]. Trichoderma, Bacillus, and Pseudomonas have been made into microbial fertilizers that exhibit broad-spectrum antagonism against pathogens. They are also used to compensate for the deterioration in soil physical and chemical properties caused by the excessive application of chemical fertilizers [10][11][12]. Compared with other species used to make microbial fertilizer, Trichoderma has exhibited a greater tolerance to environmental conditions [10,13,14]. To date, T. harzianum, Experimental Sites The soils for the greenhouse and pot experiments were obtained from a 35-year-old apple orchard in Manzhuang (36.04 • N, 117.11 • E), Tai'an City, Shandong Province, China. The field experiment was performed at three sites in major apple-producing areas of Shandong: Laizhou (37.07 • N, 119.82 • E), Qixia (37.34 • N, 120.85 • E), and Yiyuan (36.19 • N, 118.17 • E). The physical and chemical properties of the soils are shown in Supplementary Table S1. 
Production of 6S-2 Trichoderma Fertilizer 6S-2 was cultivated on PDA medium at 28 • C until the spores had grown for approximately 6 days; 10 mL of sterile water was then added to the plate, and the spores were gently scraped with a coating rod to make a spore suspension. After removing excess hyphae by filtering through four layers of sterilized lens-cleaning paper, we calculated the spore concentration under a microscope using a hemacytometer. Sterile water was used to dilute the spore solution to a concentration of 3.05 × 10 7 CFU/mL. The spores were then expanded using the shallow plate fermentation method [26]. Four hundred grams of sterile medium (wheat bran and corn flour in a 4:1 volume ratio with 45% sterile water added) was placed in a shallow dish (30 cm × 20 cm × 5 cm), and 2% 6S-2 spore solution was added. The dish was covered with sterilized double gauze, incubated at 28 • C, and turned once every 2 d. After 10 days of fermentation, the culture was air-dried, pulverized, and sieved to obtain T. asperellum 6S-2 spore powder. The concentration of the resulting spore powder was 9.5 × 10 8 CFU/mL. The 6S-2 spore powder was mixed with the blank fertilizer carrier (high-temperature sterilized, fully decomposed cow dung) at a ratio of 5%, and the water content was kept at 30% to promote natural fermentation. The concentration of spores was measured every day until it reached 10 10 CFU/mL. Finally, the spore concentration of 6S-2 fertilizer was 2.1 × 10 10 CFU/mL, and it was mixed with replanted soil at a volume ratio of 1%. In the blank carrier, the available nitrogen content was 0.36 mg/g, the available phosphorus content was 1.49 mg/g, and the available potassium content was 1.03 mg/g. Experimental Design Replanted soil was used for the controls, which were denoted GR (greenhouse), PR (pot), LR (Laizhou), QR (Qixia), and YR (Yiyuan). Treatments that received replanted soil with blank fertilizer carrier were denoted GC, PC, LC, QC, and YC, and treatments that received replanted soil with 6S-2 fertilizer were denoted GT, PT, LT, QT, and YT. In mid-January 2021, M. hupehensis Rehd seeds were stratified at 4 • C for approximately 45 days until they became white. The seeds were then sown into nursery substrate in March 2021, and seedlings were selected when they had grown 5-6 true leaves in late April. Seedlings with no diseases or insect pests and uniform growth were selected for use in the subsequent experiments. Some seedlings were transplanted into white plastic pots (15 cm × 9 cm × 11.5 cm) that contained 4.0 kg of soil (amended with or without the treatments above) for the greenhouse experiment, and others were transplanted into clay basin pots (42 cm × 38 cm × 32 cm) that contained 13.5 kg of soil (with or without treatments) for the outdoor pot experiments. Each treatment was replicated 20 times, pots were randomly arranged, and all plants received the same management. For the field experiments, twoyear-old grafted apple trees ('T337' rootstock and 'Yanfu No.3' scion) were planted at the three field sites with different soil treatments in early March 2021. Samples were harvested from the greenhouse experiment in early July 2021; samples were harvested from the pot experiment in the middle of July, August, and September in 2021; and samples were harvested from the field experiment in late October 2021. When sampling, five samples were obtained from the upper soil layer around each tree, mixed together, passed through a 2 mm sieve, and separated into three parts. 
One was stored in a refrigerator at 4 • C and used to determine the numbers of culturable microorganisms. The second was naturally air-dried and used to measure soil enzyme activities. The third was quickly placed in liquid nitrogen, returned to the laboratory, stored at −80 • C, and used for DNA extraction and RT-qPCR analysis. DNA samples from the greenhouse experiments were sent for high-throughput sequencing. All measurements were performed using three biological replicates from each treatment, and three technical replicates were performed per biological replicate. Plant Related Indicators The plant heights, stem diameters, and branch lengths were measured in the field experiment using a tower ruler, vernier caliper, and tape measure, respectively; the shoot numbers were also counted. Plant heights and stem diameters of M. hupehensis Rehd seedlings were measured using a meter ruler and vernier caliper, respectively. Dry and fresh weights were measured with an electronic balance. Images of roots were obtained with a Scan Maker i800 Plus scanner (Microtek. Shanghai, China), and various root system parameters were measured using an LA-S plant image analyzer (Hengmei Electronic Technology, Weifang, China). Soil-Related Indicators Culturable bacteria, fungi, and actinomycetes in the soil were counted using the dilution plate method [27]. The activities of urease, sucrase, neutral phosphatase, and catalase in the soil were determined using the method of Yang and Wu [28]. Total DNA was extracted from soil using an E.Z.N.A. soil DNA kit (Omega Bio-tek, Norcross, GA, USA), and the gene copy numbers of four Fusarium species (F. oxysporum, F. proliferatum, F. solani, and F. moniliforme) in the soil were determined using a CFX96TM Thermal Cy-cler (Bio-Rad, Beijing, China) [29]. The analysis of the bacterial 16S rRNA gene and the fungal ITS region was performed on the Illumina MiSeq platform (www.i-sanger.com, accessed on 25 November 2021). The sequences of the 16S rRNA primers were 338F (5 -ACTCCTACGGGAGGCAGCAG-3 ) and 806R (5 -GGACTACHVGGGTWTCTAAT-3 ) [30]; the sequences of the ITS primers were ITS1F (5 -CTTGGTCATTTAGAGGAAGTAA-3 ) and ITS2R (5 -GCTGCGTTCTTCATCGATGC-3 ) [31]. Bioinformatics Analysis Three soil samples from each treatment in the greenhouse experiment were used for DNA extraction, PCR, and sequencing on the Illumina platform. Raw fastq files were demultiplexed and quality-filtered with Uparse (version 7.0.1090 http://drive5.com/ uparse/, accessed on 30 November 2021). The 300 bp reads were truncated at any site that received an average quality score of <20 over a 50 bp sliding window, and truncated reads shorter than 50 bp were discarded. Exact barcode matching was required; reads with a 2-nucleotide mismatch in primer matching and reads that contained ambiguous characters were removed. Only sequences that overlapped by more than 10 bp were assembled according to their overlapping sequence. Operational taxonomic units (OTUs) were clustered with a 97% similarity cutoff. A total of 534,996 high-quality 16S rDNA sequences and 508,218 high-quality ITS sequences were obtained from the nine soil samples (three samples each from the three soil treatments). These sequences were distributed among 4047 bacterial OTUs and 1004 fungal OTUs. The rarefaction curves showed that the sequencing work was relatively comprehensive in covering bacterial and fungal diversity, as the curves tended to approach saturation (Supplementary Figure S1a,c). 
The Shannon-Wiener curve indicated that the dataset from the diversity analysis was large enough to reflect the full microbial diversity information in the samples (Supplementary Figure S1b,d). Determination of Root Exudate Composition We optimized the methods of Liu et al. [32] and Wang [6] to collect and analyze root exudates from M. hupehensis Rehd seedlings. Three replicate seedlings from each treatment from greenhouse experiment were removed from their containers in early July 2021, and surface impurities were washed from their root systems in running water. The roots were then rinsed with sterile water, and care was taken not to damage them. Each plant was placed into a glass flask filled with 1 L sterile water, and all plants were placed in a growth chamber for 48 h (16-h light/8-h dark) at 25 ± 5 • C with gentle shaking (50 rpm). Plants were then removed from their flasks, and the resulting exudate solution was filtered using a 0.45-µm filter (Millipore) and extracted 3 times with ethyl acetate at a volume ratio of 1:1. The three extracts were combined and concentrated to 5 mL under reduced pressure at 30 • C. After passing again through a 0.45 µm organic membrane, the crude extract was used for GC-MS analysis. Chromatography was performed using an Rtx-5MS column (30 m × 0.32 mm × 0.25 µm) with a column oven temperature of 50 • C and an injection port temperature of 230 • C. The sample was injected with a split ratio of 10.0, and the injection volume was 1 µL. High purity He was used as the carrier gas at a pressure of 117.6 kPa and a column flow rate of 2.4 mL/min. The temperature program was as follows: 50 • C for 2 min, increased to 250 • C at 6 • C/min, and held for 10 min. The mass spectrometry conditions included Q3 scan acquisition mode, relative value EMV mode, a full scan acquisition mass range of 45-550 amu, ion source EI of 70 eV, and a temperature of 200 • C. The experimental results were compared with spectra at the NIST 17 database, and the peak area normalization method was used to express the relative content of each metabolite as the ratio of its peak area relative to the total peak area. The triple quadrupole gas chromatograph-mass spectrometer (GCMS-TQ8040 NX) and peak processing software were all from Shimadzu (Beijing, China). The retention time was used for qualitative identification, and the peak areas of external standards were used for quantification. Statistical Analysis A heatmap of microbial abundance data was constructed using the gplot package in R. The edge-weighted spring-embedding algorithm pulled together similar related properties and systems with similar structures. Networkx was used to calculate the node degree distribution, network diameter, average shortest path, node connectivity (degree), closeness centrality, betweenness centrality, and other network attributes to obtain relevant information within or between groups of species and samples. Principal coordinate analysis (PCoA) was performed based on the Bray-Curtis distance matrix calculated from the genus information of each sample. Hierarchical clustering analysis at the genus level was performed using the UPGMA (unweighted pair group method with arithmetic mean) algorithm based on Bray-Curtis distances generated by mothur. LEfSe analysis was also performed; this metagenomic approach uses linear discriminant analysis to determine the taxa that were most likely to explain differences among treatments. 
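For clarity, the peak-area normalization described above amounts to dividing each compound's peak area by the total peak area. A minimal R sketch, using purely illustrative compound names and areas rather than values from this study, is:

    # 'peaks' is a hypothetical GC-MS peak table: one row per identified compound
    peaks <- data.frame(
      compound = c("dibutyl phthalate", "heptadecane", "citrulline"),
      area     = c(4.2e6, 1.1e6, 4.7e5)          # illustrative peak areas only
    )
    # relative content (%) = peak area of the compound / total peak area * 100
    peaks$relative_content <- 100 * peaks$area / sum(peaks$area)
    peaks[order(-peaks$relative_content), ]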
Distance-based redundancy analysis (db-RDA), a constrained extension of PCoA, was used to show the relationships of environmental factors and treatments to microbial community structure and was performed with the capscale function in the vegan R package. The established biological correlation network was analyzed based on an understanding of graph theory. The data were digitized using Microsoft Excel 2010. Analysis of variance was performed using IBM SPSS 19.0, and Duncan's new multiple range test was used to assess the significance of differences. Data were presented as mean ± SE (standard error). GraphPad Prism 7 was used to construct the figures. To facilitate subsequent descriptions, the names of the experimental treatments were simplified and abbreviated as follows: GR (control replant soil in the greenhouse experiment); GC (replant soil with blank carrier in the greenhouse experiment); GT (replant soil with 6S-2 fertilizer in the greenhouse experiment); PR (control replant soil in the pot experiment); PC (replant soil with blank carrier in the pot experiment); PT (replant soil with 6S-2 fertilizer in the pot experiment); LR (control replant soil in Laizhou); LC (replant soil with blank carrier in Laizhou); LT (replant soil with 6S-2 fertilizer in Laizhou); QR (control replant soil in Qixia); QC (replant soil with blank carrier in Qixia); QT (replant soil with 6S-2 fertilizer in Qixia); YR (control replant soil in Yiyuan); YC (replant soil with blank carrier in Yiyuan); YT (replant soil with 6S-2 fertilizer in Yiyuan). Growth of M. hupehensis Rehd Seedlings and Young Apple Trees The application of 6S-2 fertilizer promoted the growth of M. hupehensis Rehd seedlings and two-year-old grafted apple trees (Figure 1). There was a significant difference in plant height between control plants grown in replant soil and plants grown in replant soil with 6S-2 fertilizer (Figure 1b,j,l). Under greenhouse conditions, the root dry and fresh weights of M. hupehensis Rehd seedlings grown with 6S-2 fertilizer were 1.82- and 1.70-fold higher than those of the control, and 1.37- and 1.66-fold higher than those of blank carrier-treated plants (Figure 1h-i). Under field conditions, the application of 6S-2 fertilizer significantly increased the number of branches and branch elongation of young apple trees (Figure 1n,o). Analysis of Soil Microbial Community Composition at the Genus Level The top 50 dominant bacterial and fungal genera across all samples were used to construct an abundance heatmap, and differences in the abundances of soil bacteria and fungi were apparent after the application of 6S-2 fertilizer (Figure 2). The fungal compositions of GT and GC were not clustered on the same branch, indicating that their fungal communities differed significantly (Figure 2b).
After the application of 6S-2 fertilizer, the relative abundances of Bacillus and Trichoderma increased to 6.91% and 70.48%, respectively, but the relative abundance of Fusarium decreased significantly (Figure 2; Supplementary Figure S3). Collinearity network analysis at the genus level showed that the GR and GT groups shared the smallest proportion of specific bacteria and fungi, accounting for only 2.21% and 0.88%, respectively (Figure 2c,d). Collinearity network analysis in bacteria (c) and fungi (d) at the genus level based on species abundances greater than 50; the node sizes represent the relative abundance (square root) of the genus in the data set, and the edges represent the association patterns of individual genera with different treatments. The red nodes represent genera related to GR, the blue nodes represent genera related to GC, the green nodes represent genera related to GT, the yellow nodes represent genera related to GR and GC, the black nodes represent genera related to GR and GT, the purple nodes represent genera related to GT and GC, and the cyan nodes represent genera related to GR, GC, and GT. Analysis of Culturable Microorganisms and Real-Time Fluorescence Quantification of Four Fusarium Species Application of 6S-2 fertilizer increased the number of culturable bacteria and decreased the number of culturable fungi relative to the control treatment by 71.88%, 37.43%, and 50.00% in Laizhou, Qixia, and Yiyuan, respectively, and these differences were statistically significant (Figure 3a,b,d,g). As a result, the 6S-2 fertilizer caused a marked increase in the bacteria/fungi ratio (Figure 3c,g). In the greenhouse and field environments, there was little difference in the relative abundance of four Fusarium species between the control treatment and the blank carrier treatment. By contrast, the gene copy numbers of F. oxysporum, F. proliferatum, F. solani, and F. moniliforme declined to various extents (39.80-73.85%) in the 6S-2 fertilizer treatment (Figure 3e-i,k-n). Similar results were observed in the pot experiment (Supplementary Figure S4). and fungi (d) at the genus level based on species abundances greater than 50; the node sizes represent the relative abundance (square root) of the genus in the data set, and the edges represent the association patterns of individual genera with different treatments. The red nodes represent genera related to GR, the blue nodes represent genera related to GC, the green nodes represent genera related to GT, the yellow nodes represent genera related to GR and GC, the black nodes represent genera related to GR and GT, the purple nodes represent genera related to GT and GC, and the cyan nodes represent genera related to GR, GC, and GT. Analysis of Culturable Microorganisms and Real-Time Fluorescence Quantification of Four Fusarium Species Application of 6S-2 fertilizer increased the number of culturable bacteria and decreased the number of culturable fungi relative to the control treatment by 71.88%, 37.43%, and 50.00% in Laizhou, Qixia, and Yiyuan, respectively, and these differences were statistically significant (Figure 3a,b,d,g). As a result, the 6S-2 fertilizer caused a marked increase in the bacteria/fungi ratio (Figure 3c,g). In the greenhouse and field environments, there was little difference in the relative abundance of four Fusarium species between the control treatment and the blank carrier treatment. By contrast, the gene copy numbers of F. oxysporum, F. proliferatum, F. solani, and F. 
(Figure 3 caption, in part: the ratio of bacteria to fungi in the greenhouse experiment (c) and field experiment (f); real-time fluorescence quantification of F. oxysporum in the greenhouse experiment (g) and field experiment (k), F. proliferatum in the greenhouse experiment (h) and field experiment (l), F. solani in the greenhouse experiment (i) and field experiment (m), and F. moniliforme in the greenhouse experiment (j) and field experiment (n); different lowercase letters (a,b,c) in the same column indicate a significant difference at the p ≤ 0.05 level by Duncan's new multiple range test.) Differences in Microbial Species among Treatments A PCoA of Bray-Curtis distance matrix distances between samples revealed differences in their bacterial and fungal communities. The first two principal component scores accounted for 77.59% of the total variation in bacteria (Figure 4a) and for 94.04% of the total variation in fungi (Figure 4c), suggesting that the application of 6S-2 fertilizers may be one of the important factors driving changes in the microbial community structure. Based on their different microbial communities, samples were clustered into two groups, one of which corresponded to GR in bacteria (Figure 4b) and the other to GT in fungi (Figure 4d). Overall, the results demonstrated a clear division between GR and GT. At the genus level, the top six genera of both bacteria and fungi differed significantly among the three groups. Bacillus, Streptomyces, and Trichoderma were more abundant after 6S-2 fertilizer application, whereas Arthrobotrys, Lophiostoma, and Fusarium declined markedly (Figure 4e,f). Groups were displayed in cladograms, and LDA scores of 4 or greater were confirmed by LEfSe (Figure 4g,h; Supplementary Figure S5). Two groups of bacteria and five groups of fungi were significantly enriched in GR: Acidobacteriales (from phylum to order), Burkholderiaceae (family), Cryptococcus (from class to genus), Lophiostoma (from class to genus), Arthrobotrys (from class to genus), Fusarium (from family to genus), and Bionectriaceae (from family to genus). Fewer microbes were significantly enriched in GC. Two groups of bacteria and two groups of fungi were significantly enriched in GT: Streptomyces (from order to genus), Bacillus (genus and its class Bacilli), Chaetomidium (genus, the class Sordariomycetes and the order Hypocreales), and Trichoderma (from family to genus). These results showed that there was a significant difference in the composition of the soil microbial community between GR and GT. Relationships between Microbial Community Structure and Environmental Factors Db-RDA revealed that the soil microbial community structure was influenced by environmental factors, including ammonium nitrogen (NH4-N), nitrate nitrogen (NO3-N), available phosphorus (AP), available potassium (AK), soil organic matter (SOM), and pH in rhizosphere soil. All these factors significantly affected the bacterial and fungal community structure (p ≤ 0.05) (Figure 5a,b; Supplementary Table S2). The GT groups were positively correlated with environmental factors, but the GR groups were negatively correlated with them (Figure 5a,b). Two-way correlation network analysis showed that bacterial genera and fungal genera had different relationships with environmental factors (Figure 5c,d).
Streptomyces and Trichoderma showed significant positive correlations with all environmental factors, whereas Fusarium, Lophiostoma, Arthrobotrys, and Cryptococcus showed highly significant negative correlations with NH4-N, NO3-N, AP, AK, SOM, and pH (Supplementary Figure S6). There were antagonistic or synergistic effects between different microbial genera, and the top six most abundant bacterial species showed fewer associations. Fusarium, Lophiostoma, Arthrobotrys, and Cryptococcus showed synergistic relationships with one another but showed strong antagonism toward Trichoderma (Supplementary Figure S7). Soil Enzyme Activities Application of 6S-2 fertilizer increased the activities of urease, phosphatase, invertase, and catalase to various degrees in the greenhouse, pot, and field experiments, and there were significant differences between the GR and GT treatments (Figure 6). In the pot experiment, the activities of the four soil enzymes peaked in August (Figure 6e-h). Compared with control replant soil, the urease activity in 6S-2-treated soil increased by 49.99%, 78.82%, and 100.31% in Laizhou, Qixia, and Yiyuan, respectively; the sucrase activity increased by 101.83%, 67.00%, and 40.46%; the phosphatase activity increased by 37.12%, 35.59%, and 46.99%; and the catalase activity increased by 49.99%, 78.82%, and 100.31% (Figure 6i-l).
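Returning to the ordination shown in Figure 5, the db-RDA described in the Statistical Analysis section can be sketched with vegan's capscale function. This is a minimal illustration in which genus_abund (a samples-by-genera abundance table) and env (a samples-by-variables table of the measured soil factors) are hypothetical object names:

    library(vegan)

    # Constrain a Bray-Curtis ordination of the genus table by the measured soil factors
    dbrda_mod <- capscale(genus_abund ~ NH4N + NO3N + AP + AK + SOM + pH,
                          data = env, distance = "bray")

    anova(dbrda_mod, by = "terms", permutations = 999)  # permutation test for each factor
    plot(dbrda_mod)                                      # biplot of samples and soil factors (cf. Figure 5a,b)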
Root Exudate Components and Correlation Analysis with the Microbial Community Root exudates of plants from the GR, GC, and GT treatments all contained esters, alkanes, acids, alcohols, amides, phenols, ketones, saccharides, alkenes, aldehydes, nitrofurans, and naphthalenes. Esters and alkanes were the main components. Ethyl ethers and citrulline were present only in the GT group, but nitriles were absent (Figure 7c-e). The peak area of dibutyl phthalate was as high as 41.82% in the GT treatment, 7.65% higher than that of the control. The components with peak areas greater than 0.5% in root exudates of the three treatment groups are shown in Supplementary Table S3, and the associated spectra are provided in Supplementary Figure S8. Alkanes, ethyl ethers, and citrullines were significantly positively correlated with Streptomyces and Trichoderma but significantly negatively correlated with Fusarium, Lophiostoma, Arthrobotrys, and Cryptococcus. Nitrofurans, saccharides, phenols, and citrullines were significantly positively correlated with Bacillus, but alcohols were negatively correlated with this taxon (Figure 7a,b). Discussion The root system is the link between the plant and the soil. A healthy soil environment can promote the development of the root system, which, in turn, promotes the growth of plants [33]. After 6S-2 fertilizer was applied to the soil, the relative abundance of Trichoderma in the rhizosphere increased significantly, whereas the relative abundance of harmful fungi such as Fusarium decreased (Figures 2b and 4e). This result suggested that 6S-2 can rapidly multiply in apple rhizosphere soil and can inhibit the propagation of pathogenic Fusarium [34] through direct niche competition [35]. This result may also reflect a direct interaction between 6S-2 and Fusarium, whereby 6S-2 exhibits antagonism, reparasitism, and bacteriolysis that reduces the relative abundance of Fusarium [36]. Previous work has shown that the diversity of the soil microbial community was enriched after Trichoderma colonization, resulting in a significant reduction in the population of Fusarium [37,38]. After 6S-2 fertilizer was applied to replanted apple orchards soil, the bacteria/fungi increased significantly (Figure 3c,j), demonstrating that 6S-2 fertilizer can effectively regulate the ratio of bacteria to fungi in the apple rhizosphere [39]. The soil community structure changed from a fungal type to a bacterial type; this may improve the structure and function of the soil microbial flora [39,40] and may stimulate the proliferation of beneficial bacteria such as Bacillus and Streptomyces (Figures 2a and 4f). In a previous report, colony multiplication drove interactions among the soil microflora, reduced the number of harmful Fusarium, and effectively controlled the occurrence of soil-borne diseases [41]. Trichoderma has also shown a variety of positive effects on plant growth, resilience, and yield [42,43]. In the experiments reported here, the application of 6S-2 fertilizer promoted the growth of young apple trees and M. hupehensis Rehd seedlings (Figure 1). Changes in microbial species may also depend on the symbiotic interaction between plants and their surrounding microorganisms [42]. 
After the application of microbial fertilizer, biocontrol microorganisms can quickly form a "substrate-microorganism" ecosystem with the help of the carrier [44]; this process helps to regulate the soil micro-ecological environment, promotes the restoration of soil enzyme activities, and changes the ecology of the rhizosphere through its effects on the physical and chemical properties of soil microorganisms [45,46]. The new optimized soil environment can, in turn, promote the further growth and development of 6S-2 and other biocontrol microorganisms [39,47]; it can also enhance the secretion of biocontrol enzymes, IAA, and other secondary metabolites and can improve plant growth [40] and stress resistance [48]. Root exudates are the bridge between plants, soil, and microbes and play an important role in the interaction between plants and the environment [49]. The type and quantity of root exudates determine the type and quantity of rhizosphere microorganisms [41,50], and rhizosphere microorganisms in turn affect the production of root exudates [41]. Root exudates of different crops can regulate different aspects of the rhizosphere microbial community [51]. Under continuous cropping conditions, watermelon root exudates can significantly increase the number of germinated Fusarium spores and enhance their reproductive ability [52]. The accumulation of phloridzin and other phenolic autotoxic substances in replanted apple soil hinders apple growth [53]. In this experiment, the composition of M. hupehensis Rehd seedlings root exudates changed after the application of 6S-2 fertilizer (Figure 7c-e). Previous works raise the possibility that 6S-2 application may adjust the root exudate composition to promote the recruitment and aggregation of specific beneficial microorganisms [54], thereby optimizing the rhizosphere microbial community structure [55] and inducing plant resistance to pathogenic fungi such as Fusarium [56]. The accumulation of dibutyl phthalate and other substances has been shown to directly inhibit the growth of harmful fungi such as Fusarium and the germination of spores, thereby helping to limit pathogen damage [57]. The relative contents of sugars and amino acids increased in root exudates after the application of 6S-2 fertilizer. This may have provided the carbon and energy required for 6S-2 growth [58,59], enhanced the absorption and utilization of nutrients under stress conditions [58], and thus promoted plant growth. Perhaps because of the different extraction methods, detection methods, and plant species, we did not detect large amounts of sugars, fatty acids, amino acids, and other substances, in contrast to the results of previous studies [60,61]. Related methods need to be continuously optimized in follow-up research to more accurately determine the composition of root exudates [62]. The relationships between root exudates and rhizosphere soil microorganisms also require further study [60]. The effects of Trichoderma fertilizers were closely related to the application environment [63]. At the same application rate, 6S-2 fertilizers showed greater effects in greenhouse and pot experiments than in the field experiment. To improve its biological control effect, it will be necessary to determine which environmental conditions are most conducive to the colonization of Trichoderma and enhance its ability to compete with the indigenous microbial flora. 
It is unclear whether the changes in the soil microbial community structure and plant root exudates caused by the application of 6S-2 fertilizer will continue for a long time [61]. The duration of the fertilizer effects and whether they can directly control ARD will require additional testing [64]. Further experiments are also needed to separate and identify chemotactic substances in root exudates and to determine whether their combination with 6S-2 is more effective for the prevention and treatment of ARD. The optimal application rate, timing, and frequency for 6S-2 fertilizer are also important issues that must be considered in field production. The effects of Trichoderma are related not only to the species itself but also to its application method [65]. Solid Trichoderma fertilizers, liquid fungal agents, and spore suspensions have all been shown to alleviate continuous cropping obstacles to some extent [66]. Therefore, reducing the number of processing procedures, lowering production costs, extending shelf life, and optimizing the application rate are the top priorities for subsequent research. Conclusions The application of 6S-2 Trichoderma fertilizer to replanted soil promoted increases in apple biomass and increased the ratio of bacteria to fungi in the soil. In particular, it increased the relative abundance of Trichoderma, Bacillus, and Streptomyces and reduced the relative abundance of harmful Fusarium. 6S-2 fertilizer thus altered the soil microbial community structure, perhaps through its marked effects on the relative content of multiple root exudate components. Therefore, the application of 6S-2 fertilizer to replanted soil can promote plant growth, optimize the soil microbial community structure, and help to alleviate ARD. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jof8010063/s1. Figure S1: Dilution curve at the OTU level. Sobs index of bacteria (a) and fungi (c); Shannon index of bacteria (b) and fungi (d); Figure S2: Species compositions of different treatments at the genus level for bacteria (a) and fungi (b); Figure S3: Circos diagrams of the relative abundance and distribution of soil bacteria (a) and fungi (b) at the genus level for different groups. The width of the bar indicates the relative abundance of the genus; Figure S4: The number of culturable bacteria (a) and fungi (b) and the bacteria/fungi ratio (c) in the pot experiment. Real-time fluorescence quantification of F. oxysporum (d), F. proliferatum (e), F. solani (f), and F. moniliforme (g) in the pot experiment; Figure S5: LDA scores of enriched taxa from Figure 4 (g and h). Indicator bacteria (a) and fungi (b) with LDA scores of 4 or greater in communities of the three treatment groups. GR (red), GC (blue), and GT (green); Figure S6: Correlation heatmap of the top fifty bacterial (a) and fungal (b) genera with environmental factors. The x and y axes are environmental factors and genera. The legend shows the color range of the R values. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001; Figure S7: Single-way correlation network among the top twenty bacterial (c) and fungal (d) genera. Node size is proportional to genus abundance. Node color corresponds to family taxonomic classification. Edge colors represent positive (green) and negative (red) correlations, and the edge thickness is equivalent to the correlation value; Figure S8: The spectrum of exudates from roots of M.
hupehensis Rehd seedlings under different treatments. The x and y axes are time and contents; Table S1: Physical and chemical properties of soils from four apple orchards; Table S2: The environmental factors used in db-RDA analysis; Table S3: Components with peak area greater than 0.5% in the root exudates of M. hupehensis Rehd under different treatments. Data Availability Statement: The study did not report any data.
2022-01-12T16:20:31.430Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "20ceb76beedfdf143d48e8c05b75d0a358922c67", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2309-608X/8/1/63/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7628fca120771f8ec3a0762d712b7f3a93ab30cb", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
258688715
pes2o/s2orc
v3-fos-license
Examining indicators of psychosocial risk and resilience in parents of autistic children Background Parents of autistic children experience increased levels of caregiver strain and adverse mental health outcomes, even in comparison to parents of children with other neurodevelopmental disabilities. Previous studies have largely attributed these increased levels of mental health concerns to their children's behavioral concerns and autism symptomatology, but less attention has been given to other potential child factors, such as child adaptive functioning. Additionally, little is known about potential protective factors, such as parents' emotion regulation (ER) abilities, that may ameliorate the experience of caregiver strain, anxiety, and depression. Objective The current study examined the impact of child characteristics (restricted and repetitive behaviors, adaptive functioning and behavioral concerns) on parent mental health outcomes (caregiver strain, anxiety, depression and wellbeing). Additionally, we explore parents' ER abilities as a moderator of the impact of child characteristics on parents' mental health outcomes. Results Results of linear mixed-effects models indicated a significant relationship between parents' ER abilities and all four parent outcomes. Additionally, children's adaptive functioning abilities and repetitive behaviors (RRBs) were significant predictors of caregiving strain. Parents' ER abilities were a significant moderator of the effect of children's repetitive behaviors and adaptive functioning challenges on caregiver strain, such that better ER abilities mitigated the impact of child clinical factors on caregiver strain. Finally, a significant difference was detected for mothers' and fathers' mental health, with mothers reporting higher caregiver strain and more symptoms of anxiety and depression than did fathers. Conclusion This study leveraged a large sample of autistic children and their biological parents to examine the relationship between children's clinical characteristics and parents' psychological wellbeing. Results indicate that, although parents of autistic children do experience high rates of internalizing mental health concerns that relate to child adaptive functioning and RRBs, parent ER abilities act as a protective factor against parents' adverse mental health outcomes. Further, mothers in our sample reported significantly higher rates of depression, anxiety, and caregiver strain, as compared with fathers. Introduction Parents of autistic 1 children experience increased levels of caregiver strain and higher rates of general mental health concerns compared to parents of neurotypical (NT) children, and those with other neurodevelopmental disabilities (Abbeduto et al., 2004;Davis and Carter, 2008;Estes et al., 2009, 2013;Hartley et al., 2012). Several studies have reported increased rates of depression (Abbeduto et al., 2004;Hartley et al., 2012;Cohrs and Leslie, 2017) among parents of autistic children. Additionally, in one study of 52 parents of autistic children, researchers reported that 53.8% of the parents showed clinically significant mental health concerns (Nikmat et al., 2008), with mothers reporting a greater adverse impact on psychological wellbeing compared to fathers (Nikmat et al., 2008).
Understanding parents' caregiver strain, mental health, and wellbeing is particularly important, as research has demonstrated that parents play an important role in scaffolding their children's development (e.g., maternal sensitivity influences child expressive language growth; Greenberg et al., 2006;Baker et al., 2010). Caregiver strain is also associated with greater use of maladaptive parenting styles that may exacerbate behavioral regulation difficulties in autistic children (Hutchison et al., 2016). Thus, ignoring parental mental health needs will make it difficult to support child development because of the transactional nature of the relationships among the members of a family system (Gulsrud et al., 2010). Most of the prior literature exploring predictors of parents' caregiver strain, mental health, and wellbeing has focused primarily on the impact of autistic children's autism symptom severity. Specifically, higher rates of adverse parental mental health and increased caregiver strain have previously been attributed to perceived behavioral challenges associated with their child's autism (Smith et al., 2008;Lyons et al., 2010;Ingersoll and Hambrick, 2011;Hartley et al., 2012;He et al., 2022;Porter et al., 2022). However, less attention has been given to other parent and child factors that may impact caregiver strain and mental health in parents (e.g., wellbeing, depression, anxiety). Further, few studies have explored how parent factors may mitigate the adverse impact of other child factors on mental health and wellbeing in parents of autistic children (Weiss et al., 2012). The present study explored the impact of child and parent-specific factors on parents' caregiver strain, mental health, and wellbeing. Additionally, we examine the potential mitigating role of parents' emotion regulation on the impact of child factors on parents' mental health and wellbeing. [Footnote 1: Many self-advocates from the autism community (Bury et al., 2020) and current research have indicated a preference for identity-first language (e.g., autistic children) or for language describing the individual; therefore, this language will be utilized throughout the present manuscript.] Predictors of caregiver strain, mental health, and wellbeing Factors related to increased caregiver strain among parents of autistic children have been explored in numerous reports, with children's cognitive challenges and severity of autism symptoms being associated with increased caregiver strain (Abbeduto et al., 2004;Lecavalier et al., 2006;Davis and Carter, 2008;Rao and Beidel, 2009;Hartley et al., 2012;Karst and Van Hecke, 2012;Smith et al., 2012) and depression (Benson and Karlof, 2009;Ingersoll and Hambrick, 2011). Additionally, children's behavioral concerns, such as dysregulation and externalizing behaviors, have been found to be highly correlated with caregiver strain in parents of autistic children (Davis and Carter, 2008), which in turn was associated with parents' higher levels of anxiety and depression (Rezendes and Scarpa, 2011). Increased caregiver strain is also related to increased rates of children's restricted and repetitive behaviors (RRBs), with parents citing difficulty managing these behaviors as adding to their own caregiver strain (Mercier et al., 2000;Gabriels et al., 2005;Lecavalier et al., 2006;Bishop et al., 2007).
Furthermore, studies suggest that greater parental involvement in a child's day-to-day activities (e.g., involvement with daily routine, school, domestic duties) may lead to greater caregiver strain and negatively impact the family system (Tehee et al., 2009). Resilience of parents of autistic children Data also suggest resiliency, or the ability to positively adapt to the environment in the face of adversity or challenge (Luthar et al., 2000), in many parents of autistic children. Specifically, positive meaning-making (e.g., diagnosis as a strength for the family unit, acknowledging available resources) of the experience as caregivers of autistic children can buffer against stressful situations (Barakat and Linney, 1992;Anuradha, 2004;Wilgosh and Scorgie, 2006). Further, research has demonstrated that social support, hope and spirituality and religiosity are important protective factors for caregiver strain, mental health and wellbeing Ekas et al., 2016;Slattery et al., 2017;Pepperell et al., 2018). Although research has identified a range of resilience factors for parents of autistic children, little is known regarding the impact of parent emotion regulation abilities on caregiver strain and wellbeing in this population. Emotion regulation as resilience Emotion regulation (ER) is an umbrella term that refers to the ability to monitor, evaluate and modify emotional reactions (Gross, 1998). In the general population of NT adults, difficulties with ER have been associated with a range of mental health concerns, including increased rates of anxiety and depression (Mennin et al., 2007;Etkin et al., 2010;Cludius et al., 2020). Additionally, ER abilities have been associated with indices of wellbeing in NT adults, with lower ER abilities predicting lower self-reported wellbeing (Haga et al., 2009;Mandal et al., 2011). Despite the research supporting the impact of ER abilities on individuals' mental health and wellbeing in the general population, few studies have examined these associations in parents. These studies find that ER abilities play an important role in supporting parents' mental health and wellbeing. In one recent study of caregivers of NT children during the COVID-19 pandemic, caregivers' ER abilities predicted their mental health 2 months later (Russell et al., 2022). In another study, researchers provided an online ER intervention to parents, targeting the use of adaptive ER strategies (Preuss et al., 2021). Findings revealed that parents who received the intervention had a significant decrease in parenting stress at follow-up in comparison to the wait-list control group (Preuss et al., 2021). With regard to parents of autistic children, even less is known. To our knowledge, one study to date has examined the association between ER and parenting stress in parents of autistic children. In this study, Hu et al. (2019) found a significant, negative association between parents' ER abilities and their self-reported parenting stress. Further research is needed to better understand the impact of ER abilities on mental health and wellbeing for parents of autistic children. Mothers and fathers of autistic children In studies of the general population of heterosexual couples, fathers play important, but distinct, and complementary roles to mothers in many families. However, fathers of children with developmental disabilities, including autism, have been studied infrequently regarding their roles and relationships within the family. 
In the studies that have been conducted, child characteristics, partner characteristics, and features of the marital relationship have been found to differentially affect mothers and fathers of children with autism or other disabilities (Bristol et al., 1988;Hartley et al., 2016). There is evidence, for example, that fathers are more negatively affected by their autistic child's behavioral concerns than are mothers (Davis and Carter., 2008). Moreover, it is likely that the psychological state of one parent may affect that of the other parent in two-parent families (Hastings, 2003). In fact, maternal symptoms of depression have been found to predict paternal psychological wellbeing in families with autistic children (Hartley et al., 2012). Research suggests that mothers and fathers also differ in their adaptability (Bendixen et al., 2011) and in the types of support they provide for their families (emotional vs. practical support, respectively; Seligman and Darling, 2009). There are inconsistent findings with respect to differences in caregiver strain, mental health, and wellbeing between fathers and mothers of autistic children. Studies examining mental health outcomes in parents of autistic children have largely found that mothers report higher levels of depression (Hastings et al., 2005;Ozturk et al., 2014;Foody et al., 2015;Cohrs and Leslie, 2017;Li et al., 2022) and anxiety (Hastings, 2003;Foody et al., 2015;Li et al., 2022) than do fathers of autistic children. Less consistent findings have been found regarding caregiver strain or stress, with some studies reporting higher rates of parenting stress in mothers as compared with fathers (Moes et al., 1992;Sharpley et al., 1997;Tehee et al., 2009;Dabrowska and Pisula, 2010;Falk et al., 2014). Other studies have reported no significant difference in parenting stress between mothers and fathers of autistic children (Hastings, 2003;Davis and Carter., 2008;Nikmat et al., 2008;Ozturk et al., 2014). These studies typically involve small samples thus, they have had limited statistical power to detect differences between mothers and fathers and are limited in potential generalizability. Moreover, studies involving fathers have not focused on the role of ER, or the potential protective factors associated with positive outcomes. Overall, the inconsistencies in these data demonstrate a need for further research into the unique role of mothers and fathers of autistic children. The current study This study was designed to leverage a large sample to explore the relationship between autistic children's behavioral characteristics and parents' psychosocial risk and resilience, as well as to evaluate the potential role of ER as a protective factor for parents. In addition, mothers and fathers from the same family are included in analyses, thereby allowing exploration of differences in their psychosocial risk and resilience. It was hypothesized that children's higher RRBs and behavioral concerns, and lower adaptive functioning abilities (i.e., tasks of daily living), would be associated with increased caregiver strain and adverse mental health. Additionally, it was hypothesized that higher ER abilities in parents would be associated with lower levels of caregiver strain and more positive mental health outcomes, and that ER would moderate the association between child factors and indicators of parents' mental health and wellbeing. 
Lastly, it was hypothesized that mothers in our sample would report significantly higher rates of caregiver strain, anxiety and depression than would fathers. SPARK cohort Beginning in April 2016, SPARK began a nationwide recruitment effort with 21 clinical sites (growing to include 31 in 2021) and an extensive social media campaign. Any individual living in the United States with a professional diagnosis of ASD (obtained from a provider or through school), along with their parents and an unaffected sibling, are eligible to participate in SPARK. Phenotypic data and biospecimens are collected remotely, with online access to the study protocol, making participation more accessible and convenient. Participants consent to share their de-identified data, and to be contacted for future ASD-related research studies for which they may be eligible. Additionally, participants may consent to contribute a saliva sample for genetic analysis and may opt to receive individual genetic results related to ASD, in the event that a primary genetic cause of ASD is identified. For a detailed description of genetic material collection, genomic analyses and return of results to participants see Feliciano et al.'s (2019) publication. SPARK participants are also asked to complete a battery of online questionnaires. SPARK research match The data for the current project was obtained through the SPARK Research Match program, which connects qualified members of the SPARK community with research studies, inviting them to volunteer as participants. Data were accessed by submitting an application to SFARI Base, describing the aims of the study, as well as a description of the inclusion criteria for the proposed project. The application was reviewed and approved by the SPARK Participant Access Committee (PAC), and participants who met study criteria were invited to participate in the current study. Data collection took place between January and February of 2021, during which time the participants completed a battery of online surveys regarding proband characteristics, in addition to psychological risk and resilience for both biological parents. Participants The overall SPARK sample included 106,577 probands, the parents of whom were invited to participate in the current study if they met the following criteria: (1) the proband was less than 18 years old and (2) both biological parents were available to participate. This resulted in a sample of 263 dyads (biological mother, biological father) that participated in the online survey study. Probands were largely male (77.9%) and had an average age of 7.37 years (SD = 3.92; range = 1-17 years). A majority of probands identified as White (63.9%), with the remainder identifying as Asian (1.1%), African American (4.2%) and Other (3.4%). A portion of the sample identified as having multiple races (9.5%), and 17.9% did not endorse a race. Lastly, 11.0% of the sample identified as ethnically Hispanic. Maternal and paternal demographics are presented in Table 1. Child measures Child Behavior Checklist (CBCL; Achenbach and Rescorla, 2001). The Child Behavior Checklist, now known as the Achenbach System of Empirically Based Assessments, is a parent report questionnaire that measures the presence of behavioral and emotional challenges in children. The most recent version of the CBCL includes two separate forms, one to be used with children between the ages of 1.5 and 5 years and the second to be used with children between the ages of 6 and 18. 
Both forms of the CBCL have been found to have strong internal consistency (α = 0.92-0.94) and test-retest reliability in the norming sample (r = 0.89-0.92; Achenbach and Rescorla, 2001). Additionally, both scales have been demonstrated to have acceptable construct and criterion validity (Achenbach and Rescorla, 2001). The assessment yields scores across 6 different scales (affective problems, attention-deficit/hyperactivity, anxiety, oppositional defiance, somatic problems, and conduct problems), and three composite scores (Internalizing problems, Externalizing problems and Total Problems). For the purposes of the current study, the Total Problems score was used as a measure of children's emotional and behavioral concerns (hereafter referred to as behavioral concerns for readability). The choice was made to exclude subscales from the analyses, as two different forms of the CBCL were used (1.5-5 and 6-18), which include different subscales. Repetitive Behavior Scale-Revised (RBS-R; Bodfish et al., 1999). The RBS-R is an informant-report instrument that measures the presence and severity of restricted and repetitive behaviors. This measure was completed by parents, who rated their child's behaviors on a 4-point Likert scale, ranging from "0" (behavior does not occur) to "3" (behavior is a severe problem). The measure was found to have good inter-rater reliability (r = 0.88) and test-retest reliability in the norming sample (r = 0.71; Bodfish et al., 1999). The RBS-R yields an overall total raw score, based on summed item scores across subscales, which was used in this study as a measure of individuals' restricted, repetitive behaviors. Vineland Adaptive Behavior Scale-Third Edition (VABS-3; Sparrow et al., 2016). The VABS-3 is considered the gold-standard measure of adaptive functioning for individuals from birth to 90 years old. Parents in the current study completed the caregiver questionnaire version of the assessment, which has been found to have strong internal consistency (α = 0.96-0.99) and acceptable test-retest reliability in the norming sample (r = 0.80-0.93; Sparrow et al., 2016). The measure has been shown to correlate with other measures of adaptive functioning (Sparrow et al., 2016), including the Bayley Scales of Infant and Toddler Development (Bayley-III;Bayley, 2006) and the Adaptive Behavior Assessment System (ABAS-3; Harrison and Oakland, 2015). The VABS-3 provides composite standard scores (M = 100, SD = 15) across four domains (Communication, Daily Living Skills, Socialization and Motor), in addition to an overall Adaptive Behavior Composite (ABC). The ABC standard score was used for the current study as a measure of overall adaptive functioning. We did not include analyses of the subdomains of the VABS-3 because 3 of the 4 subdomains are not normed for children under the age of 3 years, which would exclude part of our sample. Parent measures Caregiver Strain Questionnaire-Short Form (CGSQ-SF; Brannan et al., 1997). The CGSQ-SF is a self-report measure that assesses parenting strain in the previous month. It consists of 10 items derived from the original long form, measuring strain across two subscales: Objective strain (6 items) and Subjective Internalized strain (4 items). Response options are in the form of a 5-point Likert scale, ranging from "Not a problem" (1) to "Very much a problem" (5). The CGSQ-SF Total Score, a score derived from the mean of all items, was used in this study to characterize overall caregiving strain.
The CGSQ-SF has strong psychometric properties, with an internal consistency reliability coefficient of 0.90 (Brannan et al., 2012). Patient Health Questionnaire (PHQ-9; Kroenke and Spitzer, 2002). The PHQ-9 is a self-report questionnaire measuring depression, adapted from the depression module of the PRIME-MD diagnostic instrument for common mental disorders. It consists of 9 items, each of which represents one of nine DSM-IV criteria for major depression. Item responses are measured using a 3-point Likert scale, ranging from "Not at all" (1) to "Nearly every day" (3). The PHQ-9 was found to have strong internal reliability (0.86-0.89; Kroenke and Spitzer, 2002) and test-retest reliability in the norming sample (0.84; Kroenke and Spitzer, 2002). It was also found to have strong construct and criterion validity (Kroenke and Spitzer, 2002). The measure yields a total score that represents the severity of respondents' depression symptoms. Generalized Anxiety Disorder (GAD-7; Spitzer et al., 2006). The GAD-7 is a self-report questionnaire that assesses symptoms of generalized anxiety, asking participants how often in the previous 2 weeks they have been bothered by anxiety symptoms. The measure consists of 7 items, scored on a 4-point Likert scale ranging from "Not at all" (0) to "Nearly every day" (3). The measure was found to have strong psychometric properties, with good internal consistency (Cronbach α = 0.92; Spitzer et al., 2006) and test-retest reliability in the norming sample (ICC = 0.83; Spitzer et al., 2006). Additionally, this measure was found to have strong evidence of criterion, construct and factorial validity (Spitzer et al., 2006). The measure yields a total score between 0 and 21, with higher scores indicating greater anxiety symptomatology. Wellbeing Scale (WBS; Ryff and Keyes, 1995;Keyes et al., 2002). A modified version of Ryff's Scales of Psychological Well Being, the Wellbeing Scale includes a total of 18 items across 6 aspects of wellbeing: self-acceptance, autonomy, environmental mastery, purpose in life, positive relations with others, and personal growth. Responses are measured using a 7-point Likert scale ranging from "Strongly agree" (1) to "Strongly disagree" (7). While there have been mixed findings on the psychometrics of the modified form (Ryff and Keyes, 1995;Springer and Hauser, 2006), it has been widely used in the literature examining wellbeing in diverse samples (Clarke et al., 2001;Sagone and De Caroli, 2014;Khanjani et al., 2018). The measure yields a total mean score, such that a higher score reflects greater wellbeing. Barkley Deficits in Executive Functioning Scale (BDEFS; Barkley, 2011b). The BDEFS is a self-report measure of individuals' executive functioning abilities. It comprises 89 items that are answered using a 4-point Likert scale ranging from "1" (rarely or not at all) to "4" (very often). It consists of items across five subscales (self-management of time, self-organization/problem-solving, self-restraint, self-motivation and self-regulation of emotions). The measure is reported to have excellent internal consistency in the norming sample (α = 0.92; Barkley, 2011b). For the purposes of the current study, we present scores from the self-regulation of emotions subscale (13 items), as a measure of participants' ER.
Included in the self-regulation of emotions subscale are items designed to measure participants' ability to regulate negative emotions, such as "Have trouble calming myself down once I am emotionally upset" and "Unable to manage my emotions in order to accomplish my goals successfully or get along with others." Scores are calculated as a sum of scores across items, such that higher scores indicate poorer ER. Data analysis Preliminary analyses were conducted to provide sample demographics and summary statistics (means and standard deviations) of all variables of interest. Data were analyzed to confirm that all assumptions of linear mixed effects models were met. We used linear mixed effects models to assess the contribution of child characteristics to caregiving strain and mental health. Each parental outcome (CGSQ-SF, PHQ, GAD, and WBS) was modeled separately. We first modeled each parental outcome (CGSQ-SF, PHQ, GAD, and WBS) as a function of child characteristics (CBCL, VABS, and RBS-R), parent (mother/father) and all two-way interactions between child characteristics and parent. A random effect was included for each child to account for within-child correlation. Because four outcomes were analyzed, a Bonferroni corrected alpha level of 0.0125 was used to determine significance. If no interactions were significant, main effect only models were fit; otherwise, all interactions were retained in the model. We then tested the possible moderating effect of parents' ER abilities by including BDEFS-ER as a main effect and two-way (BDEFS-ER * Child characteristic) and three-way interactions (BDEFS-ER * Parent * Child characteristic). All predictors and the moderator were centered and scaled to a mean of 0 and standard deviation of 1. Moderation models were also evaluated at a Bonferroni significance level of 0.0125. Main effects only models were fit if no interactions were significant, but all interactions were retained if some were significant. Analyses were conducted using R Statistical Software version 4.2.0. Child and parent characteristics On average, children in this sample had a mean Adaptive Behavior Composite Standard Score (ABC-SS) that fell in the "moderately low" range (M = 71.53, SD = 15.18), and a mean Total Problems score in the "clinical" range (M = 65.63, SD = 8.94). Additionally, children in the sample had a mean score of 33.85 on the RBS-R (out of a possible score of 129), with a large amount of variability (SD = 19.66). Parents in this sample reported symptoms of mild depression and mild anxiety, as measured by the PHQ-9 and GAD-7, respectively. Additionally, both mothers and fathers in this sample had a CGSQ total score that fell in the "medium" range (see Table 2). Associations between parental outcomes and child characteristics Two child characteristics, adaptive functioning (VABS-3) and repetitive behaviors (RBS-R), significantly predicted parents' caregiver strain (CGSQ-SF), such that higher caregiver strain was associated with children's lower adaptive functioning (β = −2.57; p < 0.001) and higher repetitive behaviors (β = 2.12; p < 0.001; see Table 3). These relationships were not found to differ significantly by parent (Supplementary Table 1). Additionally, parents' ER abilities moderated the effect of children's RRBs on caregiving strain (β = 1.32; p = 0.002), such that parents with higher ER abilities (indicated by lower scores on the BDEFS-ER) experienced smaller increases in caregiver strain for a given increase in a child's RRBs.
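As an aside before the remaining results, the following minimal sketch illustrates the modelling strategy described under Data analysis above: a per-child random intercept, parent-by-child-characteristic interactions, and BDEFS-ER added as a moderator. It uses Python/statsmodels purely for illustration (the authors fitted their models in R), and all variable and file names are hypothetical placeholders rather than the study's actual dataset or code.

```python
# Illustrative only: linear mixed-effects models with a random intercept per
# child, mirroring the two-step analysis described above. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("parents_long.csv")  # one row per parent; 'child_id' shared within a dyad

# Centre and scale the predictors and the moderator (mean 0, SD 1), as in the paper
for col in ["cbcl_total", "vabs_abc", "rbsr_total", "bdefs_er"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Step 1: child characteristics, parent (mother/father), and their two-way interactions
step1 = smf.mixedlm("cgsq ~ (cbcl_total + vabs_abc + rbsr_total) * parent",
                    data=df, groups=df["child_id"]).fit()

# Step 2: add BDEFS-ER as a moderator, with two- and three-way interactions
step2 = smf.mixedlm("cgsq ~ (cbcl_total + vabs_abc + rbsr_total) * parent * bdefs_er",
                    data=df, groups=df["child_id"]).fit()

print(step2.summary())  # coefficients judged against the Bonferroni-corrected alpha of 0.0125
```

Each of the four outcomes (CGSQ-SF, PHQ-9, GAD-7, WBS) would be modelled separately in this way, which is why the Bonferroni-corrected threshold of 0.0125 is applied.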
Although not statistically significant at the Bonferroni corrected level of 0.0125, there was some indication that the moderating effect of BDEFS-ER was considerably less for mothers than fathers (β = 1.49; p = 0.016). Similarly, parents' ER abilities also moderated the effect of children's adaptive functioning abilities on parents' caregiver strain (β = 1.22; p = 0.011), such that parents with higher ER abilities experienced lower levels of caregiver strain in response to their children's lower adaptive functioning abilities. See Figures 1, 2 and Table 4 for more information (Figure 1: ER moderating the effect of RRBs on CGSQ; Figure 2: ER moderating the effect of VABS on CGSQ). For the models of parents' depression, anxiety and wellbeing, RBS-R was a significant predictor (Table 3), with increases in RRBs associated with higher anxiety (β = 0.92; p = 0.001) and depression (β = 0.97; p = 0.002), and lower wellbeing (β = −0.13; p = 0.005), although these results did not remain significant upon the inclusion of BDEFS-ER in the models. Additionally, these relationships did not differ significantly between mothers and fathers (Supplementary Table 1). Parents' ER abilities significantly predicted parents' depression (β = 3.23; p < 0.001) and anxiety (β = 3.27; p < 0.001), such that parents with higher ER abilities had fewer symptoms of depression and anxiety. Additionally, a significant effect for parents' ER was seen for parents' wellbeing (β = −0.13; p = 0.005), with higher ER predicting higher wellbeing. See Table 5 for details. There was no evidence of different effects of ER on depression, anxiety and wellbeing between mothers and fathers (Supplementary Table 2). Lastly, a significant main parent effect was detected, such that mothers reported experiencing higher caregiver strain (β = 2.06; p < 0.001) and anxiety (β = 1.66; p < 0.001) than fathers. See Table 3 for further details on all models (table note: Bonferroni-corrected significance levels were used to control the Type I error rate across the four outcome models; *p < 0.0125, **p < 0.0025, ***p < 0.00025). Discussion The current study examined contributors to, and potential mitigators of, strain, wellbeing, and mental health outcomes in parents of autistic children. Consistent with previous findings in the literature, parent self-reported caregiving strain was associated with children's adaptive functioning abilities and restricted and repetitive behaviors (RRBs) (Bishop et al., 2007).
Altogether, our findings are consistent with the extant literature regarding the relationship between children's RRBs, adaptive functioning abilities and caregiving strain. The current study did not find a significant association between children's behaviors and caregiving strain. Initially, this lack of association appears at odds with previous studies that have shown that children's behaviors and autism symptomatology are associated with caregiving strain in parents of autistic children (Davis and Carter, 2008;Lecavalier et al., 2006). However, the current study differs from previous studies by analyzing children's RRBs alongside children's behaviors as a predictor of caregiving strain. It is possible that the inclusion of children's RRBs captured variance that was attributed to children's behaviors in previous studies. Therefore, there is a need to better understand the ways different categories of behaviors may influence caregiver strain and, specifically, how autistic children's RRBs may be perceived as behavioral concerns by caregivers. The use of parent-report measures of child factors in this study may have also contributed to differences in associations between children's behaviors and parents' caregiver strain. Future studies would benefit from the inclusion of observational measures of children's behaviors, alongside caregiver report measures, to offer multiple vantages of children's behaviors. Such an approach would allow for a more detailed understanding of the specific behaviors which appear to impact caregiver strain. Additionally, physiological measures of stress (such as heart rate variability) have been found to provide a more objective method for studying individual differences and underlying biological influences in stress reactivity and ER (Factor et al., 2017). This would inform possible supports to help families address these challenges. In the current study, child factors were significantly associated with parents' caregiver strain, even after the inclusion of parents' ER in the model. This finding is consistent with previous studies in which child-related factors have been consistently associated with caregiver strain in parents of autistic children (Gabriels et al., 2005;Lecavalier et al., 2006;Bishop et al., 2007;Falk et al., 2014). Child factors, specifically children's RRBs, were also associated with parents' anxiety, depression and wellbeing, but these findings were no longer significant upon the inclusion of parents' ER in the models. Although the present findings differ from previous studies that found a positive relationship between child factors and parents' mental health and wellbeing (Hastings et al., 2005;Ozturk et al., 2014;Foody et al., 2015;Cohrs and Leslie, 2017;Li et al., 2022), they are consistent with other studies' findings that parent-related factors (e.g., increased social support, parent cognition, parents' internal locus of control) have been largely associated with positive mental health in parents of autistic children (Bishop et al., 2007;Falk et al., 2014;Bitsika and Sharpley, 2016). This difference may be in part due to the distinction between caregiving strain and mental health. Caregiving strain, as measured by the CGSQ-SF, refers to "the demands, responsibilities, difficulties" (Brannan et al., 1997) of caregiving for an autistic child.
As such, this measure reflects the amount of daily strain experienced by parents, which may be more related to child-level factors such as adaptive functioning and behavioral concerns. In contrast, parents' mental health refers to a more global experience of psychological distress, such as the experience of symptoms of anxiety and depression, which may be more related to parent-related factors such as parental cognition. Altogether, our findings underscore the need to consider both child- and parent-level factors in examining contributors to mental health and wellbeing in parents of autistic children. The ability to regulate emotions also emerged as a significant factor in determining parents' reports of caregiving strain and mental health. In fact, parents' ability to regulate emotions was the only factor to predict all the parental outcomes: caregiver strain, depression, anxiety, and wellbeing. Additionally, parents' ER abilities moderated the impact of children's RRBs and adaptive functioning challenges on caregiving strain. Although the role of ER has not been specifically studied in parents of autistic children, it has been indirectly addressed in the literature within the related construct of parents' coping skills (i.e., the behavioral and cognitive efforts employed to reduce distress; Lazarus and Folkman, 1984). From a theoretical perspective, ER and coping strategies both refer to processes aimed at fostering positive emotional states and regulating negative ones (Gross, 2002, 2015). The findings of the current study corroborate previous studies emphasizing the importance of positive coping strategies that address emotion dysregulation, such as cognitive reappraisal or reframing, for reducing stress and psychological distress in parents of autistic children (Hastings et al., 2005;Zablotsky et al., 2013;Shepherd et al., 2018). Conversely, certain coping strategies, such as avoidance or escape behaviors, have been associated with higher levels of stress (Dunn et al., 2001;Hastings et al., 2005). Our findings contribute to the extant literature by examining parents' overall ability to regulate emotions, as opposed to the utility of specific coping strategies, in relation to the experience of caregiver strain and mental health and wellbeing. Our findings highlight the impact of ER abilities on the experience of caregiver strain and mental health in parents of autistic children and identify parents' ER abilities as a possible intervention target. The importance of parent ER underscored in the present study also suggests that more environmental supports for parents of autistic children may be critical to providing space and time for parents to regulate complex emotions. Lastly, a significant difference was seen in the experience of caregiving strain and anxiety reported by mothers and fathers, such that mothers reported higher levels of caregiving strain and anxiety than did fathers. These findings are consistent with previous studies reporting significantly higher levels of caregiver strain and anxiety for mothers than fathers (Hastings, 2003;Dabrowska and Pisula, 2010;Foody et al., 2015;Vitale et al., 2022). One possible explanation for this discrepancy is that mothers tend to be more involved in the day-to-day management of family life (Konstantareas and Homaditis, 1992;Benson et al., 2008). In our sample, almost half of mothers identified as stay-at-home caretakers, in comparison to less than 5 percent of fathers.
Research has found that greater involvement in managing children's dayto-day activities is associated with higher levels of caregiver strain (Tehee et al., 2009), which may explain the higher incidence of caregiver strain in mothers. Differences have also been reported in the sources of caregiving strain for parents of autistic children, with mothers' caregiver strain being predicted by daily living skills (e.g., sleeping, eating) and dysregulation, whereas externalizing behaviors have been found to predict fathers' caregiver strain (Davis and Carter., 2008). These differences may have contributed to the sex differences seen for caregiving strain and anxiety. In sum, these findings contribute to the extant literature by replicating previous findings in a large sample. Further examination is needed to better understand the child and parent factors that contribute to different experiences of caregiver strain and anxiety in mothers and fathers of autistic children. Future directions and limitations The current study presents findings from a large, national sample of autistic children and their biological parents. These findings are the first to shed light on the importance of ER on the experience of caregiver strain and adverse mental health in parents of autistic children. Despite the considerable strengths of this study, some limitations should be addressed. First, the sample included in this study was majority married, White, with a high SES, which affects the generalizability of the findings. Similarly, parents in this study were in heterosexual relationships, with almost half of mothers identifying as full-time care providers, which affects the generalizability of these findings to other family structures. Additionally, findings are based solely on parent-reports of their own caregiver strain, mental health, and ER, as well as their children's autism symptomatology, adaptive functioning, and behaviors. Future research should consider other methods of measurement (e.g., direct observation, clinical assessment) when assessing the associations between these constructs. For example, studies examining the association between caregiver strain and children's RRBs could benefit from the use of observational measures such as the ADOS-2 (Lord et al., 1999), the gold standard for autism diagnosis. Further, future research would benefit from the inclusion of a measure of social communication abilities in autistic children, to further parse the contribution of each core feature of autism. Lastly, while this study begins to examine the ways that ER abilities impact caregiver strain and mental health in parents of autistic children, this area of study is new and further examination is warranted. Conclusion This study is the first to examine how parents' ER abilities moderate the association between children's behaviors and parents' experience of caregiving strain and mental health. Our findings suggest that parents may benefit from supports to improve their ER abilities and environmental supports to provide parents time and space for emotion management, which could improve their ability to cope with day-to-day stressors associated with caregiving for an autistic child. Data availability statement The original contributions presented in this study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author. Ethics statement The studies involving human participants were reviewed and approved by the UCLA IRB Board. 
The patients/participants provided their written informed consent to participate in this study. Author contributions AD, RF, AS, LV, AW, LA, and AG conceptualized the study. ST and MP conducted the data analysis. AD and RF wrote the manuscript. All authors read and provided feedback on the final manuscript. Funding This work was supported through grants from SFARI (390314, AG) and the National Institutes of Health, National Center for Advancing Translational Sciences (UL1 TR001860).
2023-05-16T13:13:56.264Z
2023-05-15T00:00:00.000
{ "year": 2023, "sha1": "92cd463867d4482c6ed1eb784af5304225a42f6c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "92cd463867d4482c6ed1eb784af5304225a42f6c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
255225995
pes2o/s2orc
v3-fos-license
Surgery for pilonidal sinus disease in Norway: training, attitudes and preferences—a survey among Norwegian surgeons Background Pilonidal sinus disease (PSD) is frequently observed in young adults. There is no wide consensus on optimal treatment in the literature, and various procedures are used in clinical practice. The objective of this study was to assess current practice, experience, training, and attitudes towards PSD surgery among Norwegian surgeons. Methods An online survey on PSD surgery was created and sent to all members of the Norwegian Surgical Association. Categorical data were reported as frequencies and percentages. Results Most currently practicing Norwegian surgeons used the Bascom’s cleft lift (93.2%) or minimally invasive procedures (55.4%). Midline excisions with primary closure (19.7%) or secondary healing (22.4%) were still used by some surgeons, though. Most surgeons had received training in PSD surgery supervised by a specialist, but only about half of them felt sufficiently trained. The surgeons generally performed few PSD operations per year. Many considered PSD as a condition of low surgical status and this patient group as underprioritized. Conclusions Our findings suggest that PSD surgery in Norway has been moving away from midline excisions and towards off-midline flap procedures and minimally invasive techniques. PSD and its treatment have a low status among many Norwegian surgeons. This study calls for attention to this underprioritized group of patients and shows the need for consensus in PSD treatment such as development of national guidelines in Norway. Further investigation on training in PSD and the role of supervision is needed. have shown various long-term results depending on the choice of procedure, with higher recurrence rates for open healing and midline primary closure compared to off-midline procedures [9]. Nevertheless, there is no consensus on optimal treatment in literature. No surveys of current practice in Norway have been published in peerreviewed literature. Furthermore, previous studies on PSD have mainly focused on outcomes of surgical techniques, but less is known on attitudes towards PSD and training in surgical treatment for PSD. Brown and Lund reported frequent use of simple excision techniques in the UK despite prolonged recovery, and suggested lack of interest in PSD surgery as a possible reason [10]. Similar findings from Denmark also indicated that surgeons may regard treatment of PSD as a low status activity [11]. Understanding uncertainties in current practice, experiences, and attitudes towards PSD surgery among Norwegian surgeons might impact training in PSD surgery as well as the development of national guidelines. Methods A questionnaire was created for surgeons on the topic of PSD surgery by using nettskjema.no, a survey solution developed and hosted by the University of Oslo, Norway (nettskjema@usit.uio.no). It included questions about demographics, surgical training, and experience, preferred surgical procedures and attitudes towards PSD. The survey was conducted from June 1st to August 31st, 2021. The Norwegian Surgical Association (NKF) forwarded an e-mail containing a web-link to the questionnaire to all its members on behalf of the research team. A follow-up reminder was sent 2 weeks later. NKF is an umbrella association with members from all surgical specialties in Norway, both residents, specialists, and retirees. 
According to personal communication with NKF, the proportion of gastrointestinal surgeons, general surgeons and other surgical subspecialities among NKF members is 19.1%, 40% and 40.9%, respectively. No information that could reveal personal details about the participants was collected. The Norwegian Centre for Research Data (NSD) assessed the study as not being subject to notification. This research project was designated as exempt from ethical review from the Regional Committee for Medical and Health Research Ethics (REC Central). Completion of the survey was considered to imply informed consent. Data analysis was performed using SPSS Statistics version 27 (IBM corp., Armonk, NY, USA). Categorical data were reported as frequencies and percentages. Results Among 1699 invited surgeons, 396 consented to participation and completed at least parts of the survey (response rate: 23.3%). Of those, 70 participants did not perform PSD surgery, mostly because their respective field of surgery did not include PSD surgery (81.4%), but also due to lack of knowledge (5.7%) and/or experience (2.9%) (not shown). The remaining 326 participants who reported performing PSD surgery (currently or previously) proceeded with the rest of the survey and comprised our study population (Fig. 1). Table 1 summarizes the respondents' characteristics. Most respondents were specialists (48.6%), had more than 10 years of experience (58.5%), and worked at a public hospital (86%). All regions of Norway were represented with a predominance of South-Eastern Norway (49.8%), which reflects the population distribution in Norway. Among the 134 respondents who performed PSD surgery in current practice, 47% were residents, 57.2% had less than 10 years of experience, and 65.7% performed fewer than 10 PSD operations per year. Surgical training Most respondents stated that they had received training in PSD surgery under the supervision of a specialist (83.3%) and/or a peer (28.1%). A total of 47.1% of the respondents considered themselves as sufficiently trained in PSD surgery. More than half of the respondents (61.3%), most of them specialists, instructed others in performing PSD surgery. Only 34.5% of the instructors reported following up their candidates consistently (Table 2). Table 3 presents respondents' attitudes and experiences according to their training in PSD surgery (supervision by a specialist vs. no specialist supervision). A greater proportion of the group without specialist supervision (43.1%) did not feel sufficiently trained compared to the group with supervised training by a specialist (30.8%). Compared to respondents supervised by specialists, those without specialist supervised training were less likely to state that PSD was discussed often at their department (10% vs. 22.7%) or that their department had a standard method for PSD surgery (37.7% vs. 56.2%) (not shown). In addition, respondents without specialist supervised training were more likely to experience recurrence and/or prolonged wound healing after PSD surgery (51% vs. 37.4%). Attitudes Most respondents agreed that patients with PSD are often underprioritized (71.4%), that PSD and its treatment most likely have low status compared to other surgical diseases among surgeons in general (73.5%), and at their department (52.9%). Still, respondents generally stated that performing PSD surgery felt meaningful (82.8%) and disagreed with the notion that PSD surgery is simple, not requiring special training (86.3%) (Table 4).
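Although the authors tabulated their data in SPSS, the hedged pandas sketch below illustrates how categorical survey results like those above (item frequencies, percentages, and the supervision-by-training cross-tabulation) are typically produced; the file and column names are hypothetical placeholders, not the study's actual dataset or analysis code.

```python
# Hypothetical illustration of the descriptive tabulations reported above.
import pandas as pd

survey = pd.read_csv("psd_survey.csv")  # one row per respondent

# Frequencies and percentages for a single categorical item
counts = survey["preferred_procedure"].value_counts()
summary = pd.DataFrame({"n": counts, "percent": (100 * counts / counts.sum()).round(1)})
print(summary)

# Cross-tabulation: specialist-supervised training vs. feeling sufficiently trained,
# expressed as row percentages (cf. the 43.1% vs. 30.8% comparison above)
xtab = pd.crosstab(survey["specialist_supervision"], survey["sufficiently_trained"],
                   normalize="index") * 100
print(xtab.round(1))
```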
Discussion In the present study, we investigated preferences, training, and attitudes towards PSD surgery among Norwegian surgeons. Most surgeons prefer Bascom's cleft lift or minimally invasive procedures, although midline excisions are sometimes still performed. Only about half of the surgeons feel sufficiently trained in PSD surgery, and many consider PSD a condition of low surgical status and the patients as underprioritized. Currently, only Italy, Germany, and the US have developed national guidelines for PSD treatment [12][13][14]. There is wide agreement that midline excisions with primary closure should be abandoned due to increased recurrence and wound dehiscence [12][13][14][15][16]. Yet, midline closure is still used at least sometimes by approximately 20% of the Norwegian surgeons in our study. Earlier studies from the UK and Ireland, Switzerland, Austria, and Denmark have reported even higher rates of midline closure, but this practice may have changed over the last years in favor of off-midline closure [11,17,18]. Our findings of lower rates of midline closure among currently practicing surgeons compared to surgeons no longer performing PSD surgery may indicate a decrease over the last years. A similar procedure is midline excision with secondary healing. A recent meta-analysis stated that this technique can be justified despite a 10-year recurrence rate of 20% [19]. However, despite its technical simplicity, open wound healing is associated with extended time off work and decreased postoperative quality of life [15,20]. German and Italian guidelines are critical of open healing, while US guidelines present this as an option [12]. We observed that Norwegian surgeons tend to agree more with German and Italian guidelines, as use of midline excision with secondary healing seems to decline. Off-midline procedures, such as Bascom's cleft lift, Karydakis flap, and Limberg flap, are recommended because of better postoperative outcomes and lower recurrence rates [10, 12-14, 20, 21]. It has not been possible to identify the single best off-midline procedure [22][23][24][25]. While the Limberg flap and the Karydakis flap are preferred in many countries, we found that they are seldom used in Norway, where nine out of ten currently practicing surgeons reported using the Bascom's cleft lift [16,17,26,27]. A majority of the participating surgeons in our study reported using flap procedures. This contrasts with findings from a recent study that reported infrequent use of flap techniques (22.6%) by Dutch surgeons [28]. It seems that most hospitals in Norway have followed the example of the University Hospital of North Norway, Tromsø, which introduced this technique as their standard procedure for all symptomatic, chronic PSD as early as 2002 [29]. Minimally invasive procedures, such as the modified Lord-Millar procedure, Gips' procedure, and Bascom's pit picking, show various recurrence rates but have the advantages of less pain, faster wound healing, and shorter time off work [12,30,31]. Minimally invasive techniques are recommended for limited disease without previous failed surgery [12,13]. Similar to a study from Switzerland, we found that minimally invasive procedures were widely used in Norway [17]. The trend of fewer midline procedures in favor of more off-midline flap surgery and minimally invasive procedures in Norway is welcome, and comparable to recent treatment strategies for PSD in Switzerland and Austria [17].
On the other hand, endoscopic procedures are seldom used, although studies have suggested that these are effective treatment modalities [8,32]. One in six surgeons in our study has not been supervised by a specialist during his/her training. In addition, only one third of the instructors stated consistent follow up of their trainees. Accordingly, a survey in the UK and Ireland reported that 29% of the respondents had not received formal training in the surgical procedure of their choice in PSD management [18]. Similar to this study, we were not able to further assess the extent and quality of supervision. Only about half of the respondents considered themselves sufficiently trained in PSD surgery, especially those who had not been supervised by a specialist. This may suggest limited focus on PSD surgery and/ or inadequate quality of supervision. Similarly, a recent survey among Danish surgical residents found that selfperceived readiness to perform surgery after completion of the surgical residency program was significantly associated with the level of supervision [33]. We found that surgeons supervised by a specialist during their training were less likely to experience recurrence and/or prolonged wound healing after PSD surgery. This may indicate that increased supervised training in PSD surgery leads to fewer recurrences. However, our findings do not reflect an objective difference in recurrence rate and/or prolonged wound healing, but rather respondents' subjective estimation. These differences could have been influenced by other aspects, such as patient selection, experience, follow-up time, and choice of procedure. In addition to adequate supervision, surgical outcome depends on surgeon's volume [34]. Hopper et al. suggested that learning curves exist for all surgical procedures [35]. Wysocki has shown that competence using the modified Karydakis flap was achieved after case 10 to 21, and proficiency after case 30 to 51 [36]. Comparable learning curves may exist for the other treatment options for PSD. Currently, the specialization in general or gastrointestinal surgery in Norway does not require a specific number of PSD procedures performed. Like numbers from other countries, a large proportion of the surgeons performing PSD surgery in current practice performed fewer than ten PSD operations per year [11,27,28]. This indicates that it may take longer time to achieve competence in PSD surgery. PSD may be better addressed by a few interested surgeons with a larger surgical volume, as others have also suggested [37]. Album and Westin described a prestige rank order of diseases, where slowly developing diseases with long duration and diseases in the lower part of the body are given low prestige scores [38]. This agrees with our findings which showed that a high proportion of surgeons considered PSD as a low-status disease and the patient group as underprioritized. Similar to a study from Denmark, we found that PSD surgery was often performed by less experienced surgeons in an earlier phase of their career [11]. PSD was also seldom discussed among colleagues, especially in the group without specialist supervision in PSD surgery. Others have also suggested a lack of experience and/or interest in PSD among surgeons, further underlining the low prestige of PSD [10]. Few studies have investigated surgeons' preferences in PSD surgery, and even fewer have examined surgical training and the status of the disease. In Norway, no similar study has been published. 
The present survey was answered by Norwegian surgeons treating PSD either previously or in current practice, giving the opportunity to compare procedures and attitudes among surgeons in the past and now. One drawback of the survey is the lack of distinction among respondents who no longer perform PSD surgery. This group includes both working surgeons and retirees, and we do not know when these surgeons stopped performing PSD surgery. When analyzing the results, it was clear that some of the questions were open to interpretation by the previously practicing group. The overall response rate of the survey was 23.3%, which is slightly lower than typical response rates around 30-35% in previous web-based surveys answered by physicians [39,40]. In our study, all members of the Norwegian Surgical Association were invited to participate. The association has members from all surgical specialties in Norway. Approximately 40% of the invited members do not perform PSD surgery as part of their specialty, which can explain the low response rate. However, our study population was representative with respect to geographical distribution, work position, and level of experience. Response rates may have been higher among surgeons with a greater interest in PSD surgery, possibly leading to selection bias. Interested surgeons may be more likely to follow evidence-based treatment strategies, and this can potentially contribute to an overestimation of the proportion of Bascom's cleft lift and minimally invasive procedures in our study. Conclusions Our findings suggest that PSD surgery in Norway has been moving away from midline excisions and towards off-midline flap procedures and minimally invasive techniques, with the Bascom's cleft lift being the most commonly performed procedure. Nevertheless, the midline closure is still used too often considering the evidence supporting better treatment options. Surgeons generally perform few PSD operations per year, and only about half of the surgeons feel sufficiently trained in PSD surgery. Further investigation on training in PSD and the role of supervision is needed. PSD and its treatment have a low status among many Norwegian surgeons. This study calls for more attention to this less prioritized group of patients and shows the need for consensus in PSD treatment such as development of national guidelines in Norway.
Trans-national conservation and infrastructure development in the Heart of Borneo The Heart of Borneo initiative has promoted the integration of protected areas and sustainably-managed forests across Malaysia, Indonesia, and Brunei. Recently, however, member states of the Heart of Borneo have begun pursuing ambitious unilateral infrastructure-development schemes to accelerate economic growth, jeopardizing the underlying goal of trans-boundary integrated conservation. Focusing on Sabah, Malaysia, we highlight conflicts between its Pan-Borneo Highway scheme and the regional integration of protected areas, unprotected intact forests, and conservation-priority forests. Road developments in southern Sabah in particular would drastically reduce protected-area integration across the northern Heart of Borneo region. Such developments would separate two major clusters of protected areas that account for one-quarter of all protected areas within the Heart of Borneo complex. Sabah has proposed forest corridors and highway underpasses as means of retaining ecological connectivity in this context. Connectivity modelling identified numerous overlooked areas for connectivity rehabilitation among intact forest patches following planned road development. While such ‘linear-conservation planning’ might theoretically retain up to 85% of intact-forest connectivity and integrate half of the conservation-priority forests across Sabah, in reality it is very unlikely to achieve meaningful ecological integration. Moreover, such measure would be exceedingly costly if properly implemented–apparently beyond the operating budget of relevant Malaysian authorities. Unless critical road segments are cancelled, planned infrastructure will fragment important conservation landscapes with little recourse for mitigation. This likelihood reinforces earlier calls for the legal recognition of the Heart of Borneo region for conservation planning as well as for enhanced tri-lateral coordination of both conservation and development. Introduction Road infrastructure expansion across the Global South is increasingly recognized as a key factor in regional conservation and development planning [1][2][3], on par with demographic growth, urbanization, and climate change. Globally, road length is projected to increase~20-60% by 2050, with the vast majority anticipated in the Global South [4,5]. Such increases reflect high demographic and economic growth, the devolution of governance to local levels where road building is favored [6,7], and the advance of infrastructure mega-projects to open underdeveloped regions to global markets [8][9][10]. Infrastructure mega-projects in particular are expected to encroach upon intact 'wilderness' areas in many regions [4], as typified by the Chinese Belt and Road Initiative [10,11] and economic-corridor schemes in eastern Indonesia [12]. Globally, the proliferation of infrastructure mega-projects in the Global South coincides with an increasing prioritization of regional-scale integrated conservation. Internationally, parties to the Convention on Biological Diversity (CBD) have committed to 17% national coverage of "ecologically representative and well-connected systems of protected areas" by 2020 [13], an increase of 7% relative to an earlier CBD target. Across the Global South, Juffe-Bignoli et al. [14] identify shortfalls of 3-5% relative to the current target for all regions but Latin America, while Saura et al. [15] indicate consistently greater shortfalls of 2-15%, particularly in Asia. 
However, in Sabah, Malaysia, the focus of this study, an ambitious embrace of the CBD via the aligned Sabah Biodiversity Strategy [16] is fulfilling various CBD targets. Prominent goals within this Strategy include expanded protected-area coverage to >20%, the protection of key habitats outside of protected areas via enhanced forest connectivity, and the conservation of biodiversity-rich landscapes via cooperation with neighboring countries and states [16]. Indeed, for both Sabah and the Global South generally, international cooperation is increasingly necessary to realize enhanced regionally-integrated conservation [15,[17][18][19]. Correspondingly, the number of trans-boundary conservation areas has more than tripled internationally during the last three decades [20]. Nonetheless, achieving enhanced protectedarea coverage while neglecting losses to regional connectivity posed by infrastructure megaprojects will prove insufficient to maintain biodiversity and ecosystem resilience [21]. The Heart of Borneo initiative (HoB) is indicative of the potential of regionally-integrated conservation schemes to enhance national protected-area coverage. Established in 2007, the HoB formalized cooperation between Malaysian Borneo (Sabah and Sarawak states), Indonesian Borneo (four provinces of Kalimantan), and Brunei to integrate and enhance a 22-million hectare (Mha) trans-boundary network of protected areas (PAs), production forests, and other sustainable forest uses for mutual conservation benefit (Fig 1). The HoB has been particularly fruitful in Sabah, where it covers 54% of the state's territory and is acknowledged within the Sabah Biodiversity Strategy [16]. There, protected areas (PAs) have since expanded from 12% to 26% of the state's extent, in keeping with the CBD targets [22]; >0. 15 Mha of forests have been restored [23]; reduced impact logging has been adopted in all logging reserves (cf. [24]) (S1 Fig); and PA connectivity has increased both locally and regionally [25]. Hence, for Malaysia as a whole, inter-connected PAs account for 8-12% of its territorial extent (accounting for inherent territorial discontinuities [26]) and a relatively large proportion of this connectivity is dependent on trans-boundary PAs and nearby PAs in adjacent countries [15]. Regional, trans-boundary conservation initiatives such as the HoB are however arguably especially vulnerable to large-scale infrastructure development schemes. On the one hand, the HoB initiative is inherently contingent on international collaboration and territorial integration, as where the full benefit of PAs in a given administrative area depends on the sound management of forests in an adjacent area. Trilateral cooperation in both conservation and development is therefore key. On the other hand, large-scale infrastructure schemes are inherently unilateral, both with respect to the sovereignty of their design and implementation but also their political-administrative insulation from state-level forest management. Thus, in Sabah, Sarawak, and Kalimantan, large-scale development schemes driven by federal economic agenda struggle to reconcile with the HoB. The Sabah Development Corridor, driven by successive Malaysian Plans for industrial development, seeks to quadruple Sabah's GDP over 2008-2025, partly by expanding a Pan-Borneo highway network throughout the HoB (Fig 1). 
Similarly, planned road developments in Kalimantan, driven by the Acceleration and Expansion of Indonesia masterplan [27], would transect 1920 km of the Heart of Borneo 'spine' along the Malaysian border [28]. In this context, the incautious pursuit of large-scale development schemes by even a single state may have disproportionate negative effects for regionally-integrated conservation. Here we describe anticipated changes to the integrity of the HoB that would follow the planned Pan-Borneo network development in Sabah. We highlight the potential impacts of the Pan-Borneo highway on the connectivity of protected areas and intact forests within the northern HoB as well as assess mitigation opportunities. Findings call for improved trilateral conservation planning and highlight a reliance on 'linear-conservation' approaches to mitigate planned developments. Recommendations for improved conservation-and-development planning are discussed in the context of infrastructure mega-projects. Counts and areas of protected areas. For intact forest patches in the HoB (Fig 1), we observed the number and extent of protected areas (PAs) inter-connected by intact forest cover before and after planned road developments in Sabah. Planned new roads and planned road upgrades encompass Sabahan segments of the Pan-Borneo Highway as well as supplementary roads according to the Sabah Structural Plan 2033 [29], an overarching statutory plan guiding development planning at local administrative levels. PAs wholly within or partially within an intact forest patch were considered inter-connected across that patch (Fig 2). The number and extent of inter-connected PAs per current intact-forest patch were observed according to the overlap of spatial data delineating PAs and intact-forest patches using ArcGIS, as discussed below. Rarely, a single PA partially overlapped more than one intact forest patch and thus was a member of more than one set of inter-connected PAs (Fig 2: PA4). Such a PA alone could not connect such sets of PAs to each other in the absence of intermediary intact forest, however. To model changes to PA connectivity following road development, planned new and upgrade roadways bisecting an intact forest patch constituted interruptions to the connectivity amongst PAs within that patch on either side of the bisecting roadway (Fig 2: PA1 disconnected from PA3 and PA4). Planned new and upgrade roadways were buffered by 1 km and areas of current intact forest patches coincident with the buffer were removed from the extent of intact forest. Thus, an intact forest patch bisected by a buffered roadway would become two intact forest patches. The number and extent of inter-connected PAs amongst these subdivided, post-development intact forest patches were calculated in the same way as for the current intact forest patches. The 1 km buffer distance accounts for local ecological effects and human activities along roadways [30][31][32][33][34][35]. PAs were treated as indivisible when determining counts of inter-connected PAs per intact forest patch in order to ensure unambiguous counts before and after road development. Therefore, where a road through a PA would bisect its intact forest patch, the PA would remain integral and count towards the set of PAs in each resultant forest patch (Fig 2: PA2 counts towards Patch 1a and new Patch 1b). To account for the bisection of PAs, we estimated changes to total inter-connected PA extent per intact forest patch before and after road development; a minimal geoprocessing sketch of this buffer-and-split step and the associated PA tallies is given below.
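The buffer-and-split step and the PA tallies described above reduce to standard overlay geoprocessing. The sketch below is a minimal illustration only, assuming hypothetical GeoPackage layer names and a metric coordinate reference system; the study itself used ArcGIS, so the use of geopandas/shapely here is an assumption rather than the original toolchain.

```python
import geopandas as gpd

# Hypothetical input layers (file names are placeholders), all in a metric CRS.
forest = gpd.read_file("intact_forest_patches.gpkg")   # current intact forest patches
pas = gpd.read_file("protected_areas.gpkg")            # dissolved protected areas
roads = gpd.read_file("planned_roadways.gpkg")         # planned new + upgrade routes

# Buffer planned roadways by 1 km and merge them into a single footprint.
road_footprint = roads.geometry.buffer(1000).unary_union

# Erase the buffered footprint from the forest extent; a patch bisected by a
# buffered roadway falls apart into separate post-development patches.
post = forest.geometry.difference(road_footprint).explode(index_parts=False)
post_patches = gpd.GeoDataFrame(geometry=post[~post.is_empty].reset_index(drop=True), crs=forest.crs)
post_patches["patch_id"] = post_patches.index

# Count of inter-connected PAs per post-development patch: a PA counts towards
# every patch it overlaps (PAs treated as indivisible for the counts).
overlaps = gpd.sjoin(pas, post_patches, how="inner", predicate="intersects")
pa_count_per_patch = overlaps.groupby("index_right").size()

# Inter-connected PA extent per patch (PAs treated as divisible): clip PAs to
# each patch and sum the clipped areas, here in hectares.
pa_area_per_patch = (
    gpd.overlay(pas, post_patches, how="intersection")
    .assign(area_ha=lambda d: d.geometry.area / 1e4)
    .groupby("patch_id")["area_ha"]
    .sum()
)
```

Running the same tallies on the unmodified forest layer gives the pre-development baseline against which the per-patch counts and extents are compared.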
These areal estimates consider only portions of PAs within an intact forest patch and treat PAs as divisible by roadways (Fig 2: PA 2a but not PA 2b is in Patch 1a; the middle and bottom portions of PA 4 and PA 6 are respectively excluded). Planned road infrastructure development. We digitized 1295 km and 1337 km of planned new roads and planned road upgrades respectively (Fig 1) from the Sabah Structural Plan 2033 [29]. A 1:500,000 map of planned roadways [36] was digitized and georeferenced to Sabahan administrative boundaries according to GADM v2.1 spatial data [37]. Planned roadways were then manually digitized on-screen in ArcGIS at viewing scales of 1:5000 to 1:10,000. The digitized routes for planned upgrade roadways were subsequently overlaid on high-resolution imagery in Google Earth, in which the routes of existing roadways nominated for upgrade were visible. Comparisons of these existing routes to the digitized planned upgrade roadway routes found them to be separated by < 300 meters generally. This is an appropriate distance and indicative of accurate digitization, considering that road upgrades entail the construction of four-lane highways parallel to existing dual-lane roads, in addition to the conversion of rudimentary dirt tracks to paved highways. The fragmentation of the HoB due to road developments reported here is best understood as an acute exacerbation of ongoing fragmentation, considering the nature of road upgrades. Protected areas. PAs were defined by the World Database on Protected Areas [38] and supplemented by additional 'totally protected' forest designations of the Sabah Forestry Department (e.g., protection forest reserves, virgin jungle reserves, wildlife reserves and conservation sanctuaries) (S1 Fig). Analyses considered PAs wholly or partially within the HoB boundaries. Following Sloan et al. [9] and Laurance et al. [8], overlapping PAs were 'dissolved' into discrete polygons to avoid double-counting overlapping PAs of distinct designations for a common area (e.g., national park, World Heritage Site). Intact forest patches. Intact forest patches were defined as areas of either contiguous lowland forest, lower montane forest, montane evergreen forests, or peat swamp forest as classified by Miettinen et al. [39] using MODIS imagery for 2015 with 250-m pixel resolution. Validations of these forest classes via visual interpretations of satellite imagery, as described by Miettinen et al. [39], confirm them to be "predominantly primary (including degraded) forests of >60% canopy cover and to occasionally include mature secondary forests that have attained structural characteristics similar to primary forests". Validation entailed visual interpretations of 1000 250-m MODIS pixels across 10 Landsat-8 false-color images spanning tropical Southeast Asia. Additionally, 400 pixels were interpreted using high-resolution (~1 m) true-color imagery across the extents of the mixed forest regrowth/plantation and industrial-plantation classes. User accuracies for the forest classes comprising our intact forest areas were 90-91%. Our delineation of intact forests patches is possibly conservative, given its exclusion of the mixed regrowth/plantation, mosaic, and industrial-plantation classes observed by Miettinen et al. [39]. 
While these mixed tree-cover classes might permit limited connectivity between intact forest patches, in Sabah these classes are generally highly intervened and associated with agriculture, including oil palm, as confirmed by the validations performed by Miettinen et al. [39] and our independent visual inspections of the mixed classes using high-resolution imagery in Google Earth. Intact forest patches are more liberal in terms of their contiguity, as narrow, sub-pixel gaps in the forest canopy due to existing small roadways may be unobserved in the MODIS classification. Any such unobserved gaps in an intact forest patch were presumed to be inconsequential for connectivity across that patch. Intact forests structural connectivity We assessed changes to structural connectivity across intact forest patches, presuming a range of potential faunal movements between patches and scenarios for crossing planned new and upgrade roadways. Spatial networks of intact forest patches and inter-patch linkages representing hypothetical faunal dispersal routes [40,41] were defined for a post-development landscape in which intact forest patches were already bisected by planned new and upgrade roads (buffered by 1 km). The post-development landscape encompasses Sabah plus a surrounding 20-km buffer zone, extending into northern Kalimantan and eastern Sarawak, allowing for the consideration of trans-boundary forest connectivity. We assessed connectivity using a graph-theoretic mathematical approach [42,43] treating the landscape as a network of intact forest patches of ≥ 10 ha and linkages between such patches. Routes of potential faunal movement were defined between patches across the post-development landscape. Drawing on these routes, discrete patch-linkage networks were ultimately defined according to pre-specified maximum inter-patch faunal dispersal distance thresholds. A single patch-linkage network is comprised of patches linked to each other yet isolated from the patches of other patch-linkage networks. A range of inter-patch faunal dispersal distance thresholds were considered during network modelling, from nil to a maximum distance yielding complete connectivity amongst all intact-forest patches across the post-development landscape (S2 Fig). For each dispersal distance threshold, we recorded the corresponding number of patch-linkage networks across the post-development landscape as well as the integral index of connectivity (IIC) (S2 Fig). The IIC describes the average 'accessibility' of intact forest patches across the post-development landscape. Conceptually, the IIC describes the probability that two fauna randomly located within intact forest patches may exchange places with each other while respecting the pre-specified dispersal distance threshold. Full mathematical details on the IIC are provided elsewhere [44,45]. We excluded patches < 10 ha to facilitate the computationally-intensive modelling. Excluded patches accounted for 6.3% of all intact forest patches in the post-development landscape but only 0.02% of their extent. We explored differences in the regional distributions of key inter-patch linkages amongst two scenarios of road development varying according to the permeability of planned roadways to faunal movements. Both scenarios adopt a 2-km dispersal distance threshold to define linkages between intact forest patches. Only one scenario allows inter-patch linkages to span planned roadways where patches are < 2 km of each other.
Linkages across planned roadways are precluded in the other scenario. The 2-km dispersal distance threshold was adopted because it (i) marginally exceeded the localized road effects [35], (ii) constituted a lower 'stable' threshold above which IIC values and the number of patch-linkage networks changed relatively gradually (S2 Fig), and (iii) was considered generally applicable to most vertebrates in the absence of local species-specific data. Differences in the distributions of key linkages between these two scenarios highlight the degree to which planned roadways would disconnect major intact forest areas that are adjacent to planned roads and/or bisected by such roads. For each scenario, the importance of each individual linkage for overall regional connectivity was estimated by iteratively removing the linkage and re-calculating the IIC via the Graphab Delta-Metric tool [44,45]. Larger resultant deviations to the IIC, termed ΔIIC, indicate that a given linkage is more important for connectivity across the post-development landscape. Key linkages are those 15 linkages with the greatest ΔIIC values. These linkages comprise ~0.5% of all linkages but 50-90% of the sum of all ΔIIC values across the post-development landscape, depending on the scenario. The distribution of these key linkages was considered relative to the priority conservation forests identified by the Sabah Structure Plan 2033 [29,46-48]. These priority conservation forests are 'optimal' modelled conservation priority areas in that they satisfy pre-specified conservation targets at minimal overall societal costs. The conservation targets entail coverage of (i) >30% of historic areas of all forest eco-types (in keeping with Sabah's commitment to maintain 30% of its extent under protected forest [29,48]), rising to >75% for mangroves; and (ii) >60% of historic ranges for select endangered mammals (banteng, clouded leopard, elephant, gibbon, orangutan, proboscis monkey, and sun bear) as well as select endemic tree species, rising to 100% for select critically endangered endemic tree species [48]. Societal costs describe the difficulty, conflict, and efficiency entailed by conservation at a given location, and are represented by (i) the human-footprint index [49], describing the intensity of settlement and land use; (ii) species richness, describing the number of species present; and (iii) forest-carbon stock [50], representing the impact of forest conversion on atmospheric carbon emissions. Values for species richness and carbon stock were inverted to reduce the 'cost' of including high-richness, high-carbon areas within the priority conservation forests. Severely degraded forests [51] were excluded from consideration. Priority conservation-forest areas were delineated by WWF-Malaysia using the Marxan conservation-optimization software [52], as described by Tai et al. [48], Abram et al. [46] and Sloan et al. [53], and were provided by WWF-Malaysia.
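To make the graph-theoretic workflow above concrete, the following sketch shows, under simplifying assumptions, how a patch-linkage network at the 2-km dispersal threshold can be assembled and how individual linkages can be ranked by ΔIIC through leave-one-out removal. The IIC expression in the comments is the standard formulation of Pascual-Hortal and Saura as implemented in tools such as Graphab and Conefor; NetworkX, the edge-to-edge distance rule, and the helper names are illustrative assumptions rather than the study's actual Graphab configuration, and the brute-force loops ignore the performance shortcuts such software relies on.

```python
import itertools
import networkx as nx

def build_patch_network(patches, threshold_m=2000.0):
    """Patch-linkage network: nodes are intact forest patches carrying an
    'area' attribute; edges link patches whose edge-to-edge separation is
    within the dispersal threshold. `patches` is assumed to be a GeoDataFrame
    with 'patch_id' and 'area' columns plus polygon geometries."""
    graph = nx.Graph()
    for pid, area in zip(patches["patch_id"], patches["area"]):
        graph.add_node(pid, area=float(area))
    for (i, gi), (j, gj) in itertools.combinations(
            zip(patches["patch_id"], patches.geometry), 2):
        if gi.distance(gj) <= threshold_m:
            graph.add_edge(i, j)
    return graph

def iic(graph, landscape_area):
    """Integral Index of Connectivity:
    IIC = sum_i sum_j [a_i * a_j / (1 + nl_ij)] / A_L^2,
    where nl_ij is the number of linkages on the shortest topological path
    between patches i and j; pairs in different networks contribute nothing."""
    total = 0.0
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    for i in graph.nodes:
        a_i = graph.nodes[i]["area"]
        for j, nl_ij in lengths[i].items():
            total += a_i * graph.nodes[j]["area"] / (1.0 + nl_ij)
    return total / landscape_area ** 2

def rank_linkages_by_delta_iic(graph, landscape_area, top_n=15):
    """Leave-one-out linkage importance: dIIC (%) is the relative drop in IIC
    when that linkage alone is removed, mirroring the delta-metric idea."""
    baseline = iic(graph, landscape_area)
    deltas = []
    for u, v in graph.edges:
        pruned = graph.copy()
        pruned.remove_edge(u, v)
        deltas.append(((u, v), 100.0 * (baseline - iic(pruned, landscape_area)) / baseline))
    return sorted(deltas, key=lambda item: item[1], reverse=True)[:top_n]
```

In this framing, the 'key linkages' reported in the Results would correspond to the top-ranked edges returned by the last function for each roadway-permeability scenario.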
Each site integrates intact forest patches, typically as pairs, with at least one patch having relatively high connectivity importance (ΔIIC values for linkages permitted to span roadways), larger PA extent, and/or large intact forest extent (S3 Fig; S1 Table), in order of priority. These three criteria tended to covary and flag relatively few locales, such that the sites were readily identified 'opportunistically' rather than via pre-specified thresholds for the three criteria. Site selection also reflected three secondary criteria considering the feasibility and efficacy of the indicative sites, namely: (i) relatively short linkages (< 2km) amongst adjacent intact forest patches, as to facilitate hypothetical corridor/underpass development; (ii) extensive areas of contiguous forest relatively unsettled and free of agricultural development between the intact forest patches, as confirmed by visual inspections of high-resolution imagery in Google Earth; and (iii) regional synergy, such that a set of individual sites may link a series of important patches to each other across Sabah. In total, 12 indicative sites were identified (S1 Table). Protected area connectivity Most PA connectivity in the HoB is concentrated in a single intact forest patch spanning southern and eastern Sabah, northern and central Kalimantan, eastern Sarawak, and Brunei ( Fig 3A). This trans-boundary patch hosts 41 PAs, accounting for half of the PAs in the HoB and 89% of their extent (Fig 3A). Of these 41 PAs, 19 occur at least partially inside Sabah and account for half of the PA extent coincident with the HoB. Planned road developments in southern Sabah would significantly fragment this transboundary patch and interrupt PA connectivity across the northern HoB (Fig 3A and 3B). Within the current trans-boundary patch, roughly half of its 4.2 Mha of PAs are comprised by two PA agglomerations immediately north and south of planned developments between the towns of Kemabong, Sapulut, and Kalabakan (Fig 4). Upon separating these two PA agglomerations, the losses of inter-connected PA extent would be >3 Mha for each the 14 intact forest patches that would result from planned road developments across southern Sabah (Fig 3C). Planned road development would confine the current trans-boundary patch to far southern Sabah (Figs 3B and 4). This confined patch would inter-connect 10 fewer PAs than currently across Sabah and the larger HoB, with most of its 31 PAs being mostly outside Sabah. Consequently, the northern reach of the current trans-boundary patch would, as a newly isolated post-development patch immediately north of planned developments between Sapulut and Kalabakan (Fig 4), constitute the most significant patch entirely within Sabah (Kinabatangan and Lahad Datu districts) in terms of intact forest area (876 thousand hectares [Kha]), PA extent (642 Kha), and PA count (7) (Fig 3B and 3C). Intact forest structural connectivity The two post-development scenarios varying by planned roadway permeability exhibit major differences in their distributions of key inter-patch linkages across the northern HoB. In the scenario in which linkages may span roadways to a limited degree (<2 km), the key linkages largely circumscribe the extent of conservation-priority forest (Fig 5A). Notably, these linkages define a single Sabah-wide network incorporating most contiguous forests in the northern HoB (78%) and thus half of the conservation-priority forests in Sabah (49%) (Fig 5B). 
Key linkages would cross planned roadways in all instances to define this network, with the partial exception of the linkage from the Crocker Range National Park where an existing road would be crossed instead. These key linkages thus extend to relatively large, road-adjacent intact forest areas otherwise separated by planned roadways (mean patch area: 151 Kha; per linkage: 181 Kha). In contrast, for the scenario in which linkages cannot span planned roadways, key linkages adopt an inferior configuration in which conservation-priority forest remains disjointed. These linkages center on only two or three main post-development patches, notably including descendants of the current trans-boundary patch (Fig 5C). Further, they extend to comparatively-insignificant surrounding patches, most of which would be unaffected by planned roadways. The key linkages thus incorporate a lesser 64% and 39% of post-development intact areas comprised of the main patches and remaining disjointed across seven patch-linkage networks (Fig 5D). Key linkages would therefore integrate relatively little intact forest beyond the main patches (mean patch area: 102 Kha; per linkage: 150 Kha) and entirely fail to incorporate priority conservation areas across central and northern Sabah (Fig 5C). Each scenario is a hypothetical extreme: planned roadways are neither completely permeable nor completely impermeable to faunal movements. Contrasts between the scenarios are not meant to suggest that linkages should span planned roadways, or where; rather, they illustrate how planned roadways are largely situated within major forest regions key to regional connectivity. Accordingly, key linkages invariably span planned roadways to maximize connectivity where allowed, something that is unlikely in practice. Therefore, to the extent that planned roadways are impermeable, regional connectivity will drastically decline as linkages merely consolidate main patches and their nearby smaller satellites.
The indicative sites also complement, but differ more markedly from, currently planned and recently proposed forest connectivity zones advanced by the Sabah Structure Plan 2033 (Fig 6). This is noteworthy considering that planned connectivity zones according to the Sabah Structure Plan 2033 aspire to more and larger forest corridors (>1.5 km wide), as well as more underpasses, compared to the past and to connectivity zones proposed by the Sabah Forestry Department, also endorsed by the Sabah Structure Plan 2033. Indeed, the Plan instructs that "an overpass or underpass shall be constructed to minimize ecological impacts" wherever a highway "cuts through forest connectivity", particularly where it is a "national highway or strategic road" [29]. The 12 indicative sites differ from the planned and proposed connectivity zones of the Plan largely because the latter often do not account for impending losses of connectivity that would follow from planned infrastructure development (e.g., sites 5, 6, 7b, 8c in Fig 6). Theoretically, corridors and underpasses at the indicative sites would retain~85% of intactforest connectivity after road development, as estimated by the proportion of the sum of all ΔIIC values for all linkages accounted for by the sites. Approximately half of this retained connectivity would be attributable to sites 1-3 and 5 alone, flagging these as priority areas (S1 Table), with only the former recognized by planned and proposed connectivity zones (Fig 6). The corresponding retention of PA connectivity would also be considerable: sites 8a-8d, 7b, and 5 would integrate 42% of PA extent in Sabah and, if complemented by sites 1-3, PA connectivity would theoretically extend from northern Sabah to the larger HoB (Fig 4, S3 Fig). However, in reality, the actual degree of meaningful ecological re-integration possible via underpasses and corridors would be far more moderate than the structural metrics reported here, as discussed below. The indicative sites are therefore best understood as mitigative options of last resort that complement planned and proposed connectivity zones. Improved planning for the Heart of Borneo The Pan-Borneo Highway and related roadways in Sabah would unilaterally cut off the head of the HoB by isolating hundreds of thousands of hectares of Sabahan PAs and other intact forest. The greatest challenge to regional HoB integrity is the planned Sapulut-Kalabakan upgradehighway through the trans-boundary forest patch of southern Sabah (Fig 4). Fig 6. Select inter-patch links (numbered) by which forest corridors/underpasses would enhance the integrity of the northern Heart of Borneo region following planned road developments in Sabah. Note: Numbered arrows are indicative sites for corridor and road-underpass development that would enhance postdevelopment forest/PA connectivity across Sabah and the northern HoB (S1 Table). The circled area at Site 4 is important for regional connectivity but would not require corridors/underpasses, just sound management alone (S1 Table). 'Planned connectivity' zones were adopted by the Sabah Structure Plan 2033 Proposal Map endorsed by Sabahan government in 2016 (plan SSP2033 Gazette LXXI/47/231) [36]. Planned connectivity zones near site 1 and in south-western Sabah approximate those of the Heart of Borneo Corridor Implementation Project [25,54]. 'Proposed connectivity' zones are according to an earlier Sabah Forestry Department (SFD) proposal endorsed by the Sabah Structure Plan 2033 report [29]. 
PA extent is shown for the ten post-development intact forest patches with greatest PA extents. Intact forest patches are as defined previously. A corridor at Site 5 would bridge a series of disjointed patches lacking PAs (grey). https://doi.org/10.1371/journal.pone.0221947.g006 Roadways for the Pan-Borneo Highway were largely planned independently of the planned and proposed connectivity zones in Sabah. Calls for a trilateral HoB Master Plan resonate in this light [23]. A HoB Master Plan would constitute the further formalization and codification of the original HoB trilateral agreement. Amongst transboundary conservation initiatives internationally, formalized transboundary joint governance is not uncommon ([20]: Appendix B). Related master plans for conservation and development are, however, arguably observed more amongst contiguous transboundary areas than expansive regional zones such as the HoB. The Maritime Alps-Mercantour transboundary conservation zone (France and Italy) and the Great Limpopo Transfrontier Park (Mozambique, South Africa, and Zimbabwe) both exemplify the evolution of informal joint management activities amongst national entities into formalized, common legal governance regimes entailing seamless transboundary plans ([20]: pp. 52, 68). For the HoB, at a minimum a basic master plan could designate high-conservation forests, corridors, and forest buffer zones across the HoB with trilateral consensus. Such an overarching plan would in turn support related calls for the legal recognition of the HoB for planning purposes [55] and, further still, for trilateral spatial planning of conservation, land use, and infrastructure within the HoB [23]. Trilateral planning for common conservation-and-development goals across the HoB is considered to be much more effective and efficient than current approaches, according to conservation-scenario analyses [17,18]. Compromised ecological planning across the HoB region partly reflects the fact that the HoB is a voluntary agreement lacking force of law. In Malaysia, infrastructure development schemes are designed federally (e.g., National Physical Plan) but elaborated by quasi-independent state plans (i.e., the Sabah Structure Plan), within which the 'HoB vision' must be approximated under existing environmental laws and policies. The ultimate elaboration of state plans at the local level further challenges the regional HoB vision, as in instances where laws are conflicted or poorly observed. Similar situations characterize Indonesia and Brunei [55]. A trilateral HoB Master Plan would supplant such 'good-will implementation' and is feasible, depending on its ultimate scope and legal framework. A basic overarching trilateral Master Plan that identifies regional priority conservation networks and corridors, guides unilateral infrastructure and land-use plans accordingly, and more clearly articulates the responsibilities of state-level stakeholders may alone suffice to promote greater regional integrity. Intensive trilateral coordination, and possibly the legal recognition of HoB boundaries, may prove essential, since any master plan would invariably be the culmination of a common governance regime rather than a means to this end. More detailed trilateral spatial planning of infrastructure and land use across the HoB is far more challenging, and probably unlikely, given state sovereignty issues.
Still, member states could conceivably call upon a basic overarching master plan to pre-emptively inform and hedge against unilateral development plans that would undermine regional HoB integrity as described by the plan. For this, a permanent HoB Secretariat, rather than the current annually-rotating trilateral Secretariat scheme, would likely be necessary to ensure a more consistent, rapid, and equitable coordination of the HoB conservation agenda. Limited options for post-development mitigative conservation Planned road development in Sabah relies on highway underpasses and forest corridors to retain connectivity across the HoB. Our analysis suggests that Sabah's current plans overlook various sites where such 'linear conservation' would be beneficial. Regardless, the linear-conservation approach is dubious and risky, given limited evidence that species of high conservation value would benefit enough from this approach to offset the direct and indirect impacts of forest-road development [56,57]. Such an approach is regarded as an 'engineering solution' to a conservation problem, as tokenistic, or more cynically as a strategic overture to conservation interests [56][57][58]. Peninsular Malaysia has previously employed forest corridors alongside roadway underpasses as local conservation measures, and Sabah is apparently increasingly adopting this approach. Peninsular Malaysia utilized 25 highway underpasses in conjunction with 17 forest corridors to integrate fragmented conservation landscapes via the Central Forest Spine initiative [35,58,59]. To date, Sabah has developed corridors to a lesser degree [60], at least with respect to large networks of corridors, and underpasses have rarely featured in these efforts. This situation may change with the implementation of the Sabah Structure Plan 2033, which explicitly advocates both greater corridor development and extensive roadway underpasses. This advocacy coincides with independent efforts to establish forest corridors across the HoB. The HoB Corridor Implementation Project, for instance, would establish corridors between at least a dozen PAs extending from south-central and southwestern Sabah into Sarawak, Kalimantan, and Brunei [25,61]. Indeed, planned connectivity zones in southern Sabah according to the Sabah Structure Plan 2033 approximate those of the HoB Corridor Implementation Project. Underpasses and corridors are unlikely to be effective mitigating conservation options in Sabah in light of the nature and scale of planned infrastructure development there. In the Central Forest Spine initiative in Peninsular Malaysia [35,59], none of the highway underpass locations were based on surveys of animal movements or biodiversity [58]. Underpasses were instead simply a byproduct of normal road construction, whereby roads necessarily traverse rivers, creeks and gullies and so yield 'underpasses'. Observers of Sabah anticipate a similar commitment to underpass development there, which would offer only limited benefits for faunal movements. Surveys of mammalian movements across 20 underpasses along two roadways of~30-40 km length in Peninsular Malaysia suggest that underpasses facilitated meaningful road crossings for only two species (elephants, serows) of seven species observed, with only one of these species (serows) using the highway underpasses at expected rates ( [59]:p.114-159). Mammals otherwise cross roads where and when they wish, increasing mortality [62], or they avoid roads, increasing population isolation [35]. 
Others have similarly concluded that linear conservation is at best an uncertain, if not ineffective, means of promoting meaningful ecological connectivity [56,57]. Furthermore, constructing extensive networks of underpasses and related corridors explicitly for wildlife movement is likely prohibitively expensive, given the scale of planned developments. The costs of planning and constructing under-or over-passes in Singapore and Peninsular Malaysia have been estimated at $1.2-$32 million USD per 100 meters [63,64]. Hypothetically, two 200-meter long underpasses at the eight indicative sites in Sabah likely requiring an underpass (S1 Table) would cost $38-$1024 million USD. Even the lower figure is greater than the Sabah HoB budget allocated by the Ministry of Natural Resources and Environment [22]-raising doubts over the feasibility of extensive linear conservation at regional scales. Although it is not inconceivable that underpasses could facilitate ecologically-meaningful exchanges amongst some forest patches, this would entail various carefully-planned underpasses per site, in conjunction with local faunal surveys, roadside fencing [62], and monitoring programs, further increasing costs and doubt over the feasibility of this approach. Additional costs for law enforcement to prevent roadside poaching would also be required, given poachers' tendency to target specific species at underpass 'bottlenecks' [59,65]. Other, complementary forms of mitigative conservation planning include enhanced conservation designations for roadside forests [66]. Limited scope remains for this in Sabah. Notwithstanding scattered agricultural holdings that are possibly permanent fixtures in any conservation landscape, all indicative sites for potential corridor and underpass development except site 7a are largely situated within forests designated for permanent protection (Fig 6, S1 Table), including Class II Commercial Forest Reserves. Class II reserves are however subject to partial conversion to timber plantations [67]-a trend that may gain momentum as roads are built and upgraded-and if these reserves are further degraded some may ultimately become re-designated for estate agriculture. Notably, the Sapulut-Kalabakan route separating the two main patches (Fig 4) is currently a poorly maintained, sealed former logging road to be upgraded to a four-lane highway surrounded by Class II reserves. Selectively precluding roadside timber plantations within such reserves, or elevating such reserves to Class I Protection Forest Reserves, remain viable options. This will not preclude impending losses of regional connectivity but merely hedge against its subsequent aggravation. Recently, Sabah's Chief Minister affirmed that planned roadway developments should occur along existing routes only, and not extend to intact forest areas, reflecting productive engagement between State agencies and an alliance of local conservationists [68]. While commendable, this fails to reconcile the fact that the planned upgrade of the existing Sapulut-Kalabakan route is by far the preeminent threat to the regional integrity of PAs and intact forests across the northern HoB. In other contexts, others have also highlighted how such major road upgrades typically accentuate deforestation by permitting new economic activities, all-season access to forests, and lower transportation costs for extracted resources [69]. 
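As an aside, the underpass cost range quoted above ($38-$1024 million USD) follows from simple linear scaling of the published per-100-m unit costs; a minimal arithmetic check, with every figure taken from the text and linear scaling the only assumption:

```python
# Reported construction costs for under-/over-passes, USD per 100 m of structure.
low_per_100m, high_per_100m = 1.2e6, 32e6

# Hypothetical programme assumed in the text: two 200-m underpasses at each of
# the eight indicative sites judged likely to require an underpass.
total_length_m = 200 * 2 * 8  # 3,200 m of structure in total

low_total = low_per_100m * total_length_m / 100    # ~38.4 million USD
high_total = high_per_100m * total_length_m / 100  # ~1,024 million USD
print(f"~${low_total / 1e6:.0f} to ~${high_total / 1e6:.0f} million USD")
```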
Invariably, this dissonance between the recent decision by the Chief Minister and the continued planning of road upgrades reflects the multiple priorities and tradeoffs inherent to conservation and development planning. Conservationists, for their part, have focused more on sensitive, threatened local habitats and their resident endangered fauna, and less on regional forest integrity. Nonetheless, in light of the considerable loss of integrity posed by the Sapulut-Kalabakan route, we similarly urge the reconsideration of its development within the Pan-Borneo Highway network. The Pan-Borneo Highway in the context of infrastructure megaprojects The emphasis on infrastructure and conservation planning in the tropics has arguably shifted over recent decades from the preservation of intact forests to the integrated management of degraded landscapes. This shift reflects both the progressive ecological degradation within the tropics but also the increasing scale and complexity of infrastructure mega-projects and the contexts that host them. The Pan-Borneo Highway within the Heart of Borneo is a case in point. Historically, the academic community highlighted the implications of major forest-penetration roads for intact-forest fragmentation [70][71][72] leading to subsequent habitat conversion and wildlife poaching [65]. In contrast, recent assessments of infrastructure development and conservation have emphasized the threats posed to remnant forests [7,28], including protected areas [8,9]. In settled landscapes, remnant forests often constitute the final frontiers for resource extraction or transportation impediments for expanding populations. Their vulnerability to infrastructure development has therefore been characterized increasingly as planning failures rather than as a carefully weighed 'cost of development'. Even for regions with large expanses of intact forest, such as in New Guinea, recent assessments have focused more on the adequacy of environmental planning to contain conservation challenges arising from infrastructure development, and less on relatively foreseeable challenges posed by forest fragmentation along penetration roads [12,73]. Studies of the ongoing Chinese Belt and Road Initiative (BRI) in particular stress the growing complexity of conservation challenges stemming from mega-projects in a globalized world in which environmental safeguards are frequently lax [10,74]. Our assessment of the Pan-Borneo Highway resonates with recent BRI assessments [10,11,74,75] in emphasizing environmental governance, and not merely environmental management, as central for sound mega-project development. Sabah offers at least two lessons in this respect. First, Sabah underscores the persistent need to more fully integrate conservation-anddevelopment planning, legislatively and administratively. In the absence of such planning, conservation efforts will be undone and opportunities squandered by the sheer scale of development. Sabah, for instance, has more than doubled its coverage of protected forests since joining the Heart of Borneo, with concomitant increases in forest connectivity. Yet the Pan-Borneo Highway, promulgated independently of these conservation efforts, would limit or even reverse many of their conservation benefits. Second, integrated planning should occur at the most basic of scales to anticipate and forestall cumulative effects at regional scales. 
In Sabah, planned upgrade roads would disrupt regional conservation integrity as much as planned new roads (e.g., the Sapulut-Kalabakan route); yet upgrade roads are not equally subject to planning scrutiny. In fact, environmental impact assessments for upgrade roads are relaxed where 'upgrades' entail new highway construction parallel to an existing rudimentary road. The amplification of planning deficiencies to national or multi-national scales by the BRI and similar initiatives has culminated most recently in unprecedented calls to unite conservation and development priorities into a single rubric [10,11,75]. The BRI and similar initiatives would, in effect, adopt conservation as a 'core value' or an explicit goal. The case of the Pan-Borneo Highway offers few clear examples of how this might look relative to older models seeking simply to protect intact forests from encroaching development. Sabah's recent decision to avoid new road development in intact forests [68] resonates with calls for 'no net biodiversity loss' with respect to the BRI [11], although it does not drastically differ from the older models. The legal recognition of the Heart of Borneo and a trilateral Master Plan may guide development according to conservation priorities, although this would not necessarily support such priorities. Proposals for new networks of protected areas and corridors along infrastructure routes might afford greater synergies between conservation and development [11], though the anticipated isolation of PAs around the Pan-Borneo Highway urges caution. In the realm of road development at least, threats to regional connectivity in particular may remain a serious challenge without a clear planning path. Conclusion Sabah is planning a Pan-Borneo Highway network to increase economic activity, as, to varying degrees, are other member states of the HoB initiative. While Sabah has substantially increased its PA extent, the currently planned network would seriously undermine PA and intact-forest connectivity within Sabah and across the northern HoB region, reducing the conservation benefit of individual, isolated PAs and managed forest landscapes. Trilateral spatial planning across the HoB is recommended to forestall and hedge against such outcomes, but it will require coordination amongst HoB member states at a level not yet attained. Meanwhile, Sabah's commitment to underpasses and forest corridors for conservation mitigation is arguably a futile gesture. It is apparently too geographically selective, too unlikely to facilitate meaningful ecological re-integration, and ultimately too economically costly as a practical solution. We urge that the Sapulut-Kalabakan route within the Pan-Borneo Highway network be reconsidered in particular. S1 Table. Conditions, costs, and contributions of complementary sites for road-underpass and forest-corridor planning in support of connectivity of intact forests and protected areas (PAs) across post-development Sabah.
Insights Into Enhanced Complement Activation by Structures of Properdin and Its Complex With the C-Terminal Domain of C3b Properdin enhances complement-mediated opsonization of targeted cells and particles for immune clearance. Properdin occurs as dimers, trimers and tetramers in human plasma, which recognize C3b-deposited surfaces, promote formation, and prolong the lifetime of C3bBb-enzyme complexes that convert C3 into C3b, thereby enhancing the complement-amplification loop. Here, we report crystal structures of monomerized properdin, which was produced by co-expression of separate N- and C-terminal constructs that yielded monomer-sized properdin complexes that stabilized C3bBb. Consistent with previous low-resolution X-ray and EM data, the crystal structures revealed ring-shaped arrangements that are formed by interactions between thrombospondin type-I repeat (TSR) domains 4 and 6 of one protomer interacting with the N-terminal domain (which adopts a short transforming-growth factor B binding protein-like fold) and domain TSR1 of a second protomer, respectively. Next, a structure of monomerized properdin in complex with the C-terminal domain of C3b showed that properdin-domain TSR5 binds along the C-terminal α-helix of C3b, while two loops, one from domain TSR5 and one from TSR6, extend and fold around the C3b C-terminus like stirrups. This suggests a mechanistic model in which these TSR5 and TSR6 “stirrups” bridge interactions between C3b and factor B or its fragment Bb, and thereby enhance formation of C3bB pro-convertases and stabilize C3bBb convertases. In addition, properdin TSR6 would sterically block binding of the protease factor I to C3b, thus limiting C3b proteolytic degradation. The presence of a valine instead of a third tryptophan in the canonical Trp-ladder of TSR domains in TSR4 allows a remarkable ca. 60°-domain bending motion of TSR4. Together with variable positioning of TSR2 and, putatively, TSR3, this explains the conformational flexibility required for properdin to form dimers, trimers, and tetramers. In conclusion, the results indicate that binding avidity of oligomeric properdin is needed to distinguish surface-deposited C3b molecules from soluble C3b or C3 and suggest that properdin-mediated interactions bridging C3b-B and C3b-Bb enhance affinity, thus promoting convertase formation and stabilization. These mechanisms explain the enhancement of complement-mediated opsonization of targeted cells and particle for immune clearance. INTRODUCTION Complement plays an important role in humoral immune responses against invading microbes, clearance of apoptotic cells and debris, and modulation of adaptive immune responses (1,2). Initiation of the complement cascades through either the classical, lectin or alternative pathway converges in the formation of C3 convertase complexes, consisting of C3b and protease fragment Bb forming C3bBb, which generates a positive-feedback loop that amplifies the complement cascade yielding massive deposition of C3b onto the targeted surface. At this critical step, the complement system is heavily regulated. Intrinsically, the non-covalent C3bBb enzyme dissociates irreversibly into its components C3b and Bb with a half-life time of 1-2 min (3,4). Host regulators, such as factor H (FH), decayaccelerating factor (DAF), and membrane-cofactor protein (MCP), provide protection of host cells against complement attack (5). FH and DAF inactivate the C3 convertase by promoting dissociation of C3bBb into C3b and Bb (5). 
FH and MCP have cofactor activity that enables factor I (FI) to bind and cleave C3b into iC3b, rendering it inactive and unable to form new convertases (5,6). Properdin is the only known intrinsic positive regulator of the complement system (7)(8)(9). Properdin stabilizes C3bBb, increasing the half-life of the enzyme complex 5- to 10-fold (10). In addition, it has been indicated that properdin accelerates formation of pro-convertases C3bB (11) and reduces C3b inactivation by FI (12,13). Furthermore, it has been suggested that, for some bacterial surfaces, apoptotic/necrotic cells or renal epithelial cells, properdin can function as a pattern recognition molecule, forming an initiating platform for the alternative pathway (14)(15)(16)(17)(18)(19), although others claim that properdin binding to surfaces depends on initial C3b deposition (20,21). Properdin deficiency results in increased susceptibility to infection by Neisseria meningitidis (22), with high mortality rates compared to deficiency of protein components (C5-C9) of the terminal pathway (23). In addition, properdin deficiency has been associated with other diseases, such as otitis media and pneumonia, as reviewed in Chen et al. (23). Human properdin is an oligomeric plasma protein that is present in serum at relatively low concentrations (4-25 µg/ml) (8), compared to other complement components [∼1.2 mg/ml for C3 and ∼0.6 mg/ml for factor B (FB)] (24). In contrast to most other complement proteins, properdin is not produced by the liver, but expressed locally by various immune cells including neutrophils, monocytes, and dendritic cells (23,25,26). Therefore, at sites of inflammation properdin concentrations might be considerably higher than the serum concentration. In serum, properdin is predominantly found as dimers, trimers and tetramers in the percentage ratios of 26:54:20% (8), although a small amount of pentamers and hexamers are also found (13,27). At physiological conditions, no exchange between the oligomeric states of properdin is observed (8), but higher-order aggregates form upon freeze-thaw cycles (28). A properdin protomer consists of 442 amino-acid residues with a fully glycosylated molecular weight of 53 kDa (29). Properdin forms seven domains: an N-terminal domain of unknown fold, followed by six thrombospondin type I repeat (TSR) domains (29). TSR domains consist of ∼60 amino-acid residues and have a thin and elongated shape (30), formed by only three anti-parallel peptide chains. The TSR fold is structurally stabilized by regions forming β-sheets, three conserved disulphide bonds and by a structural WRWRWR motif [also referred to as Trp-ladder (31)] that forms a stack of alternating tryptophans and arginines through π-cation interactions (30). The N-terminal domain has often been referred to as TSR0 (9,13,32,33), despite missing the WRWRWR motif.
Abbreviations: ADP, atomic displacement parameters; Bb, cleavage product b of factor B; C3, complement component 3; C3b, cleavage product b of complement component 3; CTC, C-terminal C345c; DAF, decay-accelerating factor; EM, negative-stain electron microscopy; FB, factor B; FH, factor H; FI, factor I; IMAC, immobilized metal affinity chromatography; MCP, membrane-cofactor protein; Pc, cleaved properdin; PDB, protein data bank; rmsd, root-mean square deviation; SCIN, Staphylococcus aureus inhibitor; SEC, size exclusion chromatography; SPR, surface-plasmon resonance; STB, short transforming-growth factor B binding protein-like; TSR, thrombospondin type-I repeat.
Properdin is highly post-translationally modified, resulting in 14-17 C-mannosylated tryptophans, four O-linked glycans, and one N-linked glycan (34,35). Negative-stain electron microscopy (EM) has shown that oligomeric properdin forms ring-shaped vertices connected by extended and flexible edges (13,27). Based on EM images and TSR domain deletions, it has been proposed that the vertices consist of interlocking C-and N-terminal domains of properdin protomers and the edges consist of three bridging TSR domains from a single protomer (13,27,29). EM images indicate that each properdin vertex binds a single C3bBb complex (13). Higgens et al. (29) showed that domain deletions of properdin TSR domains 4 through 6 results in altered oligomerization and loss of function, whereas deletion of TSR3 has no significant effect on either oligomerization or properdin function. Pedersen et al. (32) introduced a proteolytic cleavage site between properdin-domains TSR3 and TSR4 and thereby generated single properdin vertices for crystallographic studies. A 6.0-Å resolution crystal structure of a single properdin vertex in complex with C3bBb (32) [that was stabilized by S. aureus inhibitor SCIN (36)] showed that properdin binds to the αchain region of C3b, revealing density adjacent to the C-terminal C345c (CTC) domain of C3b. However, the resolution of the crystallographic data (PDB ID: 5M6W) did not allow atomic modeling of the cleaved properdin (Pc) fragment. In this study, we present the production of monomerized properdin variants that stabilize C3bBb using co-expression of properdin N-and C-terminal fragments. We determined crystal structures of monomerized properdin and its complex with the CTC domain of C3/C3b with diffraction data up to 2.0-and 2.3-Å resolution, respectively. These structures reveal the fold of the properdin N-terminal domain, the properdin domain arrangement that yields the properdin ring-shaped vertex structure, stabilization of Trp-Arg interactions in the Trp-ladder provided by tryptophan C-mannosylation, structural flexibility of the TSR4 domain and functionally important extensions of the TSR5 and TSR6 domains. The structure of monomerized properdin in complex with the C3/C3b-CTC domain identifies the specific regions of properdin involved in binding FB and Bb that enhance pro-convertase formation and convertase stabilization, respectively. Finally, we propose a model for properdin oligomers stabilizing convertases on surfaces based on re-analysis of the 5M6W-diffraction data set. Molecular Cloning and Construct Design Human properdin (UniProtKB-P27918) cDNA was obtained from Open Biosystems (Dharmacon Inc.). Domain boundaries were chosen based on both UniProt assignment and crystal structures of thrombospondin I domains TSR2 and TSR3 (PDB ID: 1LSL) (30). In addition to full-length properdin (res. , four N-terminal constructs were created, P N1 (res. 28-132), P N1 ′ (res. 28-134), P N12 (res. , and P N123 (res. , comprising the first two, three, and four N-terminal domains of properdin; and P 456 (res. 256-469) comprising the three C-terminal domains. The N-terminal domain boundary of the C3/C3b-CTC domain (res. 1517-1663) was chosen based on the structure of C3b [PDB ID: 5FO7 (37)]. All inserts were generated by PCR using clone specific primers that include a 5 ′ BamHI restriction site that results in an N-terminal Gly-Ser cloning scar in all constructs and a NotI restriction site at the 3 ′ end of the insert. 
The NotI site results in a C-terminal extension of three alanine's in all constructs, except for P N12 and C3/C3b-CTC, where a stop codon was introduced prior to the NotI site. All inserts were cloned into pUPE expression vectors (U-Protein Express BV, Utrecht, the Netherlands). For small-scale (4 ml) expression tests, one of the constructs (either the N-or C-terminal fragment) included a 6x-His purification tag. In largescale co-expressions P 456 included a C-terminal 6xHis-tag, with no tag on the N-terminal constructs. Similarly, constructs for full-length properdin included a C-terminal 6xHis-tag and the C3/C3b-CTC construct contained an N-terminal 6xHis-tag. Recombinant proteins were transiently expressed in Epstein-Barr virus nuclear antigen I (EBNA1)-expressing, HEK293 cells (HEK293-EBNA) (U-Protein Express BV, Utrecht, the Netherlands). For crystallization purposes, proteins were expressed in GnTI − HEK293-EBNA cells. N-terminal and C-terminal properdin fragments were co-expressed using a 1:1 DNA ratio. Cells were grown in suspension culture at 37 • C for 6 days post-transfection. For each culture, supernatant was collected by a low-speed spin (1,000 × g for 10 min), followed by a high-speed spin (4,000 × g for 10 min) to remove any remaining cell debris. Subsequently, 3 ml/L Ni-Sepharose Excel beads (GE Healthcare) was added to the supernatant and the mixture was incubated for 2 to 16 h with constant agitation at 4 • C. The beads were washed with 10-column volumes of buffer A (20 mM HEPES pH 7.8, 500 mM NaCl) and 10 column volumes of buffer A supplemented with 10 mM imidazole. Bound protein was subsequently eluted with buffer A supplemented with 250 mM imidazole. For small-scale (4 ml) expression tests of properdin fragments no further purification steps were performed, whilst for large-scale (1 L) expressions, pooled fractions were concentrated, and further purified by sizeexclusion chromatography (SEC). P N12/456 for SPR was purified with a Superdex 200 16/600 (GE Healthcare) using 25 mM HEPES pH 7.8, 150 mM NaCl as the running buffer. All other properdin complexes were purified on a Superdex 200 10/300 Increase (GE Healthcare) column using either 20 mM HEPES pH 7.4, 150 mM NaCl (properdin, P N1/456 , P N1 ′ /456 ) or 25 mM HEPES pH 7.8 with 100 mM NaCl (P N12/456 for crystallizations) as the running buffer. The C3/C3b-CTC domain was purified on a Superdex 200 16/600 (GE Healthcare) in 20 mM HEPES pH 7.4, 150 mM NaCl. Human wild type FB, catalytically inactive (S699A) double-gain-of-function (D279G, N285D) FB mutant (FB dgf ‡ ) (38), factor D (FD), DAF1-4 and Salp20 were purified as described previously (39)(40)(41). C3 and C3b were purified from human plasma as described in Wu et al. (40). Full-length properdin was stored at 4 • C and all other proteins were flash frozen by plunging into liquid N 2 and stored at −80 • C. C3 Convertase Stability Assay To generate C3 convertase, purified C3b (obtained after cleavage of human serum-derived C3) was mixed with catalytically inactive FB dgf ‡ at a ratio of 1:1.1 in the presence of 5 mM MgCl 2 . After incubation for 5 min at 37 • C, FD was added to a ratio of C3bB:FD of 1:0.1 and the mixture was incubated for another 5 min at 37 • C, after which the C3 convertase was stored on ice till further use. C3 convertase was diluted to 1.5 µM with icecold buffer (20 mM HEPES pH 7.4, 150 mM NaCl and 5 mM MgCl 2 ). 
Either 6 µM P N1/456 or P N12/456 or an equal volume of buffer (control) was added to the C3 convertase in a ratio of 1:2, resulting in a final concentration of 2 µM P N1/456 or P N12/456 and 1 µM C3 convertase. The mixture was incubated at 37 °C for 1 h and subsequently put on ice. The amount of C3 convertase was analyzed by analytical SEC using a Superdex 200 10/300 Increase pre-equilibrated with 20 mM HEPES pH 7.4, 150 mM NaCl, and 5 mM MgCl 2 at 18 °C on a Shimadzu FPLC.
Surface-Plasmon Resonance
C3b was generated from C3 through the addition of FB and FD to a C3:FB:FD molar ratio of 1:0.5:0.03 in the presence of 5 mM MgCl 2 and incubation at 37 °C. At 10 min intervals fresh FB was added to ensure complete conversion of C3 to C3b. Subsequently, C3b was biotinylated on the free cysteine that is generated after hydrolysis of the reactive thioester (42); EZ-Link Maleimide-PEG2-Biotin (Thermofisher) was added to a final concentration of 1 mM to the freshly produced C3b (13 µM) and the mixture was incubated for 3 h at room temperature. C3b was separated from unreacted Biotin-PEG2-Maleimide by SEC using a Superdex 200 10/300 Increase column pre-equilibrated in 20 mM HEPES pH 7.4, 150 mM NaCl. Purity and conversion of C3 to C3b were analyzed by SDS-PAGE. To analyze equilibrium binding to monomerized properdin, we used P N1 ′ /456 , which includes an additional Cys-Pro (res. 133-134) at the C-terminus of TSR1. P N1 ′ /456 (45 µM) was biotinylated as described for C3b and separated from excess biotin with a 5 mL HiTrap Desalting column (GE Healthcare) pre-equilibrated in 20 mM HEPES pH 7.4, 150 mM NaCl. Biotinylated proteins were spotted on a SensEye P-Strep chip (SensEye) at 50 nM for 60 min with a continuous flow microspotter (CFM, Wasatch). Equilibrium binding kinetics were analyzed using an IBIS-MX96 (IBIS Technologies). All experiments were performed in 20 mM HEPES pH 7.4, 0.005% (v/v) Tween-20, 150 mM NaCl at 4 µl/s. For experiments involving C3bB and C3bBb the buffer was supplemented with 5 mM MgCl 2 . Analyses at low ionic strength were performed at a NaCl concentration of 50 mM. Analytes were injected from low to high concentration in 14 two-fold incremental steps. In equilibrium binding analyses involving C3bB or C3bBb, C3bB was generated on a C3b-coated SPR surface by injecting 100 nM FB dgf ‡ for 5 min prior to each analyte injection. C3bBb was generated from C3bB by injections of 100 nM FD for 5 min after each FB injection. Where indicated, DAF1-4 (1 µM), FD (100 nM), and/or Salp20 [1 µM for experiments with C3b alone and 5 µM in experiments with C3 (pro-)convertase] were injected to regenerate the C3b surface. Salp20, a properdin inhibitor from deer tick (43), was required to dissociate full-length properdin from C3b. In all experiments, the SPR surface was washed with buffer supplemented with 1 M NaCl at 8 µl/s for 30 s at the end of each cycle. Temperature was kept constant at 25 °C. Prism (GraphPad) was used for data analysis. K D values were determined by fitting the end-point data to Y = Bmax * X / (K D + X) + Background.
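The one-site end-point fit quoted above can be reproduced with any nonlinear least-squares routine. Below is a minimal sketch in Python/SciPy; the concentration series and responses are made-up placeholders, not data from this study.

```python
# Minimal sketch (placeholder data): fit SPR end-point responses to the one-site model
# Y = Bmax * X / (KD + X) + Background used above for KD determination.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd, background):
    """Equilibrium response of a single-site binding isotherm."""
    return bmax * conc / (kd + conc) + background

# 14 two-fold incremental analyte concentrations (µM), mimicking the injection series
conc = 10.0 / 2 ** np.arange(13, -1, -1)
rng = np.random.default_rng(0)
response = one_site(conc, bmax=120.0, kd=6.8, background=5.0) + rng.normal(0, 2, conc.size)

popt, pcov = curve_fit(one_site, conc, response, p0=[100.0, 1.0, 0.0])
kd_fit, kd_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"KD = {kd_fit:.2f} ± {kd_err:.2f} µM")
```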
Crystallization, Data Collection, and Structure Determination
P N1/456 and the C3/C3b-CTC domain were dialyzed overnight at 4 °C using a 3.5 kDa cutoff Slide-A-Lyzer Mini Dialysis Unit (Thermo Scientific) against 10 mM HEPES, 50 mM NaCl, pH 7.4. The N-linked glycan on Asn428 of P N1/456 was removed by including 1% v/v EndoHF (New England BioLabs) during the dialysis. P N1/456 , P N12/456 , and the C3/C3b-CTC domain were concentrated to 8.7 mg/ml, 10 mg/ml, and 10.3 mg/ml, respectively. Crystals were obtained using the sitting drop vapor diffusion method at 18 °C. Crystals of P N12/456 were grown in 100 mM sodium citrate pH 5.5 and 20% (w/v) PEG 3,000 and cryoprotected by soaking in mother liquor supplemented with 25% (v/v) glycerol. Crystals of P N1/456 were grown in 0.2 M potassium sulfate and 20% (w/v) PEG 3350, and cryoprotected by soaking in mother liquor supplemented with 25% (v/v) ethylene glycol. P N1/456 and C3/C3b-CTC were mixed in a 1:1 molar ratio at 8 mg/ml and crystals were grown in 8% v/v Tacsimate pH 5.0 and 20% (w/v) PEG 3350, and cryoprotected by soaking in mother liquor supplemented with 25% glycerol. After harvesting, crystals were cryo-cooled by plunge freezing in liquid N 2 . All diffraction data were collected at the European Synchrotron Radiation Facility (ESRF) on beamlines ID29 (P N12/456 ) and ID23-1 (P N1/456 -CTC, P N1/456 ). The diffraction images were processed with DIALS (44) and the integrated reflection data were then anisotropically truncated with the STARANISO web server (45). Structures were solved by molecular replacement using Phaser (46). Atomic models were optimized by alternating between refinement using REFMAC (47) and manual building in Coot (48). C- and O-linked glycosylation restraints were generated within Coot, using ACEDRG (49). The structure of P N12/456 was refined with restraints generated from P N1/456 using ProSMART (50). Data were deposited at the RCSB Protein Data Bank (51) with PDB IDs 6S08, 6S0A, and 6S0B. We also re-analyzed data deposited for Pc in complex with C3bBb-SCIN (32) (PDB ID: 5M6W). An initial position for the properdin molecule was obtained by superposing our P N1/456 -CTC model onto the CTC domain of the two C3b molecules in the model deposited by Pedersen et al. (32). Subsequently, TSR2 from P N12/456 was added and manually adjusted to fit the density. In addition, for one of the two copies in the asymmetric unit, density corresponding to TSR3 was apparent. A TSR model derived from TSR2 from thrombospondin-1 [PDB ID: 1LSL (30)], containing the canonical cysteines and Trp-ladder residues, but otherwise consisting of poly-alanines, was placed into this density. The resulting model was further refined using the LORESTR refinement pipeline (52). Coordinates of the re-refined properdin-C3bBb-SCIN complex (PDB ID: 5M6W) are available from the authors upon request.
Production of Monomerized Properdin by Co-expression of N- and C-Terminal Fragments
We generated N-terminal constructs of properdin, comprising the N-terminal domain of unknown fold and TSR1, TSR2, and TSR3, denoted P N1 (res. 28-132), P N12 (res. , and P N123 (res. , and a C-terminal construct comprising TSR4, TSR5, and TSR6, P 456 (res. 256-469). Small-scale expression of isolated His 6 -tagged terminal fragments followed by IMAC-affinity purification resulted in no significant expression of P 456 , whereas co-expression of N- and C-terminal fragments yielded both fragments in a ∼1:1 ratio in all cases. We therefore decided to continue with large-scale co-expression of the two shorter N-terminal fragments, P N1 and P N12 , with P 456 , with the latter carrying a C-terminal His 6 -tag (see section Materials and Methods). IMAC-affinity purification yielded stable protein complexes consistent with one-to-one non-covalent complexes of P N1 with P 456 and P N12 with P 456 , denoted P N1/456 and P N12/456 , respectively.
Both P N1/456 and P N12/456 yielded monodisperse peaks during size-exclusion chromatography (SEC) consistent with a single monomerized species (Figures 1A,B), whereas recombinant full-length properdin produced an SEC profile with multiple peaks consistent with a mixture of dimeric, trimeric, and tetrameric properdin (Figure 1C). Large-scale expression and purification of P N1/456 and P N12/456 yielded ca. 5-8 mg per liter culture.
Monomerized Properdin Binds and Stabilizes C3 Convertases
Stabilization of C3 convertases was analyzed by monitoring the decay of pre-formed C3bBb in the presence and absence of properdin (Figures 1D,E). In the absence of properdin, ∼75% of the C3bBb dgf ‡ was dissociated into C3b and Bb dgf ‡ after 1 h at 37 °C, whereas in the presence of P N1/456 or P N12/456 dissociation of C3bBb dgf ‡ was reduced to ∼20-25%, indicating that P N1/456 and P N12/456 stabilized the C3 convertase to a similar extent. Binding affinities of P N12/456 for C3b, pro-convertase C3bB and convertase C3bBb were determined using surface plasmon resonance (SPR) equilibrium binding experiments. C3b was biotinylated at its reactive thioester, which allows coupling to streptavidin-coated SPR sensor chips in an orientation reflecting that of surface-bound C3b. Under physiological salt conditions, P N12/456 bound C3b with a K D of 6.8 ± 0.2 µM, which is similar to the K D of 7.8 µM reported by Pedersen et al. for single properdin vertices generated by proteolytic cleavage (32), but reflects much weaker binding than that of oligomeric properdin, for which we measured an apparent K D of 22 ± 2 nM (Figure 2). At low ionic strength (50 mM NaCl), the interaction between P N12/456 and C3b appeared much stronger, with a K D of 0.69 ± 0.04 µM. Next, we generated pro-convertases C3bB and convertases C3bBb on the chip (see section Materials and Methods). P N12/456 bound C3bB and C3bBb with a K D of 98 ± 2 nM and 34 ± 1 nM, respectively (Figure 3), whereas properdin oligomers bound with an apparent K D of 4.6 ± 1 nM and 4.4 ± 1 nM, respectively. Thus, P N12/456 binds to C3b, C3bB and C3bBb (in order of increasing affinity). Previous data (13,32) suggested that the main interaction site of properdin with C3b is localized on the C3b-CTC domain. Therefore, we analyzed binding of the isolated C3/C3b-CTC domain to a P N1 ′ /456 -coated SPR chip. The C3/C3b-CTC domain binds P N1 ′ /456 with a K D of 18.6 ± 1.6 µM, which is comparable to the K D of 6.8 ± 0.2 µM we observed for C3b and P N12/456 , suggesting that the primary binding interface of C3b is indeed provided by the CTC domain (Figure 2). Overall, these data indicated that the non-covalent complexes P N1/456 and P N12/456 bound C3b and stabilized C3bBb similar to an excised monomeric version of full-length oligomeric properdin.
Structure Determination of Monomerized Properdin and Its Complex With C3/C3b-CTC
P N1/456 and P N12/456 crystallized as thin plates and yielded highly anisotropic data, with anisotropic resolution limits of 2.0-2.9 Å and 2.5-3.9 Å, respectively. P N1/456 in complex with C3/C3b-CTC crystallized as long rods and pyramids. While the pyramid-shaped crystals showed poor diffraction, P N1/456 -CTC rod-shaped crystals diffracted anisotropically with resolution limits of 2.3-2.7 Å. Data collection statistics are shown in Table 1. We first determined the crystal structure of P N1/456 in complex with C3/C3b-CTC using the C3b-CTC domain [PDB ID: 5FO7 (37)] as a search model for molecular replacement with Phaser (46).
A minimal TSR model was generated with Sculptor (53) using a sequence alignment (54) of TSR1, TSR4, TSR5, and TSR6 in combination with TSR2 from thrombospondin-1 [PDB ID: 1LSL (30)]. This model was then used in subsequent rounds of molecular replacement, which resulted in the positioning of TSR1, 4, 5, and half of TSR6 accounting for ∼80% of the total structure. The N-terminal domain and the remaining part of TSR6 were built using Coot (48). Structure determination continued with further rounds of model building (48) and structure refinement (47), until convergence. The refined model of P N1/456 taken from P N1/456 -CTC was used in molecular replacement to solve the structures of P N1/456 and P N12/456 . After initial placement, P N12/456 was completed by molecular replacement using the TSR model. Model refinement statistics for all structures are listed in Table 1 final models are shown in Figure 4. Fold of the Properdin N-Terminal Domain The crystal structure of properdin revealed that the N-terminal domain (res. 28-76) adopts a compact globular fold, containing two β-sheets and a single α-helix stabilized by three disulphide bonds (Figure 4B). A homology search using the Dali server Properdin-TSR Domains Five of the six TSR domains of properdin are present in the structures of P N1/456 , P N12/456 , and P N1/456 -CTC (Figure 4). The TSR domains of properdin display minor to major variations from the TSR domain fold as described for the structures of TSR2 and 3 from thrombospondin-1 (30); these are shown schematically in Figure 4B. Compared to TSR2 and 3 from thrombospondin-1, properdin domain TSR1 (res. 77-133) lacks a five-residue β-bulge preceding β-strand C, referred to as "jar-handle" motif, that provides H-bonding interactions with the indole ring of the first tryptophan of the Trp-ladder. Instead of this β-bulge, the C-strand in TSR1 is extended by two residues and the typical H-bonding interactions of the β-bulge are substituted by Ser112 in the B-C loop, which is observed within Hbond distance of the Trp80 indole ring. In the TSR1-Trp ladder, a glutamine residue resides at the position of the third arginine, resulting in a lost π-cation interaction with the last Trp. Preceding the prototypical C-terminal Cys (Cys133), TSR1 contains an additional cysteine (Cys132) that connects to Cys170 of TSR2, as observed in the structure of P N12/456 ( Figure 5A). However, our construct P N1 is terminated at Cys132. As a consequence, we observed a non-native disulphide bond between Cys93-Cys132 and increased disorder at the Cterminal end of TSR1 in the structure of P N1/456 and P N1/456 -CTC ( Figure 5B); however, the overall fold of TSR1 was not affected. TSR2 (res. 134-191) displayed the consensus TSR fold, with only minor deviations besides the additional cysteine (Cys170).However, this domain was not well-defined by the density as reflected by its high atomic displacement parameters (ADP) (Figure 4A). TSR4 (res. 256-312) showed striking variations in the structures of P N1/456 , P N12/456 , and P N1/456 -CTC (Figures 6A,B). In the Trp-ladder of TSR4 the canonical third tryptophan is replaced by a valine (Val266). A comparison of TSR4 from all three structures shows that TSR4 displays a bending-like motion at this position (Figures 6A,B). 
The distal part of TSR4 is held in place by interaction with the STB domain, but the proximal part, where the short Trp-ladder (comprising Trp260 and Trp263 in strand A, Arg282 in strand B and Arg302 in strand C) is located, is at a different position in each of the three structures, resulting in a distance of 28.3 Å between the Cα atoms of TSR4 Ser255 in P N1/456 and P N12/456 . TSR5 (res. 313-376) displayed well-defined electron density in all three structures and closely resembled the TSR-consensus fold. However, the canonical third arginine in strand B of TSR5 is replaced by Gln344. Gln344 forms an H-bond with Arg368 from strand C, and Arg364 is in a π-cation stacking conformation with Trp324. Thus, the stacking of Trp-ladder residues is effectively conserved. The most striking feature of TSR5 is a six-residue insertion (res. 328-333) (29) in the A-B loop between Cys327 and Cys337 that forms a loop protruding from the TSR domain. TSR6 (res. 377-469) showed a larger deviation from the typical TSR-fold and has a boomerang-like appearance, due to a 22-residue-long insertion (res. 412-434) (29) in the B-C loop. This insertion forms a β-hairpin loop that protrudes from the TSR6 core (Figure 4B). The core part of TSR6 makes an angle of 147° with TSR5, pointing toward TSR1, and the TSR6 β-hairpin protrudes at a 70° angle from the domain core toward and beyond TSR5. Residues 430-438 from the TSR6 β-hairpin are part of a β-sheet with the end of strand C from TSR5 (Figure 7). A hydrophobic core consisting of Pro435, Tyr371, and Ile373 from TSR5 and Leu378, Leu411, Pro412, Tyr414, Val418, Val429, and Phe431 from TSR6 stabilizes the base of the β-hairpin. Similar to TSR1, TSR6 lacks a "jar-handle" motif. In this case, the jar-handle H-bonding interactions are substituted by the backbone carbonyl from Glu440 in the B-C loop, which forms an H-bond with NH1 of the first Trp, Trp382, of the Trp-ladder. In TSR6, Arg405 is not stabilized by a residue from strand C, and both Arg405 and Trp382 are not in a π-cation stacking conformation and thus do not contribute to the stability of the Trp-ladder.
Properdin Glycosylation
The tryptophans of TSR Trp-ladders are typically C-type mannosylated, where the C1 of an α-mannose is attached to the C2 in the indole ring of the Trp (34,35,59). We could clearly identify C-mannosylation for 11 out of 14 Trp-ladder tryptophans (Figure 4B). For the majority of these, we observe that the O2 oxygen of the mannosyl-Trp moiety interacts with its backbone nitrogen, whereas the O5 and O6 oxygens form H-bonds with the side chain of the adjacent Arg, which further stabilizes the TSR domain fold (Figures 6B,C). In addition to C-mannosylation, TSR domains usually display O-linked glycosylation of a Thr or Ser residue that precedes the cysteine in loop A-B (35,60,61). This glycosylation constitutes the attachment of a β-glucose-1,3α-fucose glycan through a linkage between the C1 atom of the fucose and the Thr or Ser side chain oxygen (61). In P N12/456 , we observe O-fucosylation of TSR1 (Thr92), TSR2 (Thr151), and TSR4 (Thr272) (Figure 4B), although the TSR2 glycan is poorly defined. In all structures, the O-fucosylation of TSR4 is especially well-defined and is involved in properdin oligomerization, as described below.
FIGURE 7 | Interactions between TSR5 and TSR6 stabilize the TSR6 β-hairpin. Cartoon representation of the TSR5/TSR6 (green/red) interface with residues that form the hydrophobic core that stabilizes the TSR6 β-hairpin shown in sticks.
Finally, we observe Nglycosylation of Asn428, which is located in the B-C loop insertion in TSR6 and has been shown not to be important in properdin function (29). Properdin Oligomerization A previously reported model for properdin oligomerization described the properdin vertex as a ring formed by four TSR domains each comprising a quarter of the ring (13) and formed by two inter protomer contacts (13,27). The structures of P N1/456 and P N12/456 showed that the properdin vertex consists of the STB domain, TSR1, part of TSR4, TSR5, and TSR6 domains. These domains form a ring-like structure through interfaces formed by the STB and TSR1 domains with TSR4 and TSR6, respectively. TSR2 and ∼66% of TSR4 are protruding from the vertex and form the properdin edges along with TSR3, which is absent in P N1/456 and P N12/456 . The boomerang-shaped TSR6 forms approximately half of the ring, with an extensive interface between the distal end of TSR6 and TSR1, and the long insertion in the B-C loop of TSR6 locked firmly in place by interactions with TSR5 (Figure 7). The interface between TSR6 and TSR1 is formed by the distal end of TSR6, which includes the A-B loop and the C-terminal region of strand C, and the β-sheet of TSR1 (Figures 8A,B). This interface is predominantly mediated by hydrophobic interactions, involving residues Leu99, Tyr101, Trp122, and Leu124 from TSR1 and Pro399, Pro459, Pro464, Cys391-Cys455, and Cys395-Cys461 from TSR6. In addition, hydrogen bonds are formed between the backbone atoms of Leu124 from TSR1 and Cys391 in TSR6, respectively, and between sidechains of Ser90 and Ser97 and the backbone carbonyl of His457 and Leu456, respectively. Additionally, salt bridges are formed between Glu95 and Arg103 in TSR1 and Arg401 and Asp463 in TSR6. The interaction between Glu95 and Arg401 is not visible in P N1/456 -CTC since the region containing Glu95 is not well-defined in this structure. The second interface between properdin protomers is formed by the STB domain and TSR4 (Figures 8A,B). This interface is characterized by a hydrophobic core involving Leu47, Val51, Leu58, Phe62 from the STB domain and Leu275, Ile305, and Pro311 in TSR4. In addition, there are hydrophilic interactions between Asp55 and the backbone carbonyl moiety of Leu58 from the STB domain and Asn307 and the backbone nitrogen of Cys312 of TSR4, respectively. The O-linked glycan on Thr272 from TSR4 contributes directly to the interaction via a hydrogen bond with Asn59 on the STB domain as well as multiple watermediated interactions. To gain insights into the properdin interactions with the C3bBb complex, we modeled and refined the structure of the proteolytic fragment Pc in complex with the SCIN-stabilized C3bBb convertase (PDB ID: 5M6W) (32). Modeling properdin in the density of 5M6W (see section Materials and Methods) resulted in a significant improvement of the refinement statistics (Rfree/Rwork = 0.264/0.219, compared to Rfree/Rwork = 0.315/0.262, when not including properdin). The structure comprises two copies of the SCIN stabilized C3bBbPc complex with density for TSR3 only detectable in one copy (Figure 9D). In both copies of the C3bBb-SCIN-Pc complex, the ring-like structure of properdin and the interface with C3b are similar as observed for P N1/456 in complex with C3/C3b CTC domain. Although the stirrup loops of TSR5 and TSR6 are in the vicinity of the VWA domain of Bb, we observe only two contacts between properdin and Bb within 3.2 Å in the model. 
The side chains of Lys350 (325 in 5M6W) of Bb and Val421 of properdin are within 2.8 Å, and the side chains of Met394 (369 in 5M6W) of Bb and Glu422 of properdin are within 3.1 Å; beyond these two marginal contacts, no direct interactions are apparent between properdin and Bb in the structural model (Figure 9E). The two C3bBbPc complexes in the asymmetric unit show variation in both TSR4 and TSR2-TSR3; in one of the complexes the conformation of TSR4 is similar to that of TSR4 from P N1/456 , whereas in the second C3bBbPc complex TSR4 is once again bent at the position of Val266, but at an angle that does not correspond to TSR4 in P N1/456 , P N12/456 , or P N1/456 -CTC, showing that TSR4 has an even greater range of motion. This structural variability of the TSR4 conformation results in a ∼60° arc that is covered by TSR4 across all properdin models (Figure 9F). Similarly, TSR2 also shows structural flexibility; the orientation of TSR2 in one C3bBbPc complex matches the orientation observed in P N12/456 , whereas in the other copy TSR2 is at a 58° angle compared to TSR2 from P N12/456 (Figure 10A). Using the different conformations observed for TSR2 and TSR4, we were able to build models for properdin dimers, trimers and tetramers bound to a C3bBb-coated surface (Figure 10B). In these models, the properdin ring-like vertices (comprising STB and TSR1 of one protomer and the distal end of TSR4 ′ plus TSR5 ′ and TSR6 ′ of a second protomer, with domains from the second protomer indicated by an apostrophe) are orientated perpendicular to the plane of the surface, with the edges, comprising TSR2, TSR3 and the proximal part of TSR4, roughly parallel to the surface.
DISCUSSION
Previous biochemical data (11,12,21) have indicated that properdin enhances complement activity by binding and stabilizing surface-bound C3 pro-convertases (C3bB) and convertases (C3bBb) of the alternative pathway. Low-resolution structural data suggested that properdin binds C3 convertases at the α ′ -chain of C3b (13,32), consistent with stabilization through putative bridging interactions between C3b and FB or fragment Bb of the pro-convertase and convertase, respectively. The crystallographic data presented here have provided atomic models of the ring-shaped structures previously observed in low-resolution EM images of full-length oligomeric properdin (13,27) and in a crystal of C3bBb-SCIN in complex with the proteolytic Pc fragment at 6-Å resolution (32). Our high-resolution data reveal the STB-domain fold adopted by the N-terminal domain, the structural variations and post-translational modifications present in the TSR domains, and the non-covalent binding interfaces between N-terminal domains STB and TSR1 and C-terminal domains TSR4 and TSR6, respectively, of two different protomers needed to form the ring-shaped structures of properdin. Next, our data of properdin in complex with the CTC domain of C3b show the interaction details that position properdin on top of a C3b molecule, when C3b is covalently bound to a target surface, and identify two "stirrup-like" loops, formed by inserts into the TSR-folds of TSR5 and TSR6, as interaction sites for binding the VWA domain of FB and Bb for stabilizing the C3 pro-convertase and convertase, respectively.
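Inter-atomic distances such as the Cα-Cα separation and the contact distances quoted above can be re-measured directly from the deposited coordinates. A minimal Bio.PDB sketch follows; the file name, chain IDs and residue numbers are placeholders and must be checked against the actual 6S08/6S0A/6S0B depositions before use.

```python
# Illustrative sketch (not part of the study): measure a Cα-Cα distance from a
# deposited mmCIF file. "6s0b.cif" is assumed to be a local copy of a PDB entry;
# the chain IDs and residue numbers below are placeholders, not verified values.
from Bio.PDB import MMCIFParser

structure = MMCIFParser(QUIET=True).get_structure("properdin", "6s0b.cif")
model = structure[0]

ca_1 = model["A"][255]["CA"]   # assumed: residue 255 Cα in chain A
ca_2 = model["B"][255]["CA"]   # assumed: residue 255 Cα in chain B
print(f"Cα-Cα distance: {ca_1 - ca_2:.1f} Å")  # Bio.PDB Atom subtraction returns the distance
```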
Mass spectrometry of plasma-derived full-length properdin indicated complete C-mannosylation of 14 out of the 17 tryptophans present in the WRWRWR motifs and no or partial C-mannosylation of the remaining three (Trp80, 202, and 318), in addition to three fully (Thr151, Ser208, and Thr272) and one partially occupied (Thr92) O-fucosylation sites and a single N-linked glycosylation site (Asn428) (34,35) (Figure 4B). We observed that the C-mannosyl moieties on tryptophan are part of common H-bonding networks that also include the backbone nitrogen of the mannosylated Trp (positioned on strand A), the guanidinium head group of the arginine distal to the Trp (in strand B) and a polar or negatively charged side chain of the residue opposing the Arg (in strand C), thus bridging all three strands and providing stabilization to the TSR fold (Figure 6C). Similar arrangements are found in the structures of TSR domains of C8 (PDB ID: 3OJY), C9 (PDB ID: 6CXO), ADAMTS13 (PDB ID: 3VN4), and Unc5a (PDB ID: 4V2A). In the case of Unc5a (determined at 2.4-Å resolution), the two mannosyl moieties have not been included in the model, but are clearly visible in the density in a conformation similar to that observed in properdin. In C6 structures (3T5O, 4E0S, 4A5W), the mannoses in the TSR1 and TSR3 domains are absent or modeled in various alternative conformations, possibly due to the relatively low resolution of these structures, ranging from 2.9 to 4.2 Å. In our structures, we observed clear density for all mannosyl moieties, except two (Trp86 and Trp145), of the reported fully C-mannosylated tryptophans (35). Trp145 is located on TSR2, which exhibits overall poor density in the crystal structure of P N12/456 . Very weak densities for a mannosyl moiety at Trp86 of TSR1 were observed in all three structures. The WRWRWR motif in TSR1 lacks the final arginine residue; instead, a glutamine residue is observed at this position. Most likely, the absence of H-bonding potential with a guanidinium moiety at the final position causes local flexibility, explaining the weak density observed for the mannosyl on Trp86. Properdin is N-glycosylated at Asn428 of TSR6, which is located at the base of the β-hairpin insertion. In our structures this glycan is only partially present; however, there is clear density for this glycan in 5M6W. This glycan would not interact with C3bBb upon binding, which is in agreement with previous findings that removal of N-linked glycans had no effect on properdin activity in a hemolytic assay (29). Properdin O-fucosylation is observed in the density at Thr92, Thr151, and Thr272, which are positioned at structurally homologous positions in the A-B loop of TSR1, TSR2, and TSR4. The A-B loop in 63 out of 88 TSR sequences contains the sequence C-X-X-S/T-C, where the serine or threonine is O-fucosylated (60). Similar to TSR1 from C6 and the TSR domain from ADAMTS13, the O-glucosyl-β1,3-fucose is packed against the disulphide bridge that connects loop A-B to the terminal residue of the TSR domain. Oligomeric full-length properdin consists of ring-shaped vertices, formed by N- and C-terminal domains of separate protomers (13,27). The crystal structures of P N1/456 and P N12/456 , obtained by co-expression of N- and C-terminal parts, clearly revealed that the ring-shaped vertices are formed by two contact interfaces between N-terminal domains of one protomer and the C-terminal domains of another protomer (Figure 4A). The N-terminal domain adopts an STB fold and binds the TSR4 ′ domain of another protomer. This interface, which is dominated by hydrophobic interactions, is further stabilized by additional H-bonds between STB Asn59 and the O-glucosyl-β1,3-fucose on Thr272 of TSR4 ′ . A second protomer-protomer interface is observed between TSR1 and TSR6 ′ . This interface is formed between the distal end of TSR6 ′ and the β-sheet at the core of TSR1 and involves hydrophobic interactions as well as several H-bonds and two salt bridges. Overall, the ring-shaped vertex of properdin is formed by STB-TSR1 of one protomer and approximately 1/3 of TSR4 ′ , TSR5 ′ and an extended and curved TSR6 ′ of a second protomer (Figure 8). TSR2, TSR3, and the remaining part of TSR4 consequently form the edges in properdin oligomers.
FIGURE 10 | Models of properdin oligomers binding to surface-bound C3 convertases. (A) Structures of P N1/456 (red), P N12/456 (yellow), P N1/456 -CTC (green), and the copy from Pc-C3bBb-SCIN lacking density for TSR3 (pink) superimposed on TSR5 of the other copy of Pc-C3bBb-SCIN (purple). (B) Ribbon representation of properdin oligomers binding to C3 convertases viewed from the front (left) and top (right). C3b and Bb are colored gray and wheat, respectively; Gln1013 from the C3b thioester is shown as red spheres. Each protomer in a properdin oligomer is colored differently. Top: properdin dimer binding to two C3 convertases (for this model we used P N12/456 with TSR3 positioned relative to TSR2 as it is in the copy of Pc-C3bBb-SCIN that contains TSR3). Middle: properdin trimer binding to three C3 convertases (for this model the properdin copy from Pc-C3bBb-SCIN that contains TSR3 was used). Bottom: properdin tetramer binding to four C3 convertases (this model was generated with TSR2 as in the middle panel but using TSR4 from P N1/456 ).
Consistent with low-resolution EM and X-ray data (13,32), we have shown that the TSR5 domain of properdin provides the main interaction interface with C3b by binding along the length of the C-terminal α-helix of the C3b α ′ -chain (Figure 9A). Protonation of properdin His369, at this main interface, would yield formation of a salt-bridge with C3b Glu1654 (Figure 9C), explaining increased binding of properdin to C3b at low pH (32,62). Comparison with other structures of C3b (37) indicates that binding of properdin to the CTC domain does not require, and likely does not induce, large conformational changes in C3b. We identified two "stirrup"-like loops, residues 328-336 of TSR5 and 419-426 of TSR6, which embrace the end of the C-terminal α-helix of CTC (Figure 9C). Cleavage of properdin in the TSR5-stirrup loop (between res. 333-334) leads to loss of C3b binding (and, hence, loss of convertase stabilization) (29), which indicates the importance of an intact TSR5 stirrup in C3b binding. The only known properdin type III (loss-of-function) mutation, Y414D (63), is located at the base of the TSR6 β-hairpin that constitutes the TSR6 stirrup. Tyr414 is part of a hydrophobic core between TSR5 and TSR6 (Figure 7), and Y414D likely disturbs this hydrophobic core and destabilizes the TSR6 stirrup and hence affects C3b binding or convertase stabilization (63). Monomerized properdin binds the C3 convertase (C3bBb) and pro-convertase (C3bB) strongly, and C3b weakly (K D values of 34 nM, 98 nM, and 6.8 µM, respectively, in agreement with previous data (12,32); Figures 2, 3).
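To put the monomer versus oligomer K D values above into perspective, the sketch below estimates single-site occupancy of C3b under a simple one-site model, f = c/(K D + c), at protomer concentrations derived from the serum range quoted earlier (4-25 µg/ml at ~53 kDa per protomer). This is a purely illustrative back-of-the-envelope calculation; treating the serum range as the free properdin concentration is a simplification.

```python
# Illustrative only: fractional occupancy f = c / (KD + c) for the KD values quoted in
# the text, at properdin protomer concentrations corresponding to 4-25 µg/ml serum levels.
kds_nM = {"monomerized properdin": 6800.0, "oligomeric properdin (apparent)": 22.0}

for label, kd in kds_nM.items():
    for conc_ug_ml in (4.0, 25.0):
        conc_nM = conc_ug_ml / 53000.0 * 1e9 / 1000.0  # µg/ml -> g/L -> mol/L -> nM
        occupancy = conc_nM / (kd + conc_nM)
        print(f"{label}: {conc_ug_ml} µg/ml (~{conc_nM:.0f} nM) -> {occupancy:.0%} bound")
```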
Superposition of P N1/456 -CTC onto C3bB and C3bBD (PDB ID: 2XWJ and 2XWB) (39) suggests that the two stirrups are ideally positioned to bridge interactions between C3b and the VWA domain of FB and Bb. The TSR5 stirrup is in close proximity to the N-terminal region of CCP1 in the Ba region of FB, with only one potential Hbond between properdin Asn331 and FB Ser78. The proximity of properdin to FB-CCP1 explains the cross-links observed between Ba and properdin by Farries et al. (64). Re-analysis of C3bBb-SCIN with Pc (at 6-Å resolution) is consistent with the interactions that we observed at high resolution between P N1/456 and an isolated C3/C3b-CTC domain (Figures 9D,E). The low-resolution data of Pc-C3bBb-SCIN suggests small rearrangements in the TSR6 stirrup loop. Nevertheless, the expected additional interactions between Bb and properdin are not observed in Pc-C3bBb-SCIN. Potentially, the inhibitor SCIN enforces a C3bBb conformation that is not compatible with stabilization by properdin (32). Therefore, the interaction details between properdin and FB and Bb that explain higher binding affinities for the pro-convertase and convertase remain unfortunately unresolved. Besides promoting the formation of, and stabilizing the alternative-pathway C3 convertase, properdin is also known to inhibit FI activity (12,13,65); based on kinetic data, this is likely due to competition for the same binding site on C3b (12). Superposition of P N1/456 -CTC with C3b in complex with FH and FI (66) (PDB ID: 5O32) shows that, in a putative properdin-C3b-FH-FI complex, TSR6 of properdin severely clashes with the FI membrane-attack complex domain in FI (Figure 11). Therefore, the structural data supports competitive binding of properdin and FI for the same binding site. No overlaps are observed between properdin and regulators FH, DAF and MCP, when superposing P N1/456 -CTC with other C3b-regulator complexes (37). Thus, reduced decay-acceleration activity of FH and DAF (32) is most likely due to the increased stability of C3bBb upon properdin binding. Native properdin occurs predominantly as a mixture of dimers, trimers and tetramers (8), observed as flexible lines, triangles and quadrilaterals in negative-stain EM (13,27). The oligomers bind with high avidity (with an apparent K D of 22 nM) to surface-bound C3b compared to monomerized properdin binding a single C3b (K D of 6.8 µM). Consistently, properdin tetramers are more active than trimers, which are more active than dimers (8,9). In the structures presented here, overlaid in Figure 10A, we observed structural variability predominantly in TSR2 and TSR4. These variations occur mostly in the plane of the membrane of a properdin oligomer bound to an opsonized surface, which allowed us to create composite models representing symmetric properdin dimers, trimers and tetramers binding to surface-bound C3b, C3bB, or C3bBb in a straightforward manner ( Figure 10B). The ability of properdin to form flexible oligomers is crucial to enhance complement activation only on surfaces by binding deposited C3b molecules with high avidity, while promoting convertase formation (11) and stabilizing formed convertases by binding C3bB and C3bBb complexes with high affinity (12,32). Local production of properdin by immune cells would result in further enhancement near affected sites (23,25,26). DATA AVAILABILITY The datasets generated for this study can be found in the RSCB Protein Data Bank with PDB IDs 6S08, 6S0A, and 6S0B.
lncRNA CDKN2B-AS1 regulates collagen expression The long noncoding RNA CDKN2B-AS1 harbors a major coronary artery disease risk haplotype, which is also associated with progressive forms of the oral inflammatory disease periodontitis as well as myocardial infarction (MI). Despite extensive research, there is currently no broad consensus on the function of CDKN2B-AS1 that would explain a common molecular role of this lncRNA in these diseases. Our aim was to investigate the role of CDKN2B-AS1 in gingival cells to better understand the molecular mechanisms underlying the increased risk of progressive periodontitis. We downregulated CDKN2B-AS1 transcript levels in primary gingival fibroblasts with LNA GapmeRs. Following RNA-sequencing, we performed differential expression and gene set enrichment analyses and Western blotting. Putative causal alleles were searched by analyzing associated DNA sequence variants for changes of predicted transcription factor binding sites. We functionally characterized putative functional alleles using luciferase-reporter and antibody electrophoretic mobility shift assays in gingival fibroblasts and HeLa cells. Of all gene sets analysed, collagen biosynthesis was most significantly upregulated (Padj = 9.7 × 10⁻⁵; AUC > 0.65), with the CAD and MI risk gene COL4A1 showing the strongest upregulation within the enriched gene sets (fold change = 12.13, Padj = 4.9 × 10⁻²⁵). The inflammatory "TNFA signaling via NFKB" gene set was downregulated the most (Padj = 1 × 10⁻⁵; AUC = 0.60). On the single-gene level, CAPNS2, involved in extracellular matrix organization, was the top upregulated protein-coding gene (fold change = 48.5, P < 9 × 10⁻²⁴). The risk variant rs10757278 altered a binding site of the pathogen-responsive transcription factor STAT1 (P = 5.8 × 10⁻⁶). The rs10757278-G allele reduced STAT1 binding by 14.4%, and rs10757278-A decreased luciferase activity in gingival fibroblasts by 41.2% (P = 0.0056), corresponding with GTEx data. CDKN2B-AS1 represses collagen gene expression in gingival fibroblasts. Allele-specific CDKN2B-AS1 expression in response to inflammatory factors may dysregulate collagen biosynthesis and, in consequence, affect tissue barrier and atherosclerotic plaque stability. Supplementary Information The online version contains supplementary material available at 10.1007/s00439-024-02674-1.
Introduction
CDKN2B-AS1 (CDKN2B antisense RNA 1; ANRIL) encodes a long non-coding RNA (lncRNA), a class of molecules that are considered critical players of gene regulation in multiple biological processes. In general, lncRNAs act as transcriptional repressors, downregulating gene activity by directly interacting with the chromatin or mRNA of their target genes. Despite the biased expression in the colon, CDKN2B-AS1 is the major genetic risk locus of coronary artery disease (CAD) (Consortium et al. 2013; WTCCC 2007). The CAD-associated haplotype block (tagged by GWAS lead SNP rs1333049) is also associated with progressive early-onset forms of the oral inflammatory disease periodontitis (Schaefer et al. 2009) and myocardial infarction (MI) (Helgadottir et al. 2007b; Myocardial Infarction Genetics et al. 2009; Nikpay et al. 2015) (Munz et al. 2018; Schaefer et al. 2009). Genetic risk variants of this haplotype block have an influence on CDKN2B-AS1 transcript levels (Folkersen et al. 2009), implying a molecular biological link between susceptibility for these diseases and regulation of CDKN2B-AS1 expression.
However, in general, genes that had differential expression after increasing or decreasing CDKN2B-AS1 activity showed little overlap between the different studies, possibly due to the heterogeneity of the methods and cell models used. The aim of the current study was to investigate the role of CDKN2B-AS1 in gingival fibroblasts. To this end, we investigated the cell type-specific downstream regulatory effects of CDKN2B-AS1 on gene expression in gingival fibroblasts. Here, we followed the rationale of previous studies (Alfeghaly et al. 2021; Bochenek et al. 2013; Hubberten et al. 2019; Rankin et al. 2019), which suggested that overexpression of CDKN2B-AS1 can induce cellular stress and that, in cells where CDKN2B-AS1 is already naturally expressed, overexpression does not lead to further downregulation of the potentially suppressed target genes because they would have already been silenced. In addition, we searched for biologically functional genetic variants in the associated haplotype block in order to obtain information about the upstream signaling events that regulate CDKN2B-AS1 activity. Here, we show that CDKN2B-AS1 suppresses collagen synthesis in gingival fibroblasts and that the periodontitis- and infarction-associated susceptibility gene polymorphism rs10757278-G reduced STAT1 binding and increased CDKN2B-AS1 expression.
Quantitative real-time PCR
Complementary DNA (cDNA) was synthesized from 100 ng total RNA, using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Thermo Fisher Scientific). Quantitative real-time PCR (qRT-PCR) was performed using SYBR Select Master Mix (Applied Biosystems) to validate downregulation of CDKN2B-AS1 mRNA levels. The results were analyzed using the 2^(-ΔCT) or 2^(-ΔΔCT) method and normalized to GAPDH as an internal control. The primer sequences are given in Supplementary Materials Table S1.
RNA-sequencing
Total RNA was extracted from pGFs and iHGF cells using the RNeasy Mini Kit. 500 ng total RNA of transfected cell cultures was sequenced with 16 million reads (75 bp single end) on a NextSeq 500 using the NextSeq 500/550 High Output Kit v2.5 (75 cycles). RNA-Seq was performed at the Berlin Institute of Health, Core Facility Genomics. Reads were aligned to the human genome sequence (build GRCh38.p7) using the STAR aligner v. 2.7.8a (Dobin et al. 2013). Quality control (QC) of the reads was inspected using the multiqc reporting tool (Ewels et al. 2016), summarizing a number of approaches, including fastqc (available online at http://www.bioinformatics.babraham.ac.uk/projects/fastqc), dupradar (Sayols et al. 2016), qualimap (Garcia-Alcalde et al. 2012), and RNA-SeqC (DeLuca et al. 2012). Raw counts were extracted using the STAR program. For differential gene expression, the R package DESeq2 (Love et al. 2014), version 1.30, was used. Gene set enrichment was performed using the CERNO test from the tmod package (Zyla et al. 2019), version 0.50.07, using the gene expression profiling-based gene sets included in the package, as well as the MSigDB (Liberzon et al. 2015), v.7.4.1. For the hypergeometric test and the Gene Ontology gene sets, the goseq package, version 1.38 (Young et al. 2010), was used. The P values of the differentially expressed genes were corrected for multiple testing using the Benjamini-Hochberg correction. The corrected P values are given as q values (false discovery rate [FDR]).
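For readers unfamiliar with the Benjamini-Hochberg adjustment applied above (DESeq2 performs it internally), a minimal, self-contained sketch is given below; the p-values are placeholders.

```python
# Minimal sketch of the Benjamini-Hochberg correction (q-values / FDR) mentioned above.
import numpy as np

def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values for an array of raw p-values."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)           # p_(i) * m / i
    # enforce monotonicity from the largest rank downwards and cap at 1
    q = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
    out = np.empty(m)
    out[order] = q
    return out

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]))
```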
Western blotting
To validate protein expression in iHGF cells after knockdown of CDKN2B-AS1, the total protein of iHGF cells was extracted using RIPA with I.P. for 30 min on ice. The lysed samples were separated on polyacrylamide gels and transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore, USA). The PVDF membranes were then incubated with primary antibodies against COL4A1 (1:1000, Cell Signaling), COL6A1 (1:500, Santa Cruz), CAPNS2 (1:1000, Invitrogen) and β-actin (1:2000, Santa Cruz) overnight at 4 °C. Subsequently, the membranes were incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies at room temperature for 1 h. The signal was acquired using chemiluminescence detection (Chemostar Touch, INTAS, India). ImageJ software was then used to calculate the band intensities, and the β-actin antibody was used as an internal control for signal normalization.
Screening for functional periodontitis-associated variants
For screening functional periodontitis-associated variants, LD between the lead SNP rs1333049 and other common SNPs of this haplotype block was assessed using the LDproxy Tool (Machiela and Chanock 2015) with population groups CEU (Utah Residents from North and West Europe) and GBR (British in England and Scotland). We assessed LD using r² as a measure of correlation of alleles for two genetic variants (Supplementary Materials Fig. S2 and Table 1). We analyzed whether these SNPs map to chromatin elements that correlate with regulatory functions of gene expression, as provided by ENCODE (ENCODE-Project-Consortium 2012) (Supplementary Materials Fig. S3). To annotate eQTL effects of the associated SNPs, we used the software tool QTLizer (Munz et al. 2020). To investigate whether these SNPs changed predicted TFBSs, we used the TF databases Transfac (Thomas-Chollier et al. 2011) and the open-access database Jaspar (Sandelin et al. 2004). If both Transfac and Jaspar TF matrix files predicted a TF binding affinity at a SNP, with a stronger binding affinity at the common allele compared to the alternative allele, we selected the SNP for functional follow-up experiments. This conservative selection criterion was prespecified to avoid choosing false-positive TFBS predictions for functional follow-up. TF binding motifs were confirmed using the web interface for Position Weight Matrix (PWM) model generation and evaluation, PWMTools (Ambrosini et al. 2018).
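As a side note, the LD measure r² used above can be computed from phased haplotypes as r² = D²/(p₁(1−p₁)p₂(1−p₂)), with D = p₁₂ − p₁p₂. The sketch below illustrates this with a made-up toy example; it is not part of the LDproxy-based pipeline used in the study.

```python
# Illustrative sketch: r^2 between two biallelic variants from phased haplotypes (toy data).
def r_squared(haplotypes):
    """haplotypes: iterable of (allele_at_snp1, allele_at_snp2) tuples, alleles coded 0/1."""
    n = len(haplotypes)
    p1 = sum(a for a, _ in haplotypes) / n            # frequency of allele 1 at SNP1
    p2 = sum(b for _, b in haplotypes) / n            # frequency of allele 1 at SNP2
    p12 = sum(1 for a, b in haplotypes if a == 1 and b == 1) / n
    d = p12 - p1 * p2                                 # coefficient of linkage disequilibrium
    denom = p1 * (1 - p1) * p2 * (1 - p2)
    return d * d / denom if denom > 0 else float("nan")

haps = [(1, 1)] * 40 + [(0, 0)] * 45 + [(1, 0)] * 8 + [(0, 1)] * 7
print(f"r^2 = {r_squared(haps):.2f}")
```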
1A, Supplementary Materials Table S4). The most upregulated protein-coding gene with the lowest p-value was CAPNS2 (log2FC = 5.65, P = 9 × 10^−24; Table 2; Fig. 1A). This gene is predicted to be involved in proteolysis [provided by Alliance of Genome Resources, Apr 2022] and, based on gene content similarity, is part of the gene cluster 'Extracellular matrix organization' (PathCard). Western blotting with protein extract of CDKN2B-AS1 knocked-down gingival fibroblasts showed a significant upregulation of CAPNS2 protein expression (fold change = 1.5, P = 0.0063) (Fig. 1C&D). We also noticed that many lncRNAs were differentially expressed following CDKN2B-AS1 knockdown and separately list the top 10 differentially expressed ncRNAs in Table 3.

Electrophoretic mobility shift assay (EMSA) Double-stranded oligonucleotides corresponding to both alleles of rs10757278, flanked by 21 bp, in both cold and 3′-biotinylated form, were obtained by annealing with their respective complementary primers. For supershift EMSA, 20 fmol of biotin-labeled, double-stranded oligonucleotides were incubated for 20 min with nuclear extract (5 µg) in 1x binding buffer and 2 µL of a specific monoclonal antibody (STAT1, 10 µg/50 µL; Santa Cruz Biotechnology, California, USA) at room temperature. For the competition assay, 4 pmol of unlabeled double-stranded oligonucleotides were added to the binding reaction. The DNA-protein complexes were electrophoresed in a 5% native polyacrylamide gel in 0.5x TBE buffer at 100 V for 1 h. After electric transfer of the products to a nylon membrane and cross-linking, the biotinylated probes were visualized by chemiluminescence detection (Chemostar Touch, INTAS). Band intensities were quantified as the absolute value area of the shifted antibody bands using the software ImageJ (Rueden et al. 2017).

Luciferase reporter gene assays The putative regulatory DNA sequences (total length 539 bp), spanning 269 bp up- and downstream of the individual alleles of SNP rs10757278, were cloned into the firefly luciferase vector pGL4.24 (Promega, Madison, USA). Further details are described in the Supplementary Materials. iHGF cells were seeded at a density of 330,000 cells per well in 6-well plates before transfection with Lipofectamine 2000. HeLa cells were seeded at a density of 80,000 cells per well in 6-well plates and cultured until reaching 50-60% confluence. Transfection of HeLa cells was performed using the jetPEI transfection reagent (Polyplus transfection, France) following the manufacturer's instructions. Cells were co-transfected in triplicate with 2.7 µg of firefly luciferase reporter plasmid containing the putative regulatory sequence together with 0.3 µg of renilla luciferase reporter vector (phRL-SV40, Promega) in 6-well plates for 24 h. In parallel, cells were transfected with the empty pGL4.24 plasmid and 0.3 µg phRL-SV40 as control. Firefly and renilla luciferase activities were quantified using the Dual-Luciferase Stop & Glo Reporter Assay System (Promega) with the Orion II Microplate Luminometer (Berthold Technologies). Relative fold changes (FC) in activities were normalized according to the manufacturer's instructions (Promega), and differences in activity were calculated with a t-test using the software GraphPad Prism 9.
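Returning to the RNA-Seq analysis above, a minimal sketch of how the quoted thresholds (log2FC > 2 or < −1 at adjusted P < 0.05) could be applied to a DESeq2-style results table is given below. The file name and column labels are assumptions about the export format, not the actual files of this study.

```python
import pandas as pd

# Hypothetical export of the DESeq2 results table; the file name and the
# column labels ("symbol", "log2FoldChange", "padj") are assumptions.
res = pd.read_csv("cdkn2b_as1_knockdown_deseq2_results.csv")

sig = res[res["padj"] < 0.05]                    # FDR threshold used in the text
upregulated = sig[sig["log2FoldChange"] > 2]     # 1,167 genes reported in the study
downregulated = sig[sig["log2FoldChange"] < -1]  # 2,829 genes reported in the study
print(f"up: {len(upregulated)}  down: {len(downregulated)}")

# Rank the upregulated genes by adjusted p-value; in the study the top
# protein-coding hit was CAPNS2.
print(upregulated.sort_values("padj").head(10)[["symbol", "log2FoldChange", "padj"]])
```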
CDKN2B-AS1 knockdown increased COL4A1 and COL6A1 protein levels in gingival fibroblasts We validated the positive effect of CDKN2B-AS1 knockdown on the expression of the collagen genes COL4A1 and COL6A1 by Western blotting.COL4A1 showed the strongest upregulation on the RNA level in our RNA-Seq data (FC = 12.13, P adj = 4.9 × 10 − 25 ) and COL6A1 (FC = 3.73, P adj = 1.1 × 10 − 12 ) was found in a previous ChIRP-Seq experiments to be a direct regulatory target of CDKN2B-AS1.Western blotting with protein extract of CDKN2B-AS1 knocked-down gingival fibroblasts showed a significant upregulation in the protein expression of the collagen genes COL4A1 (fold change = 1.74,P = 0.0004) and COL6A1 (fold change = 1.57,P = 0.0155) (Fig. 1K).These results proved COL4A1 and COL6A1 repression by CDKN2B-AS1. rs10757278-G allele reduced STAT1 binding in gingival fibroblasts We next searched for putative causal variants within the CAD/periodontitis/MI risk haplotype block that were located STAT1 binding allele rs10757278-A decreased luciferase activity in gingival fibroblasts 41.2%, when compared with the common G-allele (P = 0.0056; Fig. 3A).We validated this finding in HeLa cells, because STAT1 is also expressed in this cell type, which can be efficiently transfected in vitro.Luciferase reporter gene transfection into HeLa cells confirmed decreased luciferase activity in the background of the rs10757278-A allele compared with the alternative G-allele (P = 0.0063; Fig. 3B).These results corresponded with GTEx data that also showed reduced CDKN2B-AS1 expression in homozygous carriers of the A allele compared to the G-allele (P = 3.9 × 10 − 6 ; Fig. 3C).Taken together, these results demonstrated that the reference rs10757278-A allele is part of a biological functional STAT1 binding site that acts as a transcriptional repressor. Discussion In the current work, we provide evidence that CDKN2B-AS1 represses expression of collagen genes in gingival fibroblasts and that CDKN2B-AS1 is under negative control of the inflammatory transcription factor STAT1. A previous study, which used the identical set of Gap-meRs for CDKN2B-AS1 knockdown in the kidney cell line HEK293 and subsequently combined chromatin immune RNA precipitation followed by sequencing (ChIRP-Seq) with genomewide expression profiling, found that CDKN2B-AS1 directly contacted and regulated collagen gene expression (Alfeghaly et al. 
2021).This study showed upregulation of the genes COL6A1 and COL12A1 and concluded that these collagen genes were direct targets of CDKN2B-AS1.However, this study did not find any enriched pathways in the list of direct trans-regulated genes.The immortalized kidney cell line HEK293 has a different expression pattern and differentiation state compared with primary gingival fibroblasts.Accordingly, it may not fully represent the biological functions, which CDKN2B-AS1 has in gingival fibroblasts, where this gene is naturally expressed at comparatively high levels.Therefore, to validate the findings of this previous study, we used the identical set of gapmers to reduce CDKN2B-AS2 transcript levels in gingival fibroblasts.Here, we gave evidence that collagen synthesis pathways were the most enriched gene sets in response to suppression of CDKN2B-AS1 transcript levels.We also proved on the protein level that in gingival fibroblasts CDKN2B-AS1 negatively controls the collagen genes COL6A1, but also COL4A1, which was the strongest upregulated collagen gene in our study.Moreover, as a novel finding, we detected that CAPNS2, which encodes the Calpain Small Subunit 2, was the most upregulated gene after CDKN2B-AS1 knockdown.Calpains are calcium-activated within chromatin stretches marked with biochemical modifications characteristic of regulatory DNA elements.The CAD GWAS lead SNP rs1333049 is in strong LD (r 2 > 0.8) with 55 common SNPs (minor allele frequency ≤ 0.05) in North-West European populations (1000 genomes population codes CEU and GBR).Of these, 24 SNPs located within chromatin elements with biochemical marks assigning them as putative regulatory elements (Fig. 2A, Supplementary Materials Table S3).We computationally analyzed whether the alternative alleles of these 24 SNPs changed predicted transcription factor binding sites (TFBSs).We found that Transfac and Jasper TF matrix files predicted TFBSs at two SNPs (rs10757278 and rs7859727) with the alternative alleles decreasing TF binding affinities (Table 1).rs10757278 locates within a STAT1 binding site with the common A allele being part of the STAT1 binding motif (P = 5.8 × 10 − 6 ) and a STAT1 matrix similarity of 94.8% (Fig. 2B).In transfac_2010.1 vertebrates matrix files, the alternative rs10757278-G allele 3,057-fold reduced binding affinity (P = 0.0177).rs10757278-G was described before as a putative causal variant for CAD affecting a STAT1 binding site (Harismendy et al. 2011).STAT1 is strongly expressed in gingival fibroblast (qRT-PCR threshold cycle [C t ] value for STAT1 = 20.45,C t (GAPDH) = 14.61), indicating biological activity of STAT1 in this cell type (Supplementary Materials Fig. S4). rs7859727-T locates within a predicted GATA binding site (P = 0.0006).In transfac_2010.1 vertebrates matrix files, the alternative rs10757278-C allele 836-fold reduced binding affinity (P = 0.549).However, RNA-Seq data of healthy gingival biopsies (Richter et al. 2019) showed GATA expression in gingival tissues below detection limit, indicating that this tissue does not express GATA. Allele specific STAT1 binding at rs10757278 was described by Chromatin Immunoprecipitation (ChIP) in lymphoblastoid cells (LCL) before (Harismendy et al. 
2011).We validated rs10757278 allele specific STAT1 binding with protein extract isolated from gingival fibroblasts and performed a STAT1 antibody EMSA with DNA probes that contained either rs10757278 ref-A allele or alt-G allele.Using protein extract of gingival fibroblasts, STAT1 binding at DNA probes with rs10757278 alt-G allele was 14.4% reduced compared to the ref-A allele (Fig. 2C&D).The observation of increased STAT1 binding at the G allele corresponded with the previous observation in LCL cells. STAT1 binding at rs10757278 repressed gene activity We tested the regulatory effect direction of the DNA element at rs10757278 in gingival fibroblasts using luciferase reporter gene assays.The DNA sequence containing the Our data also confirmed the previously described role of rs10757278, a risk SNP for CAD (Tcheandjieu et al. 2022) and MI (Helgadottir et al. 2007a), as being a putative causal variant of this risk haplotype block, which impairs binding of the IFNG responsive signal transducer STAT1 (Harismendy et al. 2011).We confirmed STAT1 binding at this SNP sequence and showed that the rs10757278-G allele reduced STAT1 binding and increased reporter gene expression in gingival fibroblasts.GTEx data also reported increased CDKN2B-AS1 expression in homozygous carriers of the G-allele compared to the A-allele, confirming our result. Validation of the results of previous work through an independent approach in a cell type in which CDKN2B-AS1 is naturally and comparatively highly expressed represents a significant value of our work by emphasizing these results and placing them in a new functional context.This context lies in the regulation of collagen synthesis in a barrier tissue, possibly in response to an inflammatory phase. Periodontitis is characterized by recurrent and prolonged inflammation and gingival bleeding.Periodontal healing after active inflammation requires reconstruction of the gingival barrier tissues.These wound healing processes include tissue formation and tissue remodeling, which follow but partially overlap with inflammation (Yen et al. 2018).To fully restore the tissue barrier, these two processes of healing from gingival bleeding involves collagen deposition and collagen remodeling, respectively. The CDKN2B-AS1 rs10757278-G allele is also associated with increased risk for MI (Helgadottir et al. 2007a).Of note, COL4A1, which is regulated by CDKN2B-AS1 shown in the current study for gingival fibroblasts and also previously for HEK293 cells (Alfeghaly et al. 2021), is also a risk gene for MI (Nikpay et al. 2015).Collagen is a critical component of atherosclerotic lesions and constitutes up to 60% of total plaque protein (Rekhter et al. 1996;Smith 1965).High collagen contributes to plaque structural integrity and mechanical "strength".Therefore, a deficit of collagen reinforcement leads to plaque weakness and vulnerability (Burleigh et al. 1992; Lee and Libby 1997), making atherosclerotic plaque prone to rupture (Rekhter 1999), increasing the risk for MI.The in vitro results of the current study provide a functional link between the two MI risk genes CDKN2B-AS1 and COL4A1.Our study implies that the rs10757278-G allele leads to reduced CDKN2B-AS1 repression by reducing STAT1 binding.As a result, higher CDKN2B-AS1 would in turn lead to increased collagen repression.This could destabilize atherosclerotic plaque and weaken the gingival tissue barrier, independently increasing the risk for MI and periodontitis. 
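The allele comparison at predicted transcription factor binding sites described earlier (Transfac/Jaspar matrix predictions at rs10757278 and rs7859727) can be illustrated with a small position weight matrix (PWM) scoring sketch. The motif counts, flanking sequences, and pseudocount handling below are toy assumptions for illustration; they are not the Transfac or Jaspar STAT1 matrix nor the genomic sequence used in the study.

```python
import numpy as np

BASES = "ACGT"

def pwm_log_odds(counts, background=0.25, pseudo=0.5):
    """Convert a base-count matrix (4 x motif_length) into log-odds scores."""
    counts = np.asarray(counts, dtype=float) + pseudo
    probs = counts / counts.sum(axis=0)
    return np.log2(probs / background)

def best_score(seq, logodds):
    """Best log-odds score of the motif over all windows of seq."""
    width = logodds.shape[1]
    scores = []
    for start in range(len(seq) - width + 1):
        window = seq[start:start + width]
        scores.append(sum(logodds[BASES.index(b), i] for i, b in enumerate(window)))
    return max(scores)

# Toy 6-bp motif counts (rows A, C, G, T); NOT the real STAT1 matrix.
toy_counts = [[8, 1, 0, 10, 2, 1],
              [1, 1, 0,  0, 1, 8],
              [1, 8, 10, 0, 1, 1],
              [0, 0, 0,  0, 6, 0]]
logodds = pwm_log_odds(toy_counts)

# Hypothetical flanks around the SNP; only the central base differs by allele.
flank5, flank3 = "TTCTGGAACT", "CTGGAATTCC"
for allele in "AG":
    seq = flank5 + allele + flank3
    print(allele, round(best_score(seq, logodds), 2))
```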
A limitation of the current study is that we have not shown that CDKN2B-AS1 directly binds to the collagen genes in cysteine proteases that act as part of numerous intracellular signaling pathways.Of particular interest, calpain activity is required for differentiation/activation of fibroblasts, which lay down extracellular collagen matrix (ECM) proteins (e.g.collagen).In response to tissue injury, calpain is activated and promotes, in addition to the expression and release of proinflammatory cytokines (Ji et al. 2016), the activation and differentiation of fibroblasts, thereby promoting the production of collagens (Scaffidi et al. 2002).In contrast it was shown that inhibition of calpains interrupt the early steps of fibroblast activation and differentiation, thereby attenuating the production of collagen (Kim et al. 2019;Letavernier et al. 2012).Therefore, collagen production and accumulation as seen in repeated injury, chronic inflammation and wound healing requires precise regulation to avoid fibrosis and to maintain barrier tissue function.Our data indicate CDKN2B-AS1 regulates CAPNS2 activity. Additionally, our gene pathway enrichment analysis also revealed, in response to CDKN2B-AS1 repression, a significant downregulation of the pathway 'TNFA Signaling via NFKB' including significant repression of the genes IL1A and − 1B.Consistent with this observation, it was previously shown that knockdown of CDKN2B-AS1 transcripts in endothelial cells inhibited TNFA induced IL6 and IL8 expression (Zhou et al. 2016).It has long been known that endogenous TNFA down-regulates collagen synthesis during normal wound healing (Regan et al. 1993) and that TNFA inhibits collagen-alpha gene expression in cultured fibroblasts (Buck et al. 1996).Considering the association of CDKN2B-AS1 with severe, progressive periodontitis, our data imply that CDKN2B-AS1 is a molecular regulator that aligns TNFA signaling and collagen synthesis in gingival fibroblasts. Fig. 2 rs10757278 is a functional SNP within the chr9p21.3risk haploblock and localizes to a STAT1 binding site.(A) 55 common SNPs (MAF ≤ 0.05) in CEU and GBR populations in strong LD with the GWAS lead SNP rs1333049 and 24 SNPs located within chromatin elements that correlate with regulatory functions of gene expression, which is indicated by chromatin state segmentation for 3 cell types (data from ENCODE; orange = predicted strong enhancer, yellow = weak enhancer, blue = insulator).Some proxy SNPs locate in H3K4me1 and H3K27ac methylation marks, which are often associated with regulation of gene transcription and within TFBS that were determined from ENCODE ChIP-Seq data.The position of rs10757278 is marked with a dashed line.(B) The DNA sequence at rs10757278-A allele shares a matrix similarity of 95% with the STAT1 transcription factor (TF) binding motif.(C) EMSA was performed with rs10757278 allele-specific oligonucleotide probes and nuclear protein extract from gingival fibroblast cells.Binding of STAT1 antibody to allele-specific probes is shown in lane 2 and 7.The supershift caused by STAT1 antibody binding to the DNA probe-protein complex is seen in lane 3 and 6.Unlabeled DNA was added in lanes 1 and 8 to verify that the band shift was antibody-specific.(D) Absolute value area of the antibody-specific bands.In the background of rs10757278-G allele, STAT1 binding to the allele-specific oligonucleotide probe was reduced 14.4% compared to the A-allele 1 3 Fig. 1 Fig. 
1 Differentially expressed genes and enriched gene sets in gingival fibroblasts after CDKN2B-AS1 knockdown.(A) Volcano plot of LNA GapmeRs transfected gingival fibroblasts showing differential expression of protein coding genes and numerous lncRNAs and pseudogenes.The names of the most significant differentially expressed protein coding genes are shown.The names of the most significant differentially expressed lncRNAs and pseudogenes are not shown to highlight the observed prominent role of CDKN2B-AS1 interaction with non-protein coding genes.(B) Transfection of primary gingival fibroblasts with LNA GapmeRs induced significant reduction of CDKN2B-AS1 transcript levels (qRT-PCR).(C-D) Western blot analysis validated that reduced CDKN2B-AS1 transcript levels correlated with significantly reduced CAPNS2 protein levels.Western Blot band intensities are normalized to ACTB (*p < 0.05; **p < 0.01).(E-G) Gene set enrichment analysis of GapmeR transfected gingival fibroblasts.Shown are evidence plots (receiver operator characteristic curves) for the significant gene sets with an area under the curve (AUC) ≥ 0.6.(C) From Reactome database, REACTOME_COLLAGEN_BIOSYN-THESIS_AND_ MODIFYING_ENZYMES _ M26999, enriched 62 genes.(D) From Reactome database, REACTOME_COLLAGEN_ CHAIN_TRIMERIZATION_ M27812, enriched 40 genes; (E) From Hallmark database, HALLMARK_TNFA_SIGNALING_VIA_ NFKB_ M5890, enriched 191 genes; (F&G) From Tmod database, EXTRACELLULAR MATRIX (I)_ LI.M2.0 enriched 30 genes and COLLAGEN, TGFB FAMILY ET AL_ LI.M77 enriched 31 genes, respectively.The gray rug plot underneath each curve corresponds to genes sorted by P value, with the genes belonging to the corresponding gene sets highlighted in red (upregulated genes) or blue (downregulated genes).Bright red or bright blue indicates that the genes are significantly regulated.(H) Western blotting validation of LNA GapmeRs transfected gingival fibroblasts showing COL4A1 and COL6A1 upregulation after CDKN2B-AS1 knockdown (COL4A1 fold change = 1.74,P = 0.0004; COL6A1 fold change = 1.57,P = 0.0155 respectively) Table 1 TFs predicted to bind at the SNP sequences with p < 0.01
Pulsation Frequencies and Modes of Giant Exoplanets We calculate the eigenfrequencies and eigenfunctions of the acoustic oscillations of giant exoplanets and explore the dependence of the characteristic frequency and the eigenfrequencies on several parameters: the planet mass, the planet radius, the core mass, and the heavy element mass fraction in the envelope. We provide the eigenvalues for degree $l$ up to 8 and radial order n up to 12. For the selected values of l and n, we find that the pulsation eigenfrequencies depend strongly on the planet mass and radius, especially at high frequency. We quantify this dependence through the calculation of the characteristic frequency which gives us an estimate of the scale of the eigenvalue spectrum at high frequency. For the mass range 0.5<M_P<15 M_J, and fixing the planet radius to the Jovian value, we find that the characteristic frequency is ~164.0 * (M_P/M_J)^(0.48) microHz, where M_P is the planet mass and M_J is Jupiter's mass. For the radius range from 0.9 to 2.0 R_J, and fixing the planet's mass to the Jovian value, we find that the characteristic frequency is ~164.0 * (R_P/R_J)^(-2.09) microHz, where R_P is the planet radius and R_J is Jupiter's radius. We explore the influence of the presence of a dense core on the pulsation frequencies and on the characteristic frequency of giant exoplanets. We find that the presence of heavy elements in the envelope affects the eigenvalue distribution in ways similar to the presence of a dense core. Additionally, we apply our formalism to Jupiter and Saturn and find results consistent with both the observationnal data of Gaulme et al. (2011) and previous theoretical work. INTRODUCTION Pulsation frequencies and modes are potentially useful tools with which to study the interior structure of the giant planets. Three types of modes are distinguished: g-modes are standing internal gravity waves, p-modes are standing acoustic waves, and f-modes are of intermediate frequency and can be regarded as the fundamental mode of either the p-or the g-modes. The corresponding frequencies are characterized by their radial order n and degree l. The surface movements associated with such pulsations are hard to detect and, as of July 2012, only Jupiter's global velocity oscillations have been observed. The work of Schmider et al. (1991), Mosser et al. (1991), Mosser et al. (1993), and Mosser et al. (2000) resulted in a putative measurement of the mean frequency spacing of 142 ± 3 µHz (Mosser et al. 2000). More recently, Gaulme et al. (2011) claims to have detected Jupiters global modes with a mean noise level five times lower than previously achieved, using observations acquired in 2005 by the SYMPA Fourier spectro-imager of the Teide Laboratory. Their upper troposphere radial velocities are determined by measurements of the Doppler shifts of solar Mg lines (517 nm) reflected by Jupiter's clouds. The resulting velocity maps were decomposed into spherical harmonics to create a set of time series whose power was computed with a discrete Fourier transform. It exhibited excess power between 800 and 2000 Hz and a secondary excess between 2400 and 3400 Hz, with a frequency of maximum amplitude of 1213 ± 50 µHz, a mean spacing of 155.3 ± 2.2 µHz and a mode maximum amplitude of 49 +8 −10 cm s −1 . These measurements bastien.le-bihan@polytechnique.edu, burrows@astro.princeton.edu 1 Ecole Polytechnique, Palaiseau, France. 
agree with theoretical expectations in terms of the frequency range, the amplitude, and the mean large spacing (Bercovici & Schubert 1987;Provost et al. 1993) and correspond to the signature of p-modes. Several theoretical works intended to bring out possible forcing mechanisms capable of exciting pulsation modes in nearby gaseous planets to magnitudes accessible to observation. Current studies tend to struggle to find significant theoretical amplitudes, but the crucial point is that giant exoplanets may exhibit more favorable conditions for these mechanisms, which may bring about higher oscillation amplitudes. Indeed, Bercovici & Schubert (1987) and Marley (1990Marley ( , 1991 evaluated the possibility of a coupling of acoustic oscillations to turbulent convection on Jupiter and Saturn, as is the case for the Sun (Goldreich, Murray & Kumar 1994). With this kind of coupling, the physical amplitudes of the modes may scale like LM α , where L is the interior luminosity, M is the Mach number, and α is a power that depends on whether the sound generated is via dipole (α = 3) or quadrupole (α = 5) emission (Bercovici & Schubert 1987). Giant exoplanets will be almost fully convective, and their internal luminosities are likely to be 10 to 100 times larger than that of Jupiter (Burrows et al. 2001). Since the Mach number is also likely to be higher, the amplitudes of acoustic oscillations of giant exoplanets may exceed those of Jupiter by a large factor. Moreover, close encounters of planets in an evolving planetary system may promote planet-planet or moon-planet interactions which can excite tides dynamically through the transfer of orbital energy, as pointed out in the stellar case by Lee & Ostriker (1986) and in the moon case by Marley (1990). If the encounters are close enough, then the resultant amplitudes may be significant. Given current technological limitations to the observa-tion of giant planet's oscillations, it is reasonable to think that the global oscillations of giant exoplanets will not be detected in the foreseeable future. However, specific environnments of giant planets outside the solar system may foster excitation mechanisms more vigorous than in the Jovian or Saturnian cases and, thus, lead to significant pulsation amplitudes. Hence, a theoretical exploration of the systematics of the pulsation frequencies for the broad spectrum of recently discovered giant exoplanets might stimulate observers to design methods to detect giant planet oscillations, since such oscillations are so diagnostic of structure. In this spirit, we calculate the eigenfrequencies and eigenfunctions of the pulsationnal modes of giant exoplanets. We quantify the dependence of the modal eigenfrequencies on the planet mass and radius. In addition, we focus on the influence of a dense core on these quantities, since its presence has already been suggested in specific extrasolar giant planets (Burrows et al. 2007;Guillot et al. 2006). Furthermore, we calculate corresponding models for Jupiter and Saturn themselves, and compare them with previous work. Vorontsov et al. (1976), Vorontsov & Zarkhov (1981 a,b) and Vorontsov (1981) added rotation, differential rotation, and ellipticity to their initial spherically-symmetric, nonrotating Jovian models. The influence of the troposphere on the high-frequency oscillations was first adressed by Vorontsov et al. (1989), then in detail by Mosser et al. (1994). Provost et al. 
(1993) developed an asymptotic method to determine the eigenfrequencies which included the discontinuity of a Jovian core. They introduced the mean spacing or characteristic frequency ν 0 , defined by: , where c 0 is the speed of sound, and emphasized the sensitivity of the Jovian oscillation spectrum to the presence of a dense core. Since then, the Jovian characteristic frequency has been estimated to be between 152 and 160 µHz (Provost et al. 1993;Gudkova et al. 1995;Gudkova & Zarkhov 1999). These estimates are consistent with the recent observations of Gaulme et al. (2011). Saturn's oscillations have also been studied theoretically. Marley (1990) suggested that the f-modes of Saturn are the most likely to be detected through their potential influence on that planet's rings. Using the techniques applied to Jupiter, Gudkova et al. (1995) calculated the eigenfrequencies of the lowest-order p-modes of Saturn, along with their characteristic frequencies. The latter were found to be between 106 and 109 µHz. Throughout our paper, we do not take into account the effects of rotation or oblateness (Vorontsov & Zarkhov 1981 a,b;Lee 1993). Since the adiabatic approximation is appropriate for Jovian planets (Marley 1990), adiabaticity is here assumed for Jupiter, Saturn, and the entire set of exoplanets. Finally, since we focus on the giant exoplanet regime, we select the appropriate range of measured giant planet radii (Udry & Santos 2007): 0.8 R J R P 2.1 R J . Gaulme et al. (2011) andSchmider et al. (1991) suggest that the lowest-degree p-modes are the most likely to be detected. Gaulme et al. (2011) and Marley (1990) take the degree l = 8 to be an upper limit and the Jovian observations of Gaulme et al. (2011) are within the frequency regions [0.8,2.1] mHz and [2.4,3.4] mHz. In the specific case of Jupiter, and for l ∈ [0, 8], these values loosely correspond to the ranges of radial order n ∈ [0, 12] and [15,20], respectively. These observationnal windows are consistent with the theoretical value of the atmospheric cutoff frequency of Jovian modes estimade by Mosser (1995) and which is about 3 mHz. Theoretically, the asymptotic trends are manifest for n ≥ 4 or 5 (Provost et al. 1993;Marley 1991). Thus, we focus on the p-modes and f-modes of low degree (l ≤ 8) and relatively low radial order (n ≤ 12). In §2, we summarize the theory of adiabatic nonradial oscillations of nonrotating spherical planetary models. We present our numerical technique, closely based on the work of Unno et al. (1989) and Christensen-Dalsgaard (1997). To test the validity and precision of our code, we calculate the eigenfrequencies of f-modes, p-modes and g-modes of well-studied polytrope models and compare our results to those of Christensen- Dalsgaard & Mullan (1994). In §3, we describe the giant planet models used in the article. We present our results in the case of Jupiter and Saturn, and compare them to both observationnal data (Gaulme et al. 2011) and theoretical work (Provost et al. 1993;Mosser et al. 1994;Gudkova & Zarkhov 1999). We briefly focus on the dependence of the Jovian modal oscillations on the core mass of Jupiter and present the derivatives of the low-degree acoustic modes eigenfrequencies with respect to the core mass. In §4, we focus on the giant exoplanets. We present our results in terms of the characteristic frequency ν 0 , the eigenfrequencies of low-degree f-modes, and the eigenfrequencies of low-degree p-modes across the giant exoplanet continuum. 
In separate subsections, we investigate their dependence on the planet mass, planet radius, and core mass. We also focus on the influence of a high fraction of heavy elements in the envelope, by using a high helium mass fraction, Y , as an approximate substitute (Spiegel et al. 2011). Finally, we briefly discuss the temporal evolution of the charateristic frequency ν 0 for simple planetary models considered in isolation. Nonradial Oscillation Eigenvalue Problem We considered non-rotating, spherically-symmetrical planetary models. For adiabatic, nonradial oscillations of such objects, it follows from Unno et al. (1989, chap. 13) that the radial part of the displacement ξ r , the Eulerian perturbations of the pressure p ′ , and the Eulerian perturbation of the gravitational potential Φ ′ take the form: where Y m l are the spherical harmonics of azimuthal order m and degree l, and f is either ξ r , p ′ , or Φ ′ . A given oscillation mode is, thus, described by its azimuthal order m, degree l, radial order n. These variables are govern by a set of differential equations and four boundary conditions, two at the surface, and two at the center (Unno et al. 1989). The corresponding set of equations is given in the first section of the Appendix. Since the azimuthal order m does not appear in the governing equations, the eigenfrequencies are (2l+1)-fold degenerate, and are fully described by their degree l and radial order n. This problem has to be numerically implemented to calculate the corresponding eigenfrequency ν n,l for a given mode. We detail the technique used in this paper in the Appendix, but summarize our overall methodology in the next subsections. Numerical Implementation Several numerical techniques have been previously introduced (Vorontsov et al. 1976;Unno et al. 1989;Christensen-Dalsgaard 1997). They are divided into shooting techniques and relaxation methods. In the shooting technique, solutions satisfying the boundary conditions are integrated separately from the inner and outer boundaries, and the eigenvalue is found by matching these solutions at an arbitrary interior fitting point. The second technique is to solve the equations together with the normalization condition, and all but one of the boundary conditions, using a relaxation technique; the eigenvalue is then found by requiring that the remaining boundary condition be satisfied. The shooting methods are generally considerably faster than the relaxation techniques, but their precision decreases as the degree l increases (Christensen-Dalsgaard 1997. However, since we consider only low-degree modes, a shooting method is quite suitable for our problem. Dimensionless variables are introduced (see the second section of the Appendix). In particular, the dimensionless frequency ω is defined by: where σ is the angular frequency of the modes, ν is the corresponding frequency, G is the gravitational constant, and R and M are the radius and the mass of the studied object, respectively. Solutions are obtained by integration using a fifth-order Runge-Kutta technique. To calculate the eigenfrequencies in a given frequency range, we use a determinant method developed in Christensen-Dalsgaard (1997) and fully described in the third section of the Appendix. Two linearly independent solutions are calculated from the center, and two from the surface. They are connected at an arbitrary inner boundary. The eigenvalues do not depend on its position. 2.3. 
Mode Order As we calculate the eigenfrequencies and eigenfunctions of the acoustic modes for a given degree l, we determine their order n using the following equation (Christensen-Dalsgaard 1997): n = n_0 + Σ_{x_z1} sign(y_2 dy_1/dx), where y_1 and y_2 are dimensionless variables defined by y_1 = ξ_r/r and y_2 = (1/(g r)) (p′/ρ + Φ′), where ρ is the density and g is the gravitational acceleration. In the definition of n, the sum is over the zeros x_z1 of y_1 (excluding the center), where x = r/R is the relative radius. The value of n_0 depends on the behavior of the solution close to the innermost boundary. If y_1 and y_2 have the same sign at the innermost mesh point, excluding the center, n_0 = 0; otherwise n_0 = 1. In particular, for a complete model that includes the center, as in our case, it follows from the boundary conditions at the center that n_0 = 1 for radial oscillations and n_0 = 0 for non-radial oscillations. With these conventions, the order of the f-mode is n = 0.

Results for a Polytrope Model In order to test the code, we compute the eigenfrequencies and eigenfunctions of polytropic models. To compare our results with the work of Christensen-Dalsgaard & Mullan (1994), we take the same radius and mass for our calculations: R_P = 6.9599×10^10 cm and M_P = 1.989×10^33 g. The eigenfrequencies of f-modes, g-modes, and p-modes are given in Tables 1, 2 and 3, in µHz. Our frequencies match those of Christensen-Dalsgaard & Mullan (1994) to a precision of 10^−5 or better, for all types of modes, for l ∈ [0, 3] and n ∈ [−20, 25]. This gives us confidence in our calculational method as we approach more complex models.

The Models The giant planet models we use for the calculations for Jupiter and Saturn consist of an adiabatic atmosphere, a hydrogen-helium envelope, and an olivine core. When a core is included, we explore 0 ≤ M_core ≤ 10 M_⊕ for Jupiter and 9 ≤ M_core ≤ 22 M_⊕ for Saturn. Both ranges are marginally consistent with the core accretion formation models for these planets, which suggest 10-20 M_⊕ (Saumon & Guillot 2004; Pollack 1996). The hydrogen/helium equation of state that we use for this study is described in Saumon, Chabrier, & Van Horn (1995). The transition between the atmosphere and the envelope has been smoothed to ensure the continuity of density, pressure, and sound speed. We build several models for both Jupiter and Saturn, using different core masses and helium fractions in the envelope. Table 4 presents various parameters of these models: the helium mass fraction inside the envelope, Y, the mass of the core, M_core (in Earth units), the central pressure p_c (in Mbars), the central density ρ_c (in cgs units), and the characteristic frequency ν_0 (in µHz), defined in Provost et al. (1993) using the following equation: ν_0 = [2 ∫_0^R dr/c_0]^(−1) (eq. 5), where c_0 = (dp_0/dρ_0)_ad^(1/2) is the sound speed in hydrostatic equilibrium. Figure 1 portrays the profiles of the density, ρ, the gravitational acceleration, g, and the sound speed, c_0, for models J4 and S2, defined in Table 4. Both density and sound speed are discontinuous at the core interface. In the case of Jupiter, for the models defined in Table 4, the calculated characteristic frequencies, ν_0, are consistent with the observational value measured by Gaulme et al. (2011): ν_0 = 155.3 ± 2.2 µHz (see also Figure 6).

Oscillation Modes Figure 2 depicts the eigenfrequencies of models S2 and J4 for l ∈ [0, 3] and n up to 25. These results are given in the form of échelle diagrams based on the results of the asymptotic theory for low-degree oscillations developed in, for example, Provost et al. (1993).
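For readers who want to evaluate the characteristic frequency from a tabulated interior profile, a minimal sketch is given below. It numerically integrates eq. 5, ν_0 = [2 ∫_0^R dr/c_0]^(−1), with the trapezoidal rule. The smooth, polytrope-like sound-speed law in the example is purely illustrative and is not one of the interior models of Table 4.

```python
import numpy as np

def characteristic_frequency(r, c0):
    """nu_0 = [2 * integral_0^R dr/c0]^-1 (eq. 5), with r in cm and c0 in cm/s.

    Returns nu_0 in microHz.
    """
    acoustic_radius = np.trapz(1.0 / c0, r)   # integral of dr/c0, in seconds
    return 1.0e6 / (2.0 * acoustic_radius)

if __name__ == "__main__":
    # Toy profile only: a crude sound-speed law for a Jupiter-sized object,
    # NOT one of the interior models used in the paper.
    R_jup = 7.15e9                             # ~Jupiter radius, cm
    r = np.linspace(1.0e6, R_jup, 2000)
    c0 = 4.0e6 * np.sqrt(1.0 - 0.95 * (r / R_jup) ** 2)   # cm/s, illustrative
    print("nu_0 ~ %.1f microHz" % characteristic_frequency(r, c0))
```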
This theory predicts that, for low degree l and large radial order n, the eigenfrequencies ν_n,l of p-modes are, to a first approximation, proportional to the characteristic frequency ν_0: ν_n,l ≈ (n + l/2 + ε) ν_0, where ε is a phase constant. An échelle diagram presents the ratio ν_n,l/ν_0 as a function of the difference ν_n,l/ν_0 − (n + E[l/2]), where E is the floor function. Thus, it allows us to see the deviation from the approximate value. Both panels of Figure 2 are in qualitative agreement with previous numerical results for Jupiter (Provost et al. 1993; Mosser et al. 1994; Jackiewicz et al. 2012, and their Fig. 3d, Fig. 5a, & Fig. 4, respectively) and for Saturn (Mosser et al. 1994, their Fig. 5b). The Jovian periods of the acoustic fundamental tone and overtones with radial order and degree up to 5 are given in Table 5 for the model J4. Though the differences between our models and those derived by Gudkova & Zarkhov (1999) prevent us from a precise comparison, it is clear that we find similar results to this previous work (see their Table 2).

Figure 3 portrays the radial component of the eigendisplacement ξ_r for low-degree, lowest-order modal oscillations of the model J4 of Jupiter, as a function of the relative radius x = r/R. The radial displacement is shown for l = 0, 1, 2, and 5. It is taken equal to 1 m at the surface, for every mode. The behavior of the modes near the center is determined by the boundary conditions of the specific numerical problem considered here (Unno et al. 1989), which lead to the following relations, for r ∼ 0: ξ_r ∝ r for l = 0 and ξ_r ∝ r^(l−1) for l ≥ 1. Thus, for l = 1, the radial displacement does not necessarily vanish near the center. In Figure 3, for the lowest degrees l, the presence of the dense core is directly visible at the boundary with the envelope (x = 0.13, for this model). For higher degree (here, l = 5), the influence of the core is less obvious in the radial eigenfunctions, because the amplitude of the radial displacement vanishes near the center, for every radial order n.

The influence of the size of the core on the frequency spectrum of Jupiter has already been studied (Provost et al. 1993; Gudkova & Zarkhov 1999). However, no determination of the derivative of the eigenfrequencies with respect to the core mass has yet been provided. Focusing on the f-modes and p-modes of Jupiter, we calculate their eigenfrequencies for various core masses, with the helium mass fraction fixed at 0.25 in the envelope. An échelle diagram of the eigenfrequencies of Jupiter for l = 2 and for a few core masses is given in Figure 4. The spectra are very well separated for radial order n ≥ 4, which indicates, as mentioned by Vorontsov et al. (1989) and Gudkova & Zarkhov (1999), that the high-frequency acoustic oscillations of low degree l can be very useful in determining the structure and size of the core. We highlight the sensitivity to the core mass of the eigenfrequencies of low-degree acoustic oscillations. Figure 5 shows the eigenfrequencies of such modes for Jupiter as a function of the mass of the core, M_c, for n ∈ [0, 10] and for l = 0, 1, 2 and 3. For every radial order n, the eigenfrequencies have been normalized by their coreless value: μ_n,l(M_c) = ν_n,l(M_c) / ν_n,l(M_c = 0). This normalization allows us to compare the deviation of the frequencies from their coreless values as the core mass increases, regardless of their absolute value, which depends on the radial order n. For l = 0, as n increases the frequency becomes less sensitive to the size of the core.
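The échelle construction described above is simple to reproduce once a set of eigenfrequencies is available; a sketch is given below. It reduces frequencies ν_n,l to the échelle coordinates (ν_n,l/ν_0 − (n + ⌊l/2⌋), ν_n,l/ν_0). The frequency values in the example are fabricated for illustration only and are not the J4 or S2 eigenvalues.

```python
import math

def echelle_points(freqs, nu0):
    """Echelle coordinates for a dict {(n, l): nu_{n,l} in microHz}.

    x = nu/nu0 - (n + floor(l/2)),  y = nu/nu0, as defined in the text.
    """
    points = []
    for (n, l), nu in sorted(freqs.items()):
        y = nu / nu0
        x = y - (n + l // 2)
        points.append(((n, l), x, y))
    return points

if __name__ == "__main__":
    nu0 = 155.0  # microHz, roughly the Jovian mean spacing
    # Illustrative frequencies built from the asymptotic form plus a wiggle.
    fake = {(n, l): nu0 * (n + l / 2 + 0.6 + 0.02 * math.sin(n))
            for n in range(4, 10) for l in (0, 1, 2)}
    for (n, l), x, y in echelle_points(fake, nu0):
        print(f"n={n} l={l}  x={x:+.3f}  y={y:.2f}")
```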
The normalized frequency µ 1,0 is by far the most affected by the variation of the core mass; its value decreases by more than 14% between the coreless version of Jupiter and the model with a 10-Earth mass core. For l ≥ 1, the trend is very different; the derivative of the normalized frequency with respect to the core mass decreases from positive to negative values as n increases. As a result, this derivative approximately vanishes for specific frequencies, for example µ 3,1 (Fig.5, top right), µ 2,2 (Fig.5, bottom left), and µ 2,3 (Fig.5, bottom right). All these frequencies vary by less than 0.5% over a core mass range from 0 to 10 M ⊕ . For higher radial order, the influence of the core mass is more important; for example, µ 10,2 varies by more than 3.0% over the core mass range from 0 to 10 M ⊕ . This increased sensitivity is consistent with the previous discussion concerning theéchelle diagram. The f-mode is also sensitive to the core mass, with a variation up to 3.8% with core mass from 0 to 10 M ⊕ , for l = 2 and 3. These variations may be surprising compared to the stellar case, where the f-modes are constrained to shallow depths, far away from the influence of the core. However, this phenomenon has already been pointed out by Marley (1991) in the Saturnian case. On the other hand, planetary f-modes are also likely to decay with depth and, as the degree l increases, the oscillation moves to the surface of the planet and its frequency is determined by the outer layers (Vorontsov et al. 1976;Gudkova & Zarkhov 1999). Calculations for higher degrees l may thus exhibit a reduction of the influence of the core mass on the frequencies of the f-modes. We conclude, however, that for the low-degree nonradial oscillations the sensitivity of the pulsation frequencies to the core mass is important both for the f-modes and for the high-radial order p-modes. However, for some specific intermediate values of radial order n, the sensitivity nearly vanishes over the whole core mass range. If detected, these particular modes would provide little evidence of the presence of a core. General description For a systematic look at the exoplanets currently known, we use the catalog developed by Schneider et al. (2011) and available at the URL http://www.exoplanet.eu. As of the 7 th of July, 2012, 777 confirmed planets are listed in this catalog. We limit our study to the planets whose radius and mass have both been estimated. Furthermore, we focus on the giant exoplanet regime and, therefore, we select the planets whose radii are within the range 0.8 R J R P 2.1 R J . In terms of mass, most of the detected giant exoplanets have masses less than 5 M J , but the distribution has a long tail towards masses larger than 10 M J (Udry & Santos 2007). Numerically, 88% of the selected exoplanets have masses less than or equal to 5 M J , and 94% have masses less than or equal to 10 M J . In the 10 to 20 M J interval, it is difficult to fix a clear upper limit for giant exoplanets masses, because the planet population and the brown dwarf population overlap (Udry & Santos 2007;Leconte et al. 2009). Therefore, though we restrict the mass range studied, we are aware of the ambiguous status of the heaviest objects. This final set is composed of 174 exoplanets. We calculate the characteristic frequency ν 0 for each object of this group, using the techniques and models developed and described in sections 2 and 3. The helium mass fraction is fixed at 0.25 in the entire envelope and no core is added. 
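The sample selection just described (planets with both mass and radius estimates and radii in the giant-planet window) can be reproduced from a catalog download with a few lines of pandas, as sketched below. The CSV file name and the column names (mass and radius in Jovian units) are assumptions about the exoplanet.eu export format and may need adjusting.

```python
import pandas as pd

# Hypothetical export from http://www.exoplanet.eu with Jovian-unit columns.
cat = pd.read_csv("exoplanet_eu_catalog.csv")

# Keep only planets with both a measured mass and radius, as in the text.
cat = cat.dropna(subset=["mass", "radius"])

# Giant-planet radius window used in the study: 0.8 R_J <= R_P <= 2.1 R_J.
giants = cat[(cat["radius"] >= 0.8) & (cat["radius"] <= 2.1)]

print(len(giants), "planets selected")
print("fraction with M <= 5 M_J :", (giants["mass"] <= 5).mean())
print("fraction with M <= 10 M_J:", (giants["mass"] <= 10).mean())
```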
The results are shown in Table 6, with the planets sorted from low to high mass. For the selected objects, the characteristic frequency range is 33 µHz ≤ ν_0 ≤ 815 µHz. Around 1 M_J, ν_0 is smaller than Jupiter's value for almost every object, since their radii are larger than 1 R_J. The spread of values is dramatic around every mass within the range [0.17 M_J, 30 M_J], which emphasizes the strong sensitivity of ν_0 to planet parameters. To investigate the joint dependence of ν_0 on the radius and the planet mass, we calculate it for a wide range of radii and masses in the observed giant planet regime. Figure 6 portrays the corresponding results for 0.5 ≤ M_P ≤ 10 M_J and 0.95 ≤ R_P ≤ 2.1 R_J. All the planet models are coreless, except for R_P = 1.0 R_J. For this radius value, we calculate the function ν_0(M_P) for several core masses within the Jovian range 0 ≤ M_core ≤ 10 M_⊕. We place the observed point for Jupiter, taken from Gaulme et al. (2011), at M_P = 1.0 M_J. As can be seen, our model is consistent with the observational data. For any fixed value of the planet radius, it is clear that ν_0 is an increasing function of the planet mass M_P. However, the sensitivity of ν_0 to the planet mass decreases as the planet radius increases. In order to quantify this, we fit the curves of Figure 6 with straight lines, in the high-mass regime (5 ≤ M_P ≤ 10 M_J), and calculate their derivatives. We find that, for a radius equal to 0.95 R_J, the corresponding derivative is 25 µHz M_J^−1. At the extreme opposite end of the giant planet radius spectrum, for a radius equal to 2.1 R_J, the corresponding derivative is 8.0 µHz M_J^−1. These conclusions are qualitatively consistent with the presumption that ν_0 would scale approximately with the square root of the mean density of the planet (Jackiewicz et al. 2012). Thus, the asymptotic scaling relation between Jupiter and giant exoplanets would be roughly ν_0 ≈ ν_0,J (M_P/M_J)^(1/2) (R_P/R_J)^(−3/2) (eq. 9), where ν_0,J refers to the mean frequency spacing of Jupiter. As we show in §4.2.1 and §4.2.2, this expression only approximately holds.

Below, we investigate the separate influence of the radius, the mass, and the core mass on giant exoplanet pulsation modes. We focus on one parameter at a time. We select three specific quantities to discuss this influence: the characteristic frequency ν_0, the eigenfrequencies of low-degree f-modes, and the eigenfrequencies of low-degree p-modes. To determine the influence of planetary parameters on the characteristic frequency and on the frequency spectrum of exoplanets, we calculate these for a wide range of each parameter (radius, mass, entropy, and core mass), all other things being equal.

Dependence on the planet mass We build several planetary models with the radius fixed at R_P = 1.0 R_J and with various masses. We use coreless models, since we are here exploring the dependence on mass. Table 7 presents various parameters of these models: the planet mass, M_P (in Jupiter units), the central pressure, p_c (in Mbars), the central density, ρ_c (in cgs units), the specific entropy, S (in k_B/baryon), and the characteristic frequency, ν_0 (in µHz). When the radius is fixed at 1.0 R_J, ν_0 is an increasing function of planet mass. We fit this function with a power law and obtain ν_0 ≈ 164.0 × (M_P/M_J)^0.48 µHz (eq. 10). Thus, we derive a power law consistent with the asymptotic scaling relation suggested by eq. 9. Figure 7 depicts the profiles of the pressure p_0 and the sound speed c_0 along the relative radius x = r/R, in hydrostatic equilibrium, for the first models of Table 7.
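Power-law fits such as eq. 10 can be reproduced with a linear least-squares fit in log-log space, as sketched below. The tabulated (M_P, ν_0) pairs in the example are placeholders generated from the quoted fit with a small scatter; they are not the Table 7 model output.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear least squares in log-log space; return (a, b)."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(intercept), slope

# Placeholder (M_P in M_J, nu_0 in microHz) pairs standing in for Table 7.
mass = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 10.0, 15.0])
scatter = 1.0 + 0.01 * np.random.default_rng(0).normal(size=mass.size)
nu0 = 164.0 * mass ** 0.48 * scatter

a, b = fit_power_law(mass, nu0)
print(f"nu_0 ~ {a:.1f} * (M_P/M_J)^{b:.2f} microHz")
```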
As can be seen, at every level of the relative radius x, when the radius is fixed, both the pressure and the sound speed are increasing functions of the planet mass. Thus, given its definition (Eq. 5), ν 0 increases as the planet mass increases, all other things being equal. According to the asymptotic theory, we know that, for a given low value of the degree l, and for large radial orders n, ν 0 is approximately the frequency gap between two modes of consecutive radial order: ν n+1,l − ν n,l ∼ ν 0 . Thus, at high frequency, the spacing between eigenfrequencies increases when the planet mass increases, for a given value of the planet radius. Numerically, when the radius is fixed at R P = 1.0 R J , for an object of 1.0 M J , we know that the high-frequency modes are separated by ∼155 µHz, since this corresponds to the Jupiter case. For a 5.0-M J planet, the frequency gap between highfrequency modes is ∼350 µHz, and, at the end of the giant planet regime, for a planet mass of 15.0 M J , the frequency gap exceeds 500 µHz. We now calculate the eigenfrequencies of the lowestorder p-modes for the objects defined in Table 7, and for l ∈ [0, 8]. Figure 8 presents the corresponding eigenvalues for n up to 12, as a function of the degree l, for objects with mass equal to 0.5, 1.0, 2.0, and 3.0 M J , and a radius fixed at 1.0 R J . The frequency spectra of the four planets appear more and more distinct from one another as we go up in frequency, and as we go up in degree l. Indeed, at low l, the f-modes of the four planets are close to one another, whereas the difference between the modes with the same n and l increases rapidly with frequency. Numerically, for l = 2, the f-modes of the four planets are all within the range 0.08 ≤ ν 0,2 ≤ 0.21 mHz. For l = 2 and n = 12, the difference between the eigenvalues for M P = 0.5 M J and M P = 3.0 M J is more than 2.3 mHz. This discrepancy at high radial order n is, of course, due to the differences of the characteristic frequency ν 0 , which is a measure of the frequency scale at low-degree l and high-order n. At high frequency, the value of ν 0 for each planet is clearly visible on Figure 8. This increase of the frequency range continues as the planet mass increases beyond 3.0 M J . To appreciate the difference numerically, we focus on l ∈ [0, 8] and n ∈ [0, 7]. Figure 9 presents several low-order eigenvalues, as a function of the planet mass, for various values of the degree l ∈ [0, 8]. For the calculated modes, it appears from the calculations that the minimum in frequency is always obtained for (n, l) = (0, 2) (middle left panel), and the maximum is obtained for the highest n and l considered: (n, l) = (7, 8) (bottom right panel). This statement is true for every value of the planet mass M P in the selected range. On the bottom right panel (l = 8), the functions ν 0,2 (M P ) have been added (black dashed line). Thus, the frequency range of the calculated modes is contained between the black dashed line (ν 0,2 ) and the solid gold line, defined by (ν 7,8 ). Numerically, The low-degree, low-order eigenfrequencies of a 1.0-M J planet are in the range [0.11,1.8] mHz, whereas the same eigenfrequencies of a 15-M J object are in the range [0.50, 6.4] mHz. Dependence on the planet radius We build several planetary models with the mass fixed at M P = 1.0 M J and with various radii. 
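For a quick, order-of-magnitude estimate of the high-frequency spacing expected for an arbitrary giant planet, one can apply the rough mean-density scaling of eq. 9 to the Jovian value, as in the short helper below. This is only the scaling argument of the text, not a substitute for the full interior-model calculation.

```python
def nu0_scaling_estimate(mass_mj, radius_rj, nu0_jup=155.3):
    """Rough mean-density scaling (eq. 9): nu_0 ~ nu_0,J * (M/M_J)^0.5 * (R/R_J)^-1.5."""
    return nu0_jup * mass_mj ** 0.5 * radius_rj ** -1.5

# A 5 M_J planet at 1.0 R_J gives ~347 microHz, close to the ~350 microHz
# high-frequency spacing quoted in the text.
print(round(nu0_scaling_estimate(5.0, 1.0), 1), "microHz")
```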
Table 8 presents various parameters of these models: the planet radius, R_P (in Jupiter units), the central pressure, p_c (in Mbars), the central density, ρ_c (in cgs units), the specific entropy, S (in k_B/baryon), and the characteristic frequency, ν_0 (in µHz). When the mass is fixed at 1.0 M_J, ν_0 is a decreasing function of the planet radius. We again fit this function with a power law and obtain ν_0 ≈ 164.0 × (R_P/R_J)^(−2.09) µHz (eq. 11). We find here a dependence on the radius slightly stronger than the one suggested by the asymptotic scaling relation of eq. 9, though one has to keep in mind the discrepancy between the complexity of the interior models and the roughness of the relation exhibited in eq. 9. If we compare Equations 10 and 11, we see that, for the selected ranges of values, the dependence of ν_0 on the radius is significantly more important than the dependence on the mass. As explained in the previous subsection, ν_0 is approximately the frequency gap between two modes of consecutive radial order, for a given low value of the degree l, and for large radial order n. Thus, this decreasing behavior results in a diminution of the frequency gap between high-frequency modes. Numerically, we can see that this gap is around 40 µHz for a 2.0-R_J planet (again, the mass is equal to 1.0 M_J), which is less than 26% of the Jovian value.

We calculate the eigenfrequencies of the lowest-order p-modes for the objects defined in Table 8, and for l ∈ [0, 8]. Figure 10 presents the corresponding eigenvalues for n up to 12, as a function of the degree l, for objects with a radius equal to 1.0, 1.2, 1.4 and 1.6 R_J, and a mass equal to 1.0 M_J. It appears that the remarks of the previous subsection, which deals with the dependence on the planet mass, also apply to Figure 10. Indeed, the frequency spectra of the four planet models appear more and more distinct from one another as we go up in frequency, and as we go up in degree l. The frequency range and scale of the low-degree, low-order eigenvalues decrease with the planet radius, when the mass is fixed, whereas the same parameters increase with the planet mass, when the radius is fixed. For instance, numerically, the low-degree, low-order eigenvalues of a 1.6-R_J planet are between 0.07 mHz and 0.91 mHz, whereas, in the case of a 1.0-R_J planet, the same modes have eigenfrequencies between 0.1 mHz and 2.6 mHz.

Dependence on the core mass Even in the cases of Jupiter and Saturn, the presence and mass of a dense core is still not proven. Gudkova & Zarkhov (1999) have shown that measurements of the pulsation modes of Jupiter could constrain the dimensions of the core. This is likely to be true for exoplanets, if and when their modes are measured. Many extrasolar giant planets appear smaller than the theory would allow (Burrows et al. 2007; Guillot et al. 2006). This anomaly can be explained by the presence of heavy elements in a dense core, which shrinks the radii of these planets. One famous example is the case of HD149026b, whose measured radius and mass suggest the presence of a core mass in the range 45-90 M_⊕ (Sato et al. 2005). We calculate the characteristic frequency ν_0 as a function of the core mass for a selection of exoplanets for which the presence of a core has been inferred. The results are given in Figure 11, which also includes the characteristic frequencies for Jupiter and Saturn, as a reference. The core mass ranges have been taken from Saumon & Guillot (2004) for Jupiter and Saturn, Sato et al. (2005) for HD149026b and Burrows et al.
(2007) for the other planets. It is clear that, in any case, with the radius and the mass of the object fixed, ν 0 is a decreasing function of the core mass. This can be easily explained: the presence of a dense core reduces the sound speed in the center of the planet (see, for example, Figure 1) which ultimately increases the integral planet dr c and, thus, diminishes ν 0 , which is inversely proportionnal to the latter. However, the sensitivity to the mass of the core is not identical among the selected objects. We can see that Saturn and HD149026b are much more influenced by the core mass than the others. This is due to their small radius and mass, compared to the other selected planets. Indeed, Saturn's radius is 0.83 R J , its mass is 0.30 M J , HD149026b's radius is 0.72 R J , its mass is 0.36 M J whereas all the other planets have radii within the range [1.0,1.23] R J and masses within the range [0.54,1.30] M J . In this way, for planets with a small radius and mass, the determination of ν 0 through observation can be a powerful tool to investigate the presence of a dense core. For example, for our models of HD149026b, ν 0 loses more than 26% of its value between a 45-M ⊕ core model and a 90-M ⊕ core model. Thus, even a rough estimate of the value of ν 0 might give us information on the core of this type of planet. To investigate the dependence of the low-degree, loworder eigenfrequencies on the core mass, we build several exoplanet models with the radius fixed at R P = 1.0 R J , the mass fixed at M P = 1.0 M J , and the core mass in the range 0-100 M ⊕ . Table 9 presents the various parameters of these models: the core mass, M c (in Earth units), the central pressure, p c (in Mbars), the central density, ρ c (in cgs units), the specific entropy, S (in k B /baryon), and the characteristic frequency, ν 0 (in µHz). The lowestorder eigenvalues of modal oscillations are given in Figure 12, as a function of the degree l, for models with a core mass equal to 0, 10, 20, 30, 50, and 100 M ⊕ , and with the radius and mass fixed at the Jovian values. When the radius and mass are fixed, the eigenfrequencies are monotonic functions of the core mass, but their direction of variation depends on their degree l and their radial order n. For l ≥ 2, the eigenvalues of the f-modes slowly increase when the core mass increases. At high n, for every l, it can be seen that the eigenfrequencies are decreasing functions of the core mass, all things being equal. For instance, numerically, between the coreless model and the model with a core mass fixed at 100 M ⊕ , the eigenvalues decrease by ∼15% at high n (n ∈ [10, 12]), for every l ∈ [0, 8]. Thus, for a given degree l, the f-modes and the high-order p-modes, as functions of the core mass, have opposite directions of variation. Consequently, the frequency range for a given l shrinks when the core mass increases. If the low-degree, loworder modes were to be unambiguously identified by its spherical harmonic quantum numbers, the corresponding frequency range may constrain the presence of heavy elements in the deep interior. We have also constructed several exoplanet models with the mass fixed at M P = 1.0 M J , and the specific entropy fixed at S = 6.67 k B /baryon, which is the specific entropy of our coreless model with R P = 1.0 R J and M P = 1.0 M J . 
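The integral argument given above (a dense core lowers the central sound speed, which increases ∫ dr/c_0 and therefore lowers ν_0) can be demonstrated with a toy two-layer profile, as sketched below. The profile and the factor by which the core region's sound speed is lowered are purely illustrative and are not one of the Table 9 interior models.

```python
import numpy as np

def nu0_microhz(r, c0):
    """nu_0 = [2 * integral dr/c0]^-1, returned in microHz (r in cm, c0 in cm/s)."""
    return 1.0e6 / (2.0 * np.trapz(1.0 / c0, r))

R = 7.15e9                      # cm, ~1 R_J
r = np.linspace(1.0e6, R, 4000)

# Coreless toy envelope profile (illustrative numbers only).
c_envelope = 4.0e6 * np.sqrt(1.0 - 0.95 * (r / R) ** 2)

# Same envelope, but with the sound speed lowered inside r < 0.15 R to mimic
# a dense core; the factor 0.5 is arbitrary and only for demonstration.
c_with_core = np.where(r < 0.15 * R, 0.5 * c_envelope, c_envelope)

print("coreless :", round(nu0_microhz(r, c_envelope), 1), "microHz")
print("with core:", round(nu0_microhz(r, c_with_core), 1), "microHz")
```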
These models possess a core with a mass in the range 0-100 M ⊕ and this set of planet models approximately probes the situation in which, for a given planet mass and a given age, the presence of a dense core shrinks the radius. Table 10 presents various parameters of these models: the core mass, M c (in Earth units), the planet radius, R P (in Jupiter units), the central pressure, p c (in Mbars), the central density, ρ c (in cgs units), and the characteristic frequency, ν 0 (in µHz). For the selected values of planet mass (M P = 1.0 M J ) and entropy (S = 6.67 k B /baryon), the shrinking of the radius with core mass is visible. For instance, the planet radius decreases by ∼20 % when a 100-M ⊕ core is added. The direct consequence of the reduction of the radius is the increase of the characteristic frequency ν 0 . Qualitatively, the increase of ν 0 is consistent with its dependence on the planet radius, discussed in section 4.2.2. For the fixed radius and entropy, the eigenvalues of the lowest-order modal oscillations are given in Figure 12, as a function of the degree l, for models with a core mass equal to 0, 10, 20, 50, and 100 M ⊕ . When the mass and the entropy of the planet models are fixed, the frequency range of the low-degree, low-order p-modes decreases as the core mass increases. This is again a direct consequence of the shrinking of the planet radius as the core mass goes up, for given mass and entropy. However, as can be seen, the frequency spectra of the models defined by M c = 0 M ⊕ and M c = 10 M ⊕ are quite similar for l ∈ [0, 3] and n ∈ [0, 12]. In particular, for l = 1 and 2, the eigenfrequencies of these two models differ by less than 1% for the frequency range considered. These similarities may be due to two opposite effects. First, we know that the presence of a 10-M ⊕ dense core shrinks the model radius by 2% for the selected values of planet mass and entropy (see Table 10). When the planet mass is fixed, this decrease of the radius causes an increase of the low-degree, low-order eigenvalues, as already discussed in section 4.2.2. On the other hand, when the planet radius and the planet mass are both fixed, the presence of a dense core implies a decrease of the same eigenvalues. Figure 12 shows that, for the selected values, for low core mass and low degree l, the presence of a dense core compensates for the effect of the radius reduction on the eigenvalues. Nonetheless, it can be seen that, for higher degrees l (l ≥ 4) and for higher core masses (M c ≥ 20 M ⊕ ), the effects of the radius reduction exceed the pure effect of the core. Dependence on the metallicity If not contained in a dense core, heavy elements can be laced throughout the envelope (Guillot 2005). In this spirit, we investigate the influence of heavy elements in the envelope itself, regardless of the presence of a core. Though there is no published robust equation of state that properly includes heavy elements beyond helium, we can mimic their presence by using a higher helium mass fraction than Y = 0.25 (Guillot 2008;Spiegel et al. 2011). We assume the excess of helium mass fraction ∆Y (compared with the default value Y 0 = 0.25) is given by the value of the metallicity Z: Using the value from Asplund et al. (2009), we take Z ⊙ = 0.014, to be the heavy element mass fraction in the Sun. In this way, an helium fraction of Y = 0.30 mimics a metallicity equal to ∼3.6 × solar metallicity. 
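From the numbers quoted above (Z_⊙ = 0.014, and Y = 0.30 corresponding to ∼3.6 × solar metallicity), the mapping appears to be a one-to-one substitution ΔY = Z on top of Y_0 = 0.25; the small helper below encodes that reading, which is an inference from the quoted values rather than a stated formula.

```python
Z_SUN = 0.014   # solar heavy-element mass fraction (Asplund et al. 2009)
Y0 = 0.25       # default helium mass fraction of the envelope

def mimic_metallicity(z_over_solar):
    """Helium mass fraction used to mimic a heavy-element fraction Z (assumes dY = Z)."""
    return Y0 + z_over_solar * Z_SUN

# Y = 0.30 should correspond to roughly 3.6 x solar metallicity, as in the text.
print(round(mimic_metallicity(3.6), 3))
```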
We build coreless exoplanet models with helium mass fractions of 0.25 and 0.30, with the radius and mass fixed at the Jovian values. The corresponding parameters, in particular the characteristic frequencies ν 0 , are given in Table 11. As can be seen, a higher helium mass fraction, hence a higher fraction of heavy elements in the envelope, tends to lower the value of ν 0 , all things being equal. Figure 14 portrays the low-degree, low-order eigenfrequencies of the modal oscillations for the models of Table 11. It appears that the remarks made concerning Figure 12, which deals with the dependence on the core mass, can also be made for Figure 14. Such similarities are expected since, in both cases, it is the dependence on the global fraction of heavy elements that is under consideration. As can be seen in Figure 14, for low l and very low n, the eigenfrequencies slightly increase with the helium mass fraction Y, whereas for higher l and n, the eigenvalues unambiguously decrease with Y. As previously stated when discussing the presence of a dense core, if the low-degree, low-order modes were to be unambiguously identified, the corresponding frequency spectrum might give us a hint of the presence of heavy elements in the interior.

Evolution of ν 0 with time

To investigate the evolution of ν 0 for a given exoplanet, we build simple evolutionary models of exoplanets with a mass fixed at 0.5, 1.0, 2.0, 10, and 20 M J . We use the default formalism and modeling tools outlined in Burrows et al. (1995, 1997, 2001). The planets are in isolation, which means that no stellar irradiation is taken into account. No core has been added. As the specific entropy decreases with time, the planet's radius decreases. Figure 15 portrays the evolution of ν 0 and the planet radius up to 5 Gyrs for the five fixed planet masses considered. The large early radii of the models result in small values for the characteristic frequencies, compared to their final values. Numerically, ν 0 (0) is between 16% and 22% of the final ν 0 values, for the five models considered here. As the radius stabilizes, so does ν 0 . We fit the curves with straight lines in the region [2, 5] Gyrs, and we calculate the corresponding derivatives. We find that the derivative increases with the planet mass, from ∼3.1 µHz Gyrs −1 for M P = 0.5 M J , to ∼8.2 µHz Gyrs −1 for M P = 20 M J .

CONCLUSION

We have calculated the eigenfrequencies and eigenfunctions of the pulsational modes of planets for a broad range of giant exoplanet models. In particular, we have investigated the dependence of the characteristic frequency ν 0 , the eigenfrequencies of low-degree f-modes, and the eigenfrequencies of low-degree p-modes on several parameters: the planet mass, the planet radius, the core mass, and the helium fraction in the envelope. We provide the corresponding eigenvalues for a degree l up to 8 and a radial order n up to 12. We also present values of ν 0 for 174 known giant exoplanets, and highlight the strong dependence on the radius, around any value of the planet mass. For Jupiter and Saturn, we find that our results are consistent with both observational data (Gaulme et al. 2011) and previous theoretical work (Provost et al. 1993; Mosser et al. 1994; Gudkova & Zharkov 1999). In the specific case of Jupiter, we presented ν 0 and the low-degree, low-order eigenfrequencies of acoustic modes as a function of the core mass.
We conclude that, for nonradial oscillations (l ≥ 1), the sensitivity of the pulsation frequencies to the core mass is important both for the f-modes and for the high-order p-modes. However, for specific intermediate values of radial order n, this sensitivity is minimal over the whole range of core masses considered. Focusing on giant exoplanets, we find that the dependence of the characteristic frequency on the core mass is more important for small radii and masses. As an example, the characteristic frequency of HD149026b, with measured radius and mass of 0.72 R J and 0.36 M J , varies by more than 26% across the range of core masses considered. We quantify the influence of the core mass on exoplanet models with arbitrary fixed mass and radius. For l ∈ [0, 8] and for frequencies up to 2.6 mHz, we find that eigenfrequencies shrink as the core mass increases, which is consistent with previous work on Jupiter. A massive core (M c = 100 M ⊕ ) induces a reduction in ν n,l of ∼15% for n ≥ 10 compared to coreless values. We also develop an approach to quantify the influence on the eigenfrequency spectrum of a high heavy-element fraction in the planet envelope. We find that, quantitatively, the presence of heavy elements in the envelope affects the eigenvalue distribution in ways similar to the presence of a dense core. We have also quantified the influence of mass and radius on the modal oscillations of giant exoplanets. For the selected values of l and n, we find that the pulsation eigenfrequencies depend strongly on both parameters, especially at high frequency. This dependence can be measured through ν 0 . For the mass range 0.5 ≤ M P ≤ 15 M J , and fixing the planet radius to its Jovian value, we find that ν 0 ∼ 164.0 × (M P /M J ) 0.48 µHz. For the radius range from 0.9 to 2.0 R J , and fixing the planet's mass to its Jovian value, we find that ν 0 ∼ 164.0 × (R P /R J ) −2.09 µHz. These variations of ν 0 directly affect the high-frequency spectrum of modal oscillations.

We thank Dave Spiegel and Sudhir Raskutti for helpful discussions. The authors would also like to acknowledge support in part under HST grants HST-GO-12181.04-A, HST-GO-12314.03-A, and HST-GO-12550.02, and JPL/Spitzer Agreements 1417122, 1348668, 1371432, 1377197, and 1439064.

NONRADIAL OSCILLATION EIGENVALUE PROBLEM

For adiabatic, nonradial oscillations of nonrotating, spherically symmetric planetary models, the governing differential equations are (Unno et al. 1989, eqs. 14.2-14.4):

$$\frac{1}{r^2}\frac{d}{dr}\left(r^2 \xi_r\right) - \frac{g}{c^2}\,\xi_r + \left(1 - \frac{L_l^2}{\sigma^2}\right)\frac{p'}{\rho_0 c^2} = \frac{l(l+1)}{\sigma^2 r^2}\,\Phi' ,$$

$$\frac{1}{\rho_0}\frac{dp'}{dr} + \frac{g}{\rho_0 c^2}\,p' + \left(N^2 - \sigma^2\right)\xi_r = -\frac{d\Phi'}{dr} ,$$

and

$$\frac{1}{r^2}\frac{d}{dr}\left(r^2 \frac{d\Phi'}{dr}\right) - \frac{l(l+1)}{r^2}\,\Phi' = 4\pi G \rho_0 \left(\frac{p'}{\rho_0 c^2} + \frac{N^2}{g}\,\xi_r\right) ,$$

where σ is the oscillation frequency and ξ_r is the radial displacement; c = (Γp 0 /ρ 0 ) 1/2 is the sound speed; the Lamb frequency, L l , is

$$L_l^2 = \frac{l(l+1)\,c^2}{r^2},$$

and the Brunt-Väisälä frequency, N, is

$$N^2 = g\left(\frac{1}{\Gamma}\frac{d\ln p_0}{dr} - \frac{d\ln \rho_0}{dr}\right).$$

Γ = (d ln p 0 /d ln ρ 0 ) ad is the adiabatic exponent, G is the gravitational constant, g = GM r /r 2 is the gravitational acceleration, and r is the radius. Primed variables refer to the Eulerian perturbation at a given position; zero subscripts refer to the equilibrium value. Equations (1)-(3) are the full fourth-order set of differential equations. There are four corresponding boundary conditions (Unno et al. 1989, eqs. 14.8-14.11). At r = 0, regularity of the solutions requires

$$\xi_r - \frac{l}{\sigma^2 r}\left(\frac{p'}{\rho_0} + \Phi'\right) = 0 \qquad \text{and} \qquad \frac{d\Phi'}{dr} - \frac{l}{r}\,\Phi' = 0 .$$

At r = R, where R is the radius of the planet,

$$\frac{d\Phi'}{dr} + \frac{l+1}{r}\,\Phi' = 0 ,$$

and the fourth condition is the mechanical condition on δp at the surface, equation (9). Here, δ is the Lagrangian perturbation for a given fluid element. Equation (9) has various limiting forms, depending on the physical conditions at r = R. For the case in which the density and pressure vanish at the surface, (9) can be written (Unno et al. 1989, eq. 14.12):

$$\delta p = p' + \xi_r\,\frac{dp_0}{dr} = 0 .$$

This condition is valid whenever the pressure scale height H satisfies H ≪ R at r = R. Thus, we need to estimate the pressure scale height, H, for every input model of the planet.
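A rough way to perform that estimate is the isothermal scale height H = k B T/(μ m u g) evaluated at the outer boundary. The sketch below is illustrative only; the 1-bar temperature, mean molecular weight, and gravity it uses are assumed, Saturn-like round numbers, not values taken from the paper.

```python
# Hedged sketch: order-of-magnitude pressure scale height at a 1-bar level,
# H = k_B * T / (mu * m_u * g).  All input values below are assumptions
# (representative of a Saturn-like outer boundary), not the paper's numbers.
k_B = 1.380649e-23      # Boltzmann constant [J/K]
m_u = 1.660539e-27      # atomic mass unit [kg]
T   = 135.0             # assumed 1-bar temperature [K]
mu  = 2.3               # assumed H/He mean molecular weight
g   = 10.4              # assumed surface gravity [m/s^2]

H = k_B * T / (mu * m_u * g)
print(f"H ~ {H / 1e3:.0f} km")   # a few tens of km, far smaller than the radius
```

With numbers of this order, H/R is roughly 10 −3 or less, so the free-boundary form of the surface condition is well justified.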
As an example, Saturn's value is about 40 km at 1 bar (Marley 1990), making this condition for a free boundary appropriate. The differential equations, boundary conditions, and a normalization condition at r = R, ξ r /r = 1, comprise the eigenvalue problem.

EIGENVALUE CALCULATION: THE DIMENSIONLESS PROBLEM

The eigenvalue differential equations (Eqs. 1-3) may be recast as four first-order differential equations with four dimensionless variables (Unno et al. 1989). The variables are

$$y_1 = \frac{\xi_r}{r}, \qquad y_2 = \frac{1}{gr}\left(\frac{p'}{\rho_0} + \Phi'\right), \qquad y_3 = \frac{\Phi'}{gr}, \qquad y_4 = \frac{1}{g}\frac{d\Phi'}{dr}.$$

The dimensionless variable x = r/R is used in place of r. The resulting four first-order equations in x are the standard dimensionless form given by Unno et al. (1989) for these variables. The dimensionless quantities entering them are

$$V_g = \frac{g r \rho_0}{\Gamma p_0}, \qquad U = \frac{4\pi \rho_0 r^3}{M_r}, \qquad c_1 = \left(\frac{r}{R}\right)^3 \frac{M_P}{M_r}, \qquad A^* = \frac{r N^2}{g},$$

together with the dimensionless frequency

$$\omega^2 = \frac{\sigma^2 R^3}{G M_P}.$$

The dimensionless boundary conditions are

$$\frac{c_1 \omega^2}{l}\, y_1 - y_2 = 0 \quad \text{at } r = 0, \qquad (23)$$

$$l\, y_3 - y_4 = 0 \quad \text{at } r = 0, \qquad (24)$$

$$(l + 1)\, y_3 + y_4 = 0 \quad \text{at } r = R, \qquad (25)$$

$$y_1 - y_2 + y_3 = 0 \quad \text{at } r = R. \qquad (26)$$

NUMERICAL IMPLEMENTATION OF THE OSCILLATION SOLUTIONS

Using the previous equations and boundary conditions, we build two linearly independent solutions that satisfy the appropriate boundary conditions at the center and two linearly independent solutions that satisfy the appropriate boundary conditions at the surface. These solutions are carried out using a fifth-order Runge-Kutta method with adaptive stepsize to ensure accuracy. The solution vectors y = (y i ) are, respectively: y C,1 (x), y C,2 (x), y S,1 (x), and y S,2 (x), where the superscripts C and S denote a solution integrated from the center and the surface, respectively. A continuous match of the interior and exterior solutions at an arbitrary fitting point x f requires the existence of non-zero constants K C,1 , K C,2 , K S,1 , and K S,2 such that (Christensen-Dalsgaard 2003)

$$K^{C,1}\, y^{C,1}_i(x_f) + K^{C,2}\, y^{C,2}_i(x_f) = K^{S,1}\, y^{S,1}_i(x_f) + K^{S,2}\, y^{S,2}_i(x_f)$$

for all i ∈ {1, 2, 3, 4}. This set of equations has a solution only if the corresponding determinant, ∆ f (ω 2 ), vanishes. Hence, the eigenfrequencies are determined as the zeros of ∆ f (ω 2 ). This determinant has the advantage of behaving smoothly over the whole frequency spectrum and allows us to use a simple bisection method to find all the roots of ∆ f while scanning a given interval of frequency. The roots of ∆ f are supposed to be independent both of the choice of the fitting point and of the initial values of y C,1 , y C,2 , y S,1 , and y S,2 . However, it is possible to control the amplitude of the determinant by using the regular solutions near the center and the surface given in Unno et al. (1989). Thus, we define the following initial values for the center: y C,1 3 = f 1 · y C,1 1 and y C,2 3 = f 2 · y C,2 1 ; and for the surface: y S,1 3 = g 1 · y S,1 1 and y S,2 3 = g 2 · y S,2 1 , where f 1 , f 2 , g 1 , and g 2 are arbitrary coefficients such that f 1 ≠ f 2 and g 1 ≠ g 2 . Then, for any particular frequency (actually ω 2 ), the values of y C,j i and y S,j i for i = 2, 4 and j = 1, 2 are fixed by the boundary conditions (23)-(26).

NUMERICAL IMPLEMENTATION OF THE PLANET INTERIOR PROFILES

Using the classical equations of hydrostatic equilibrium, the interior profiles are derived using a fifth-order Runge-Kutta method with adaptive stepsize to ensure accuracy. As a consequence, the number of points of the radial grid is not constant, but is about 1500. The density, pressure, sound speed, and gravitational acceleration are obtained by linear interpolation. Three layers are considered for the planet interiors: an adiabatic atmosphere, a hydrogen-helium envelope, and an olivine core. If a core mass is specified, the adaptive stepsize permits the code to carry out the profile solution from the center to the exact radius within which the specified core mass is contained.
At this point, pressure continuity is ensured, whereas density, temperature, and sound speed are recalculated using the equation of state of the hydrogen-helium envelope. The transition between the atmosphere and the envelope has been smoothed through linear interpolation to ensure the continuity of density, pressure, and sound speed.

Table and figure captions:
Table 1: Eigenfrequencies of p-modes of degree 0 and 1 for a polytrope of index 3, in µHz. The radius is R P = 6.9599 × 10 10 cm and the mass is M P = 1.989 × 10 33 g.
Table 2: Eigenfrequencies of p-modes of degree 2 and 3 for a polytrope of index 3, in µHz (same radius and mass).
Table 3: Eigenfrequencies of g-modes for a polytrope of index 3, in µHz (same radius and mass).
Table 6: Characteristic frequency ν 0 (in µHz) for a set of coreless models using the estimated radius and mass (in Jupiter units) of detected giant exoplanets. The helium mass fraction has been fixed at 0.25 in the entire envelope; no core has been added; estimated radii and masses have been taken from http://www.exoplanet.eu
Table 10 (fragment): models with core mass M c , a planet mass fixed at M P = 1.0 M J , and a specific entropy fixed to 6.67 k B /baryon (the value for a coreless model with R P = 1.0 R J and M P = 1.0 M J ); the helium mass fraction in the envelope is 0.25.
Figure captions (fragments): distribution of sound speed c 0 (km s −1 ) and of pressure p 0 (Mbar, logarithmic scale) as a function of the relative radius r/R for coreless exoplanet models of various masses, with the radius fixed at R P = 1.0 R J ; low-order eigenfrequencies of oscillation modes (in mHz) as a function of the degree l for coreless models of various masses, for models of various core masses, and for models with two different helium mass fractions Y in the envelope (radius and mass fixed at the Jovian values; for clarity, the eigenvalue dots of the different models are drawn on both sides of each integer degree l); evolution of ν 0 and of the planet radius (in Jupiter units) for coreless planetary models evolving in isolation, with no irradiation taken into account.
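To make the root search described in the oscillation appendix concrete, the following is a hedged sketch of scanning a frequency interval for sign changes of the matching determinant and refining each bracketed root by bisection. It is not the authors' code: delta_f stands for a hypothetical user-supplied routine that integrates the two center and two surface solutions to the fitting point and returns the 4 × 4 matching determinant, and the scan parameters are arbitrary.

```python
# Hedged sketch (not the paper's implementation): locate eigenfrequencies as
# zeros of a matching determinant Delta_f(omega^2) by scanning for sign changes
# and refining each bracket with bisection.
from typing import Callable, List

def find_eigenvalues(delta_f: Callable[[float], float],
                     w2_min: float, w2_max: float,
                     n_scan: int = 2000, tol: float = 1e-10) -> List[float]:
    """Return approximate roots of delta_f on [w2_min, w2_max]."""
    roots: List[float] = []
    step = (w2_max - w2_min) / n_scan
    a, fa = w2_min, delta_f(w2_min)
    for i in range(1, n_scan + 1):
        b = w2_min + i * step
        fb = delta_f(b)
        if fa * fb < 0.0:                      # sign change brackets a root
            lo, hi, flo = a, b, fa
            while hi - lo > tol * max(1.0, abs(hi)):
                mid = 0.5 * (lo + hi)
                fm = delta_f(mid)
                if flo * fm <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            roots.append(0.5 * (lo + hi))
        a, fa = b, fb
    return roots

# Toy usage with an artificial "determinant" (purely illustrative):
if __name__ == "__main__":
    import math
    print(find_eigenvalues(lambda w2: math.sin(w2), 0.5, 20.0))
```

Because the determinant varies smoothly with ω 2 , as noted above, a scan fine enough to separate neighboring modes is sufficient for the bracketing step.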
The Pros and Cons of Diagnosing Diabetes With A1C

An International Expert Committee was convened in 2008 by the American Diabetes Association (ADA), the European Association for the Study of Diabetes, and the International Diabetes Federation to consider the means for diagnosing diabetes in nonpregnant individuals, with particular focus on the possibility of indicating A1C as an alternative, if not a better, tool (1). After reviewing the available literature and a thorough discussion of the advantages and the limits of previous diagnostic strategies (essentially based on fasting glucose assessment) and the considered alternative approach (based on A1C measurement), a consensus was reached that the latter (i.e., A1C) should be included among diagnostic tools for diabetes and, with the exception of a number of clinical conditions, should even be preferred in diabetes diagnosis in nonpregnant adults. The main conclusion of the International Expert Committee was implemented in the most recent clinical recommendations issued by the ADA. However, in these guidelines, A1C is indicated as a diagnostic tool that is an alternative, but not superior, to blood glucose, leaving to the health care professional the decision about which test to use in an individual. The World Health Organization is currently examining the proposal made by the International Expert Committee and is carefully addressing the controversial issues still remaining, most of which have been the subject of letters to the editor and articles recently published in the literature.
Nevertheless, the use of A1C for diagnosing diabetes is rapidly becoming a reality in many Western countries. In the text that follows, one of us (E.B.) will present the main points supporting A1C (pros) and the other (J.T.) will illustrate the main counterpoints challenging A1C (cons) as the primary tool for diabetes diagnosis. The text has been prepared in full coordination and the final conclusions represent the opinion of both authors. Tables 1 and 2 summarize the pros and cons. PROs A1C captures chronic hyperglycemia better than two assessments of fasting or 2-h oral glucose tolerance test plasma glucose Diabetes has been diagnosed for decades with fasting plasma glucose (FPG) assessment or, much less frequently, with an oral glucose tolerance test (OGTT). Hyperglycemia as the biochemical hallmark of diabetes is unquestionable. However, fasting and 2-h OGTT gauge just a moment of a single day. In addition, the two assessments required to confirm diagnosis might be fallacious in describing a chronic and complex clinical condition. In this respect, there is no doubt that a biochemical or clinical parameter describing the extent of a biological phenomenon over a long period provides a more robust indicator of glycemia than a parameter describing it in the short term or in a given moment only. Accordingly, there are some good examples in medicine: urinary albumin excretion rate provides more reliable information on the presence and the degree of microalbuminuria than spot urinary albumin-to-creatinine ratio; serum IGF-I is definitely more efficacious than serum growth hormone when monitoring patients with acromegaly, etc. Labeling a person with a diagnosis of diabetes has several psychological and legal implications and requires a robust and reliable approach. The measurement of A1C equals the assessment of hundreds (virtually thousands) of fasting glucose levels and also captures postprandial glucose peaks; therefore, it is a more robust and reliable measurement than FPG and/or 2-h OGTT plasma glucose. This is particularly valid when FPG oscillates above and below the cut point of 126 mg/dL or 2-h plasma glucose (PG) oscillates above and below the cut point of 200 mg/dL. Of note, the 2-h PG had poor reproducibility. From a clinical standpoint, having an FPG of 120 or 130 mg/dL or having a 2-h PG of 185 or 215 is virtually the same, but from the patient's perspective (perception of having a disease, psychological well-being, health insurance, recognition of particular benefits, or imposition of certain limitations, etc.), it makes a substantial difference. Therefore, a diagnostic tool gauging chronic rather than spot hyperglycemia is certainly preferable. A1C is better associated with chronic complications than FPG Different from National Diabetes Data Group criteria, which were essentially based on distribution of glucose levels within the general population, the 1997 ADA criteria (and the subsequently recommended World Health Organization criteria) established diabetic glycemic levels by means of their association with retinopathy, the most exclusive and specific diabetes complication. Various observational studies documented that an increased prevalence of nonproliferative diabetic retinopathy can be observed with fasting glucose levels around 7.0 mmol/L (126 mg/dL) and 2-h PG around 11.1 mmol/L (200 mg/dL). Interestingly, the same studies documented that retinopathy increased with A1C levels around 6.5% (2)(3)(4). 
These results were confirmed in a more recent study including almost 30,000 subjects recruited in several countries. Such study clearly showed that prevalent retinopathy started to increase in the A1C category of 6.5-7.0% (5). Therefore, a cut point of A1C for diagnosing diabetes with an approach similar to the one used with FPG and 2-h PG is available (and indeed already was available in older studies). It is well known that cardiovascular disease (CVD) is the most frequent chronic complication of diabetes, with incidence rates 5-to 10-fold higher than with microvascular disease. For this reason, the association of A1C with CVD can be considered a major issue when discussing the potential use of A1C for diagnosing diabetes. In this regard, it is worth mentioning that, in the general population, FPG is a poor marker of future CVD events, whereas 2-h OGTT and A1C are good predictors (6,7). Fasting is not needed for A1C assessment and no acute perturbations (e.g., stress, diet, exercise) affect A1C Plasma glucose levels are not stable but rather vary throughout the day, mainly in postprandial periods. Although it is believed that fasting glucose levels are reproducible across days, a number of acute perturbations of glucose homeostasis have been described. Acute stress can increase endogenous glucose production substantially and impair glucose utilization. People who are worried about blood sampling or experience a stressful situation in the hours preceding blood sampling can have an increase in fasting glucose concentration. On the contrary, exercise can decrease glucose levels, and an evening or early-morning session of physical exercise can affect the level of fasting glycemia. Moreover, most individuals do not pay attention to the request or are not asked to consume a diet with at least 200 g carbohydrate in the days before testing glucose. Some individuals do not abstain from food in the 8 h before testing, thus arriving to the laboratory in the postabsorptive rather than fasting condition. In addition, smoking or taking certain medications can adversely affect fasting glucose. The lack of appropriate preparation for glucose testing makes FPG less reliable for diabetes diagnosis, with results sometimes falsely elevated and sometimes apparently normal. On the contrary, A1C is not influenced by acute perturbations or insufficient fasting. Indeed, A1C can be measured anytime, irrespective of fasting or feeding. A1C has a greater pre-analytical stability than plasma glucose Even when preparation to glucose testing is optimal, plasma glucose values may still be misleading because of pre-analytical instability. In fact, tubes for blood collection do not always contain antiglycolytic substances, and even when they do, significant glucose consumption occurs in blood cells in the first 1-2 h after sampling because glycolysis is inhibited in its more distal steps by NaF or other preservatives. As long as the sample is not processed and plasma and blood cells are separated by centrifugation, a significant glucose loss is observed. In this regard, it must be emphasized that, quite often, blood samples reach the laboratory and are processed hours after withdrawal. Consistently, glucose concentration decreases 5-7% (on average ;0.5 mmol/L) per hour and even more rapidly in cases of high ambient temperature (8,9). In such cases, glucose levels can show results lower than they are and diabetes diagnosis can be missed. It has been estimated that pre-analytical variability of FPG is 5-10%. 
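As a back-of-the-envelope illustration of the figures just quoted (an illustration only, not a calculation from the article), the average loss of about 0.5 mmol/L per hour implies

$$7.0\ \mathrm{mmol/L} \;-\; 2\ \mathrm{h} \times 0.5\ \mathrm{mmol/L\,h^{-1}} \;\approx\; 6.0\ \mathrm{mmol/L},$$

so a truly diabetic fasting sample that sits unprocessed for two hours can read below the 7.0 mmol/L (126 mg/dL) diagnostic cut point.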
On the contrary, pre-analytical variability of A1C is negligible. As for analytical variability, it is superimposable for glucose and A1C, being ∼2%.

Standardization of A1C assay is not inferior to standardization of glucose assay

One of the main concerns surrounding A1C and raising perplexities on its use for diabetes diagnosis is the poor standardization of the assay. Quite surprisingly, the same concerns and perplexities do not extend to A1C use for diabetes monitoring despite the understanding that only when A1C is aligned to the Diabetes Control and Complications Trial (DCCT)/UK Prospective Diabetes Study (UKPDS) standard should the recommended target be pursued (in general <7%). A great effort was made in the U.S. and other countries to make A1C reproducible across laboratories with an effective standardization program. Such a program has been recently completed and is being implemented worldwide to provide more reliable information to physicians who monitor diabetic patients (10). The standardization is expected to minimize laboratory biases and is a prerequisite to use A1C not only for monitoring but also for diagnosing diabetes. Although it is generally believed that glucose assay is highly reproducible across laboratories, this is not true. A recent survey conducted in 6,000 U.S. laboratories clearly documented a significant bias in glucose assessment in as many as 41% of them, yielding a misclassification of glucose tolerance in 12% of subjects (11). Therefore, the argument that A1C cannot be used for diabetes diagnosis because of poor standardization is no longer tenable.

Biological variability of A1C is lower than that for FPG

When the same subjects have two assessments of the available glucose-related parameters, the correlation is stronger among the individual A1C measurements than among the FPG or 2-h PG measurements. The coefficients of variation of A1C, FPG, and 2-h PG are 3.6, 5.7, and 16.6%, respectively (12). This reflects of course both biological and analytical variability. However, although the latter was similar for A1C and FPG (∼2%), biological variability of A1C was severalfold lower than that of FPG (<1 vs. ∼4%) (13). This finding confirms that the two required assessments of FPG to diagnose diabetes can provide quite unreliable information, whereas A1C, especially if measured twice as recommended, provides more robust clinical information.

Individual susceptibility to glycation might be an additional benefit of A1C assessment

It is a common clinical finding that many subjects have an A1C value lower or higher than expected when examining their daily glycemic profiles. Using the DCCT database, McCarter et al. (14) calculated the hemoglobin glycation index (HGI) as the difference between observed and predicted A1C level and identified categories of patients with low, moderate, or high HGI. Most interestingly, they found that subjects with high HGI had a greater risk of developing retinopathy and nephropathy, even when they had good glucose control, and that subjects with lower HGI had a low incidence of microangiopathy despite high mean blood glucose levels. This finding demonstrates that A1C assessment might provide not only information on chronic hyperglycemia but also a measure of whole-body susceptibility to protein glycation and, therefore, risks of diabetes complications that are more strictly related to this pathogenic mechanism.
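For reference, the index discussed in this subsection can be written compactly as

$$\mathrm{HGI} \;=\; \mathrm{A1C}_{\mathrm{observed}} \;-\; \mathrm{A1C}_{\mathrm{predicted}},$$

where, in the McCarter et al. analysis, the predicted value is the A1C expected for a patient's mean blood glucose from the population regression; a persistently positive HGI flags patients who glycate more than their glycemia alone would suggest.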
Using the same biomarker for diagnosing and monitoring diabetes might be an advantage A1C is used to monitor diabetes and to establish the degree of metabolic control. Deviation from individualized A1C targets prompts physicians to modify treatment strategies with lifestyle intervention and/or drug titration or changes. The use of A1C for diagnosing diabetes has the advantage that, in subjects with A1C $6.5% (i.e., diabetes), baseline A1C is already measured and deviation from target is immediately available (no A1C measurement as a second step after FPG assessment). In subjects with A1C of 6.00-6.49% (i.e., high risk of diabetes), an effective prevention strategy can be immediately undertaken with the awareness that a single A1C is definitely more reliable than a single FPG to stratify the risk of the disease. Yet, in subjects with A1C of 5.50-5.99% plus other diabetes risk factors (e.g., central obesity, atherogenic dyslipidemia, hypertension, and/or metabolic syndrome), counseling can be immediately offered because diabetes risk is substantial, and single A1C assessment is definitely more reliable than single FPG to capture chronically high-normal glucose levels. Pertinent to this issue is the firm belief that the implementation of the standardization of A1C assay would proceed more rapidly worldwide if A1C were to also be used for diagnosing diabetes. A1C assessment is crucial for diabetes monitoring, and establishing the individual A1C target definitely requires that the parameter is International Federation of Clinical Chemistry (IFCC) standardized and DCCT aligned. In fact, the A1C target and the deviation from it in the single patient remain totally uncertain when the laboratory provides A1C data that are not aligned to standard. Cost of the assay: savings or no savings? One of the major concerns raised by critics of the use of A1C for diagnosing diabetes is the higher cost of the assay when compared with FPG. There is no doubt that from an analytical point of view (cost of reagents and equipment), FPG is cheaper than A1C. However, other considerations about cost should be made. FPG assessment requires overnight fasting, whereas A1C can be assessed any time. This means that a person could go or could be driven by a relative/friend to the laboratory, even during lunch or in the late afternoon, avoiding loss of work hours. It is also possible to collect blood for A1C assessment in the evening and hand it to the laboratory in the following days. Yet, in subjects with FPG $7 mmol/L ($126 mg/dL), A1C assessment would be needed the next few days as a second step in a newly diagnosed diabetes workup. On the contrary, when A1C assessment yields a value $6.5%, the second step required to initiate diabetes monitoring after diagnosis would be completed, with a substantial savings of both analytical and nonanalytical costs. On the other hand, when using FPG to screen for diabetes and finding a value in the range of 5.6-6.9 mmol/L (100-125 mg/dL; impaired fasting glucose), an OGTT is frequently prescribed (mainly in Europe and less frequently in the U.S.) to establish glucose tolerance. This test requires hours in the laboratory, with additional analytical and nonanalytical costs. In such cases, which represent a sizable portion of the general population, A1C rather than FPG would provide an immediate diabetes diagnosis or a valuable risk stratification (15) without supplementary testing. 
Impact of changing the diagnostic laboratory parameter on epidemiology of diabetes

A further critique of the program of moving from FPG to A1C for diabetes diagnosis comes from people who state that epidemiology of the disease is based on FPG and that the scenario would change if A1C were used instead of FPG. A recent report based on the U.S. population (16) showed that the use of A1C rather than FPG would not significantly change diabetes prevalence and that the categorization would not change in as many as 97.7% of subjects. Moreover, this study showed that half of the subjects with FPG ≥7 mmol/L (≥126 mg/dL) had an A1C value in the 6.00-6.49% range, thus deserving strict monitoring and an intervention. In this regard, however, it should be emphasized that any comparison of A1C with FPG (or 2-h OGTT PG) is equivocal because a true gold standard is not available. FPG, which in classic studies relating glucose parameters (including A1C) to retinopathy was measured just one time and with less than optimal pre-analytical and analytical procedures, cannot be taken as the gold standard. Therefore, any study examining sensitivity and specificity of A1C for diagnosing diabetes suffers from these limitations and is questionable. At present, the gold standard is probably the combination of FPG, 2-h PG, and A1C assessments with optimal pre-analytical, analytical, and standardized procedures and confirmatory testing for all parameters. This is not feasible on a large-scale basis and cannot be recommended. A1C seems to be a reasonable approach for all reasons discussed above (summarized in Table 1).

Table 1 - Reasons to prefer A1C compared with plasma glucose determination for diagnosing diabetes:
- Chronic hyperglycemia is captured by A1C but not by FPG (even when repeated twice).
- Microangiopathic complications (retinopathy) are associated with A1C as strongly as with FPG.
- A1C is better related to cardiovascular disease than FPG.
- Fasting is not needed for A1C assessment.
- No acute perturbations (e.g., stress, diet, exercise, smoking) affect A1C.
- A1C has a greater pre-analytical stability than blood glucose.
- A1C has an analytical variability not inferior to blood glucose.
- Standardization of A1C assay is not inferior to blood glucose assay.
- Biological variability of A1C is lower than FPG and 2-h OGTT PG.
- Individual susceptibility to protein glycation might be caught by A1C.
- A1C can be used concomitantly for diagnosing and initiating diabetes monitoring.
- Diabetes assessment with A1C assay is not necessarily greater than with glucose assessment.

CONs

Diabetes is clinically defined by high blood glucose and not by glycation of proteins

The introduction of A1C as the diagnostic tool for diabetes, in particular if this parameter is considered the primary tool, will lead to a major change in the pathophysiological paradigm that defines the syndrome called "diabetes." So far, diabetes has been defined as "a clinical condition of elevated glucose concentration in blood." High A1C represents high glycation of proteins in the body, which is a substantially different biochemical abnormality, although it is certainly secondary to high blood glucose. In medicine, it is important to pay attention to primary phenomena before emphasizing the secondary ones. Moreover, high A1C is only observed subsequently to an increase in blood glucose, but there are few data on how long the delay is.
Regardless of the length of this delay (weeks, months), diagnosis of diabetes using A1C would occur later than with blood glucose assessment. In many cases, such a delay might have negative clinical consequences. A1C is a poor marker of important pathophysiological abnormalities featuring diabetes OGTT and 2-h post-glucose levels do reflect the pathophysiology behind diabetes better than any other glycemic parameter, since they provide information on what occurs in the postprandial state, when glucose levels are at the highest levels during the day and when the health of the pancreatic b-cell is essential. On the contrary, fasting glucose is the least informative among glycemic parameters, since in most subjects, it corresponds to the lowest glucose level during the day and it reflects the long nocturnal period when there is no intake of food and no particular stress for b-cells. However, humans spend most of their time in postprandial or postabsorptive states that are deranged in diabetes. A1C is a poor indicator of what occurs in the postprandial state. A1C captures only chronic hyperglycemia, but it will miss acute hyperglycemia. Normal blood glucose levels 2 h after glucose load indicates a good b-cell capacity, whereas high 2-h OGTT glucose levels document an impairment of b-cell function (17). This means that only 2-h OGTT PG can provide reliable information on the key pathophysiological defect of diabetes, also providing advice regarding the correct therapy to overcome it. This can be compared with ambulatory blood pressure monitoring (ABPM), where the main features predicting cardiovascular events are not only the long-term average blood pressure but the daily variation in blood pressure (especially the lack of a physiological nocturnal dip). Thus, ABPM is clinically useful in finding out blood pressure patterns, not estimating the long-term average. Recently, the Insulin Resistance Atherosclerosis Study (IRAS) showed that A1C is a weaker correlate of insulin resistance and insulin secretion in studies of metabolism compared with FPG and 2-h PG (18). A1C has a poor sensitivity in diabetes diagnosis and would change the epidemiology of diabetes Diabetes diagnosis based on A1C misses a large proportion of asymptomatic early cases of diabetes that can only be identified by the OGTT. According to a recent Chinese study, A1C sensitivity is inferior compared with fasting blood glucose at the population level (19). Also, people with impaired glucose tolerance (IGT), in whom the efficacy of diabetes prevention has been unequivocally proven (20), cannot be detected by A1C. Epidemiological studies carried out in the general population showed that A1C and plasma glucose (FPG and/or 2-h OGTT) identify partially different groups of diabetic subjects (21). A1C $6.5% identifies~30-40% of previously undiagnosed patients with diabetes (16). A larger percentage is detected by FPG (~50%) and 2-h PG (~90%). These findings are based on several recent studies, including the 2003-2006 NHANES (30% of diabetic individuals detected by A1C $6.5%, 46% by FPG $126 mg/dL, and 90% by 2-h PG $200 mg/dL) (18) and the IRAS (32, 45, and 87%, respectively) (18). In Qingdao, China, the $6.5% A1C cut point detects 30% of individuals with diabetes according to 2003 ADA criteria (19). In Chennai, India, however, A1C $6.5% detects 78% of individuals with newly diagnosed diabetes according to these criteria (22). 
In the IRAS, A1C of 5.7-6.4% predicted type 2 diabetes better with increasing BMI, and there were significant ethnic differences in the performance of A1C of 5.7-6.4% to detect diabetes (18). The ethnic differences in A1C compared with glucose measurements were also well demonstrated in the Diabetes Prevention Program population (23) and in a recent multiethnic database by Christensen et al. (24) that showed that there are no systematic interpretations as to why a shift to an A1C-based diagnosis for diabetes has substantially different consequences for diabetes prevalence across ethnic groups and populations. 2-h Glucose level and IGT are stronger predictors of CVD than A1C Because high glucose is toxic and causes many types of tissue damage, any indicator of hyperglycemia is predictive of diabetes complications. In the general population, FPG is a poor marker of mortality and future CVD events, whereas 2-h PG and A1C are better predictors (8,10,(25)(26)(27). When analyzed jointly, only 2-h PG remains a statistically significant predictor of mortality and CVD (28,29). The findings regarding associations of FPG, 2-h PG, and A1C with retinopathy from the Pima Indians in the ADA 1997 report describing diagnostic thresholds of each glycemic parameter were derived by univariate analyses, and the multivariate analysis aiming at identifying the best glycemic parameters for diagnosis has never been reported. One of the main issues is that people with IGT have~40% increased mortality compared with normoglycemic people, and these individuals cannot be identified by measuring FPG or A1C. In addition, lifestyle intervention has been shown to prevent the progression from IGT to diabetes and also reduce their mortality risk to the level observed among normoglycemic people (30,31). Such prevention trial evidence does not exist for A1C or FPG, and this evidence should not be forgotten when deciding the approaches to identify intermediate hyperglycemia. Moreover, these results indicate that early intervention is effective in reducing mortality in people with IGT, and therefore, we should attempt to make the diagnosis of hyperglycemia as early as possible. Fasting is not essential to identify perturbation in glucose metabolism Measuring blood glucose in the fasting state in nondiabetic individuals is probably the least efficient way to identify early signs of perturbations in glucose metabolism. Because excessive postprandial glucose excursions are marking the first signs of abnormal glucose regulation and they also seem to best predict cardiovascular outcome, fasting is not really the central issue. It is likely that fasting has been overemphasized in diagnosing type 2 diabetes. We may pay attention to approaches used in the diagnosis of high blood pressure that also vary markedly during the day, but despite this variation, we are able to identify individuals with hypertension, even though measurements are not restricted to certain hours of the day but are done at any time. Standardization of A1C assay is very poor and standardization of glucose assay is easier to implement Inaccuracies in measurement and poor standardization of A1C assays are still care.diabetesjournals.org DIABETES CARE, VOLUME 34, SUPPLEMENT 2, MAY 2011 S187 a common problem, even in Western countries. Although a less than perfect standardization also exists for plasma glucose, this assay might be more easily aligned to a standard than A1C. 
Such programs now exist in the U.S., Japan, and Sweden, but there is still a long way to a global standardization of the A1C assays. Actually, all glycemic assessments require confirmation to make the diagnosis of diabetes correctly, mainly to avoid errors in sample handling and laboratory procedures. A1C assay is unreliable and cannot be used in many subjects Abnormal hemoglobin traits are not uncommon in many regions of the world, and they significantly interfere with A1C assay (32), leading to spurious results. Also, there are several clinical conditions that influence erythrocyte turnover (e.g., malaria, chronic anemia, major blood loss, hemolysis, uremia, pregnancy, smoking, and various infections) that are responsible for misleading A1C data. Still, we are aware of ethnic differences in the relation between blood glucose and A1C levels (33) as well as an effect of aging. If different cut points regarding all these conditions need to be considered, A1C cannot be easily used to diagnose diabetes. Within-day biological variability of plasma glucose might unveil disturbance of glucose metabolism Biological variability in plasma glucose reflects our daily patterns of diet, physical and mental activity, sleep, etc., and also depends on possible pathophysiological processes that may underlie type 2 diabetes. By definition, postprandial, and also 2-h PG, vary more than FPG. In this regard, A1C, which does not have any substantial biological variability, provides little information on pathophysiological processes leading to type 2 diabetes. The variability in A1C is entirely due to other phenomena, not pathophysiological disturbances. Individual susceptibility to glycation of hemoglobin is not relevant to diabetes diagnosis The HGI was calculated in patients with type 1 diabetes from the DCCT (17). This parameter is not relevant to the diagnosis of diabetes in the general population, in which 99% of subjects have A1C levels definitely lower than patients with type 1 diabetes. Subjects with high HGI had a greater risk of developing retinopathy and nephropathy, even when they had good glucose control (i.e., FPG was not very high), whereas subjects with lower HGI had a very low incidence of microangiopathy despite high mean blood glucose levels. This finding indicates that postprandial glucose excursions must have been very high in the former and very low in the latter. A1C reflects high mean exposure to glucose but not glucose fluctuations during the day. Unfortunately, in this analysis with HGI, postprandial glucose excursions and daily glucose variability were not taken into account. Using the same biomarker for diagnosing and monitoring diabetes might not have positive effects only This approach may be useful, but it also may lead to problems in two ways. First, people who have diabetes (based on their glucose values) will remain undiagnosed and untreated, since they are considered "nondiabetic" according to their A1C. Also, if the intermediate level of A1C (6.00-6.49 or 5.70-6.49%) was used to predict diabetes, it performed less well than impaired fasting glucose and/or IGT (18). Whereas the 6.5% A1C threshold misses a large percentage of previously undiagnosed diabetes, its clinical consequences remain unknown. It is important to recognize this problem. 
One obvious consequence is that with a less sensitive test, individuals who fall below the threshold are not considered in cardiovascular management algorithms as high-risk individuals and are probably treated less effectively for other risk factors. Second, a large proportion of newly diagnosed diabetic patients based on current glucose criteria have A1C <6.5%. In the Finnish Diabetes Prevention Study, the sensitivity of A1C ≥6.5% to diagnose diabetes was only 39%, i.e., 61% of newly diagnosed case subjects had A1C <6.5% (34). If this same threshold were to be used for treatment, these patients would not be accepted to be treated, even though their glucose levels were twice the glucose threshold for diabetes. This would also mean that in 61% of high-risk people who were regularly monitored for diabetes, the actual diagnosis would have been delayed; for how long, we do not know, since diabetic people were referred to antidiabetic therapy based on their high glucose values.

Table 2 - Main counterpoints (cons) challenging the use of A1C for diagnosing diabetes:
- Diabetes is clinically defined by high blood glucose and not by glycation of proteins.
- A1C is a poor marker of important pathophysiological abnormalities featuring diabetes.
- A1C has a poor sensitivity in diabetes diagnosis and would change the epidemiology of diabetes.
- 2-h glucose level and IGT are stronger predictors of CVD than A1C.
- Fasting is not essential to identify perturbation in glucose metabolism.
- Standardization of A1C assay is poor, even in Western countries, and standardization of glucose assay would be easier to implement.
- In many subjects, A1C assay is unreliable and cannot be used.
- A1C has significant differences in various ethnic groups, which are poorly understood and characterized.
- Within-day biological variability of plasma glucose might unveil disturbance of glucose metabolism.
- Individual susceptibility to glycation of hemoglobin is not relevant to diabetes diagnosis.
- Using the same biomarker for diagnosing and monitoring diabetes might have negative effects.
- Cost of the assay: glucose is unquestionably cheaper than A1C, and A1C assay is not available on a large scale in most of the countries.
- A1C levels vary not only according to glycemia, but also to erythrocyte turnover rates (e.g., hemoglobinopathies, malaria, anemia, blood loss) as well as other factors.
- Correlation between A1C and FPG is ~0.85, which means that as many as 30% of the variation in FPG is not explained by A1C and vice versa.
- Nothing is known about changes in A1C during the development of diabetes.
- A1C levels of 6.0-6.5% do not predict diabetes as effectively as FPG and 2-h PG (OGTT).
- Sensitivity of A1C to detect diabetes defined by the OGTT is <50%; thus, the majority of diabetic individuals will remain undiagnosed if A1C is used.
- The levels of A1C predicting future retinopathy, nephropathy, etc., in the population are not well established (<6.5%?).
- No diabetes prevention trials have selected their populations based on A1C.
- Using A1C will delay the diagnosis of diabetes in ~60% of incident cases.

Cost of the assay: glucose is unquestionably cheaper than A1C

Whichever way we calculate the assay costs, A1C assay is more expensive than glucose assay, and it will thus remain so despite the speculative claim that the cost of A1C assay will become less expensive when used more extensively.
In addition, many individuals at high risk of diabetes would need other laboratory tests that require fasting (e.g., lipid profile, hepatic profile, etc.), and therefore adding a glucose determination to the panel is not really a major issue. Also, the vast majority of laboratories in primary care collect samples in the morning, and they do not operate after "working hours." This makes the claim that A1C can be measured "any time of the day" rather theoretical. In a large part of the world, A1C is not available, and its cost is so high that it is meaningless to even discuss whether it should be given a priority over simple and inexpensive glucose measurements. This step would divide the world into two categories: developed societies in which diabetes diagnosis is made with A1C and less developed societies (between and within countries) in which diabetes diagnosis is made with plasma glucose: such a division should be avoided. It would add to the inequities in health and health care. CONCLUSIONS-There is no doubt that hyperglycemia is the biochemical hallmark of diabetes and is a prerequisite for diagnosis. In this respect, moving from blood glucose to A1C might sound like a sort of heresy. There is also no doubt that all epidemiological data based on blood glucose assessment might be considered less important if the disease were mainly diagnosed with A1C. This might create confusion, disappointment, anxiety, and concern in all who lived a glucocentric existence. Partly rewarding would be the fact that, in several clinical conditions, A1C could not be used and blood glucose assessment would remain the standard diagnostic procedure. In all other conditions (most subjects), A1C could become the reference method, provided that its assay be aligned to international standards. Also mandatory is that the cost of assay declines and becomes affordable in less developed societies. Longitudinal studies should also reassure us about the relative benignity of clinical conditions in which A1C is below the diagnostic threshold of 6.5% but FPG and/or 2-h OGTT PG are above the thresholds of 7 or 11 mmol/L, respectively. This is currently one of the most relevant worries related to potentially missing diagnosis. Glucose assessment is familiar and cheaper, but A1C seems to provide several advantages, especially in a scenario in which OGTT is rarely used and never repeated as a confirmatory testing. Perhaps accepting a double diagnostic approach in which both blood glucose and A1C do coexist as diagnostic tools is reasonable. In the meantime, epidemiological and clinical studies will hopefully provide further data to better understand whether the current recommendations to replace FPG with A1C are well founded. We agree that the research and debate on the pros and cons of using A1C versus glucose assay as a diagnostic tool for diabetes should continue in a constructive manner until a larger and truly evidencebased consensus is reached.
Traumatic Bladder Ruptures: A Ten-Year Review at a Level 1 Trauma Center

Bladder rupture occurs in only 1.6% of blunt abdominopelvic trauma cases. Although rare, bladder rupture can result in significant morbidity if undiagnosed or inappropriately managed. AUA Urotrauma Guidelines suggest that urethral catheter drainage is a standard of care for both extraperitoneal and intraperitoneal bladder rupture regardless of the need for surgical repair. However, no specific guidance is given regarding the length of catheterization. The present study seeks to summarize contemporary management of bladder trauma at our tertiary care center, assess the impact of length of catheterization on bladder injuries and complications, and develop a protocol for management of bladder injuries from time of injury to catheter removal. A retrospective review was performed on 34,413 blunt trauma cases to identify traumatic bladder ruptures over the past 10 years (January 2008–January 2018) at our tertiary care facility. Patient data were collected including age, gender, BMI, mechanism of injury, and type of injury. The primary treatment modality (surgical repair vs. catheter drainage only), length of catheterization, and post-injury complications were also assessed. Review of our institutional trauma database identified 44 patients with bladder trauma. Mean age was 41 years, mean BMI was 24.8 kg/m2, 95% were Caucasian, and 55% were female. Motor vehicle collision (MVC) was the most common mechanism, representing 45% of total injuries. Other mechanisms included falls (20%) and all-terrain vehicle (ATV) accidents (13.6%). 31 patients had extraperitoneal injury, and 13 were intraperitoneal. Pelvic fractures were present in 93%, and 39% had additional solid organ injuries. Formal cystogram was performed in 59% on presentation, and mean time to cystogram was 4 hours. Gross hematuria was noted in 95% of cases. Operative management was performed for all intraperitoneal injuries and 35.5% of extraperitoneal cases. Bladder closure in operative cases was typically performed in 2 layers with absorbable suture in a running fashion. The intraperitoneal and extraperitoneal injuries managed operatively were compared, and length of catheterization (28 d vs. 22 d, p=0.46), time from injury to normal fluorocystogram (19.8 d vs. 20.7 d, p=0.80), and time from injury to repair (4.3 vs. 60.5 h, p=0.23) were not statistically different between cohorts. Patients whose catheter remained in place for greater than 14 days had prolonged time to initial cystogram (26.6 d vs. 11.5 d) compared with those whose foley catheter was removed within 14 days. The complication rate was 21% for catheters left more than 14 days while patients whose catheter remained less than 14 days experienced no complications. The present study provides a 10-year retrospective review characterizing the presentation, management, and follow-up of bladder trauma patients at our level 1 trauma center. Based on our findings, we have developed an institutional protocol which now includes recommendations regarding length of catheterization after traumatic bladder rupture. By providing specific guidelines for initial follow-up cystogram and foley removal, we hope to decrease patient morbidity from prolonged catheterization. Further study will seek to allow multidisciplinary trauma teams to standardize management, streamline care, and minimize complications for patients presenting with traumatic bladder injuries.
Introduction

Urogenital tract injury occurs in approximately 10% of all abdominopelvic traumatic injuries, with bladder rupture representing just 1.6% of these cases [1,2]. Due to the structural protection from the bony pelvis, injury to the bladder is rare and usually associated with a high-impact injury [2,3].
Bladder rupture can be classified as either extraperitoneal (EP) or intraperitoneal (IP). EP ruptures are more common and usually result from forceful impact to the anterior bladder [2,3]. IP ruptures usually result from a rise in intravesicular pressure following an abdominopelvic impact that causes rupture at one of the weaker points of the bladder, such as the dome [2,3]. The clinical presentation of bladder trauma patients may vary based on injury severity, but most patients have gross hematuria, difficulty with or painful voiding, and suprapubic pain [3]. Pelvic fractures are very common in patients with bladder injury. Pelvic fractures have been found to be associated with increased morbidity and mortality in bladder trauma patients, and identification of a pelvic fracture should raise clinical suspicion to assess for urogenital injury [4]. Although bladder rupture is rare, it is associated with significant patient morbidity and a mortality rate of approximately 22% [5]. Studies assessing mortality trends in patients with bladder rupture have shown no improvement in the mortality rate over the last two decades [4]. The American Urological Association (AUA) has guidelines in place regarding the management of urogenital trauma [6]. The guidelines recommend formal retrograde cystography in stable patients with gross hematuria and/or a pelvic fracture, or in any other patient with signs and symptoms suspicious for bladder injury [6]. The guidelines state that urethral catheter drainage is the standard of care for both EP and IP bladder ruptures [6]. The AUA guidelines also recommend surgical repair for all IP bladder ruptures to prevent peritonitis following intraperitoneal exposure to bladder contents [6]. Although guidelines recommend catheterization for all patients with bladder rupture, no specific recommendations regarding length of catheterization are in place. Additionally, there are no recommendations on the ideal operative technique for management of intraperitoneal and/or complicated extraperitoneal bladder ruptures. The present study seeks to summarize the patients with bladder trauma at our center using 10 years of data. Herein, we assess the management of these patients, including the impact of length of catheterization on bladder injuries and associated complications.

Methods

Following IRB approval, we performed a retrospective review of 34,413 blunt trauma cases using a database provided by the trauma registry at our institution during the last ten years (January 2008-January 2018). Patients included in the study all had an ICD-9 diagnosis of bladder injury. Patients were excluded if they were less than 18 years old. Patient data collected included age, ethnicity, gender, BMI, mechanism of injury, type of injury, any associated additional injuries, and complications. We additionally assessed management of patients, including imaging studies performed, catheterization length, and surgical modalities used. A two-tailed t-test was performed in Microsoft Excel for comparison of means, with p < 0.05 considered significant.

Demographic Data. Chart review identified 44 total patients with a bladder injury in the last ten years. Of these patients, the mean age was 41.8 years, and the mean body mass index (BMI) was 24.8 kg/m2. Demographic data also showed that 55% identified as female, 45% identified as male, and 95% were of Caucasian ethnicity.
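As an aside on the analysis described in the Methods above, the two-tailed comparison of means is straightforward to reproduce outside a spreadsheet; the sketch below uses SciPy with made-up placeholder values rather than the study data.

```python
# Minimal sketch of the two-tailed t-test described in the Methods.
# The catheter dwell-time values below are placeholders, not the study data.
from scipy import stats

ip_catheter_days = [25, 30, 27, 31, 29]   # hypothetical intraperitoneal cohort
ep_catheter_days = [20, 24, 21, 23, 22]   # hypothetical extraperitoneal cohort

t_stat, p_value = stats.ttest_ind(ip_catheter_days, ep_catheter_days)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # difference is "significant" if p < 0.05
```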
Motor vehicle collisions (MVC) were the most common mechanism of injury, representing 45% of total cases. Other common mechanisms of injury were falls (20%) and ATV accidents (14%) (Table 1).

Characterization of Injuries. Of the 44 identified injuries, 31 cases were classified as EP injuries (70.1%), and 13 were classified as IP injuries (29.5%). Pelvic fractures were present in 93% of cases, and gross hematuria was present in 95%. Additional solid organ injuries were present in 39% of cases. The most commonly injured additional solid organ was the lung, followed by the spleen and the brain. Interestingly, 18% of all bladder trauma cases had a concomitant injury to a second genitourinary organ, most commonly the kidney, followed by the urethra.

Management. Formal retrograde cystography was performed in 59% of bladder trauma cases during the initial hospitalization. Of these patients, mean time from presentation at our facility to cystogram was 4 hours. All patients were catheterized following initial presentation. Of the extraperitoneal injury patients who were managed operatively, the indications included planned orthopedic open fixation with permanent hardware to be exposed to the area of injury (n = 8), presence of a bony spicule within the bladder (n = 3), concomitant bladder neck or rectal injury (n = 1), and initial concern for intraperitoneal injury (n = 1). In patients with catheter dwell time less than or equal to 14 days, the average time to fluorocystogram was 11.5 days. There were no complications in this cohort. In patients with catheter dwell time greater than 14 days, the average time to fluorocystogram was 26.6 days. Complications were present in 21% of these cases, including urinary tract infection, DVT, and gross hematuria with clot retention (Table 2). Mortality during initial hospitalization was noted in 3 patients (6.8%).

Discussion

Over the past ten years, nearly all bladder injuries identified were associated with both gross hematuria and pelvic fractures. This finding is in agreement with the AUA Urotrauma Guidelines, which recommend formal bladder imaging for all patients with both findings [4]. However, only 59% of these patients received appropriate bladder imaging on initial presentation in the Emergency Department (ED), indicating an opportunity for improvement. Prior studies have also demonstrated weak compliance with imaging recommendations of a CT cystogram or plain cystogram in patients with suspected bladder trauma. At a level 1 trauma center in Utah, only 24% of bladder injuries in a 15-year time span were diagnosed with cystogram or CT cystogram [7]. Additionally, they found that among bladder injury patients who only received standard CT imaging on presentation, 13% were missed or incorrectly diagnosed on initial presentation, and some cases were inappropriately explored operatively as a result [7]. Another study assessed patients with traumatic pelvic and acetabular fractures and found that only 47% of patients with pelvic or acetabular fractures who also had hematuria received a formal urologic evaluation on initial presentation [8]. These findings emphasize that although guidelines are in place for appropriate bladder imaging, they are not always followed. We additionally found that 39% of patients with bladder injuries had an additional solid organ injury.
In another study of outcomes following genitourinary injury in United States (US) military members, it was found that genitourinary injury patients had a high incidence of concurrent traumatic brain injury as well as trends between GU injury and PTSD [9]. This emphasizes the significant morbidity of bladder trauma and the need for thorough evaluation for any additional injuries in all bladder injury patients. Patients with catheterization longer than 14 days had a longer time to follow-up imaging as well as an increased complication rate. When assessing reasons for delayed catheter removal, it was often found that the delay was due to coordination of follow-up appointments rather than any medical indication for longer catheterization. The increased catheter dwell time and associated complications emphasize the need for sooner follow-up and imaging to allow catheter removal as soon as clinically indicated, minimizing the risk of complications in these patients. Based on the findings of this study, our institution generated a standardized treatment algorithm which we plan to integrate into our institutional trauma protocols (Figure 1).

[Figure 1: Revised institutional bladder trauma protocol, including cystogram or foley removal in 7-10 days** and cystogram in 10-14 days**. *Surgical repair is performed in all intraperitoneal injuries and in extraperitoneal injuries involving the bladder neck, concomitant rectal or vaginal injury, exposure to orthopedic hardware (i.e., pelvic fixation), or a nonhealing state after management with catheter drainage. **Revisions to our institutional protocol reflect standardization of length of catheterization based on the finding that catheterization of less than 14 days resulted in decreased morbidity. Previously, there was no institutional consensus or protocol on length of catheterization, and it was highly variable.]

House staff members covering the trauma service are provided with a written handbook containing all pertinent solid organ protocols to promote adherence to established protocols and homogeneity in treatment for all trauma patients in our high-volume, level 1 trauma center. To date, length of catheterization after bladder injury has not been addressed at an institutional or national level, as evidenced by the absence of specific AUA Urotrauma recommendations. Given the results of our study, we aim to limit catheterization to less than 14 days for all bladder injuries, given the higher complication rate observed for patients with catheter dwell times exceeding this threshold.

Implications

The present study provides a 10-year retrospective review characterizing the presentation, management, and follow-up of bladder trauma patients at a tertiary care facility. Following this investigation, we introduced an institutional protocol for the management of bladder trauma patients, with the aim of increasing adherence to the Urotrauma Guidelines and strictly defining the length of catheterization prior to the initial follow-up cystogram. In patients with both gross hematuria and pelvic fracture, formal retrograde cystography should be performed in the emergency department following the primary trauma survey. If bladder injury is identified, urology consultation should be obtained promptly to guide management. Bladder catheterization should be performed, and if the injury is IP, operative management will follow.
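To make the branching of the revised protocol easier to scan, the rough sketch below encodes its decision logic; the function and its arguments are our own illustration, the assignment of the 7-10 versus 10-14 day windows to the two arms is our reading of Figure 1 rather than a statement from the protocol itself, and clinical judgement naturally overrides any such rule set.

```python
def bladder_trauma_plan(intraperitoneal, bladder_neck_injury=False,
                        rectal_or_vaginal_injury=False,
                        exposed_orthopedic_hardware=False,
                        nonhealing_on_drainage=False):
    """Illustrative (non-clinical) encoding of the revised protocol in Figure 1."""
    needs_repair = (intraperitoneal or bladder_neck_injury or rectal_or_vaginal_injury
                    or exposed_orthopedic_hardware or nonhealing_on_drainage)
    if needs_repair:
        # Operative repair plus catheter drainage.
        # Assumption: this arm uses the 7-10 day cystogram / foley-removal window.
        return {"management": "operative repair + catheter drainage",
                "followup_window_days": (7, 10)}
    # Uncomplicated extraperitoneal rupture: catheter drainage alone.
    # Assumption: this arm uses the 10-14 day cystogram window.
    return {"management": "catheter drainage only",
            "followup_window_days": (10, 14)}


print(bladder_trauma_plan(intraperitoneal=False, exposed_orthopedic_hardware=True))
```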
When possible, follow-up cystogram and urinary catheter removal should be performed within 10-14 days to limit the risk of complications in those patients who have a normal initial follow-up cystogram. Further study will seek to allow multidisciplinary trauma teams to standardize management, streamline care, and minimize complications for patients presenting with traumatic bladder injuries.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

An abstract representing some of the original work in this manuscript was accepted as a podium presentation at the 76th Annual Meeting of the Mid-Atlantic Section of the American Urological Association in Washington, DC.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Supplementary Materials

The supplementary data file is a table listing the demographic and clinical data for bladder trauma patients in a deidentified manner. Demographic data include age, gender, ethnicity, and BMI. The clinical variables include date and time of initial injury, mechanism of injury, concomitant injuries, and imaging studies. Additional data regarding length of catheterization, complications, and follow-up studies are also included. (Supplementary Materials)
Never and under cervical cancer screening in Switzerland and Belgium: trends and inequalities

Background Research on inequalities in cervical cancer screening (CCS) participation has overlooked the distinction between 'never-' and 'under-screeners' while different socioeconomic and demographic determinants may underlie 'non-' and 'under-' screening participation. This study examines socioeconomic and demographic inequalities in never and under CCS participation. We compare cross-national prevalence and trends among these two groups in Switzerland and Belgium, two countries with similar opportunistic CCS strategy but different healthcare systems. Methods Data on 38,806 women aged 20–70 from the Swiss Health Interview Survey (1992–2012) and 19,019 women aged 25–64 from the Belgian Health Interview Survey (1997–2013), both population-based cross-sectional nationally representative surveys, was analysed. Weighted adjusted prevalence ratios were estimated with multivariate Poisson regressions. Results Over the studied period, never screening prevalence was about 15% in both Switzerland and Belgium and under screening prevalence about 14.0%. Socioeconomic gradients were found among both never- and under-screeners. Higher income women had lower never and under screening prevalence in Switzerland and a similar gradient in education was observed in Belgium. Importantly, distinct socioeconomic and demographic determinants were found to underlie never and under screening participation. Never screening was significantly higher among foreign nationals in both countries and this association was not observed in under screening. Never screening prevalence was lower among older age groups, while under screening increased with older age. Over time, age inequalities diminished among never- and under-screeners in Switzerland while educational inequalities increased among never-screeners in Belgium. Conclusion Findings revealed that determinants of screening inequalities differed among never- and under-screeners and hence these should be addressed with different public health strategies. Crucially, socioeconomic and demographic inequalities were more pronounced among never-screeners who appeared to face more structural and persistent inequalities. Differences between the two countries should also be noted. The more liberal-type Swiss healthcare system appeared to shape income-related screening inequalities, while education appeared to be a stronger determinant of never- and under-screening in Belgium.

Introduction

Cervical cancer (CC) ranks fourth worldwide for both incidence and mortality [1]. Europe's overall incidence rate is 11.2 per 100,000 women per year, in age-standardised rate by world population, and Switzerland and Belgium's incidence rates are lower than the European average with 3.8 and 7.8 per 100,000, respectively [1]. CC is the cancer that can most effectively be prevented by screening. It was shown that cervical cancer screening (CCS), and particularly organised population-based CCS, reduces both incidence and mortality [2][3][4]. CCS may have reduced CC incidence and mortality by 80% in different contexts [5]. The beneficial effects of CCS are reflected in the low CC incidence and mortality rates of countries which introduced effective CCS. For example, Western Europe has an average CC incidence rate of 6.8 per 100,000 women, whereas that of Central and Eastern Europe, characterised by lower screening coverage, is 16.0 per 100,000 [6].
Hence, European and international guidelines recommend that CCS should be organised and population-based [5,7]. Socioeconomic and demographic factors such as education, income, age, nationality and marital status were shown to be associated with adherence to CCS, and such disparities have persisted over time [8][9][10][11][12]. However, to our knowledge, few studies analysed the distinct social and demographic characteristics of 'never-' and 'under-screeners' (those who never screened and those who did screen but not within the 3-year recommendation period), and previous research on CCS in Switzerland and Belgium focused on women who screened within the 3-year recommended period [11,13]. This study addresses 'never' and 'under' CCS participation in Switzerland and Belgium using cross-sectional data from the Swiss and Belgian national health interview surveys spanning from 1992 to 2013. Comparing two countries with similar CCS strategy but different healthcare systems appeared to be particularly relevant. That is, neither country has implemented organised CCS programmes (although Belgium's Flemish region started a CCS programme in 2013 [14]) and both hence rely on opportunistic CCS. As opposed to organised CCS, opportunistic CCS leans on the individual's awareness and initiative to screen and on the doctor's screening recommendation to their patients. In both countries, it is mainly the gynaecologist who recommends a CCS to women during routine examinations [15,16]. A cross-national perspective on CCS is also relevant since most cancer screening studies tended to focus on specific countries [11,14,17]. The present study distinguishes between 'never-' and 'under-screeners' and hypothesises that these two groups are affected by different socioeconomic determinants of cancer screening participation. In contexts of opportunistic CCS, we expect to find never and under screening inequalities to persist over time in both Switzerland and Belgium. We also expect to find different patterns and trends of inequalities in CCS participation in the two countries since these vary across contexts and are embedded in health systems. In sum, this study aims to compare the prevalence and trends of never and under CCS in Switzerland and Belgium and investigate socioeconomic and demographic inequalities and the trend of those inequalities over time.

Methods

Our study is based on data collected by the 1992-2012 Swiss Health Interview Survey (SHIS) and the 1997-2013 Belgian Health Interview Survey (BHIS), both nationally representative cross-sectional surveys comprising five waves and based on a stratified random selection of residents older than 14 years of age. The former was implemented in 1992, 1997, 2002, 2007 and 2012, and the latter in 1997, 2001, 2004, 2008 and 2013. The SHIS study sample included women aged 20 to 70 years old (N = 38,806) and that of the BHIS included women aged 25 to 64 years old (N = 19,019), based on each country's CCS age recommendation. After excluding respondents who had missing data on the outcome variable, predictors of interest and covariates, or who had received cancer treatment/diagnosis in the past 12 months, 31,800 women were included in the SHIS final sample, and 9442 in the BHIS sample.

National contexts

The Swiss Law on Health Insurance has mandated private health insurers to reimburse one CCS every 3 years for women aged 18 to 69 since 1996, and the Swiss medical guidelines advised performing the Pap smear test every 3 years on women aged 21-70 years [18].
The Belgian cervical cancer screening policy followed the European guidelines and recommended one CCS every 3 years to women aged 25-64 [14].

Dependent and independent variables

Both the SHIS and the BHIS asked women if they 'ever had a Pap smear' (yes, no) and, if they answered 'yes', when was the last time they had it. The SHIS asked for the month and year of the last Pap smear, while the BHIS asked if the Pap smear was undertaken 'within the past 12 months, more than 1 year but not more than 2 years ago, more than 2 years but not more than 3 years ago, more than 3 years but not more than 5 years ago, or not within the past 5 years'. We computed two binary dependent variables to analyse women who 'never screened' (0 = ever screened, 1 = never screened), and women who 'under-screened' (0 = screened within the past 3 years, 1 = screened more than 3 years ago). Predictors of interest and covariates were selected based on their potential association with CCS [9,11,16,19] and their availability in both the SHIS and BHIS. The following predictors of interest were analysed: education (primary, upper secondary, tertiary), monthly household income (1st to 5th quintile), employment status (employed, non-employed), partnership status (living, not living with a partner/spouse), nationality (national citizen, foreign national), area of residence (urban, rural), and age ranges (20-29, 30-39, …). Educational levels followed the International Standard Classification of Education 2011 [20]. Household income was weighted according to the OECD-modified scale in both the SHIS and BHIS (based on the number of adults and children living in the household). Women who were unemployed, at home, retired or otherwise out of the labour force were all grouped in the non-employed category. Different age ranges were applied to the SHIS and BHIS samples since CCS recommendations differed in the two countries (Switzerland 20 to 70 and Belgium 25 to 64 years old). To control for potential associations with CCS participation, we included the following covariates in our analysis: self-rated health (very good, good, fair, bad, very bad), self-reported body mass index (underweight < 18.5 kg/m2, normal weight 18.5 to < 25, overweight and obese ≥ 25), a doctor visit (general practitioner or any specialist) in the last 12 months (yes, no), and currently smoking (yes, no). In the BHIS, the 'doctor visit in the last 12 months' included a dentist visit, whereas it did not in the SHIS.

Statistical analysis

In order to evaluate inequalities in never and under CCS, adjusted prevalence ratios (APR) with 95% confidence intervals (CI) were estimated with Poisson regression models and robust variance estimators. Such models were shown to be adequate to analyse binary outcomes, particularly with cross-sectional data, and easier to interpret and communicate with prevalence terms as the measure of association [21,22]. The two dependent variables were analysed in separate models and treated as binary variables. Models presented in Table 2 and Tables S.3a and S.3b (supplementary materials) were adjusted for all the independent variables mentioned above. Models presented in Table 2 analysed data from pooled waves and were also adjusted for time (survey wave variable). In order to evaluate the potential time trend of our predictors of interest, we performed one separate multivariate model for each of these with an interaction term between the predictor and the survey wave variable, along with all independent variables.
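The estimation strategy just described can be sketched in a few lines of Python; the snippet below is not the authors' code, the data file and column names are hypothetical, and the survey design is simplified to plain frequency weights.

```python
# Minimal sketch of adjusted prevalence ratios from a Poisson model with
# robust (sandwich) variance, in the spirit of the analysis described above.
# Column names and the input file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("shis_women.csv")  # hypothetical analysis file

model = smf.glm(
    "never_screened ~ C(education) + C(income_q) + C(employed) + C(partner) "
    "+ C(nationality) + C(urban) + C(age_group) + C(wave) "
    "+ C(self_rated_health) + C(bmi_cat) + C(doctor_visit) + C(smoker)",
    data=df,
    family=sm.families.Poisson(),
    freq_weights=df["survey_weight"],   # crude stand-in for the survey weighting
).fit(cov_type="HC1")                    # robust variance estimator
# A time-trend model would add an interaction, e.g. "+ C(education):C(wave)".

apr = np.exp(model.params)               # exponentiated coefficients = APRs
ci = np.exp(model.conf_int())
print(pd.concat([apr.rename("APR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

Exponentiating the Poisson coefficients yields adjusted prevalence ratios; the robust covariance guards against misspecification of the Poisson variance for a binary outcome.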
The P-values of the interaction terms for time trend were reported in Table 2. Descriptive statistics and regression analyses were weighted for survey sampling and performed with SPSS 25 and STATA 14. SHIS data were also weighted for non-response bias, as detailed elsewhere [23]. Collinearity between independent variables was tested with variance inflation factors and did not reveal any potential collinearity. SHIS and BHIS analyses were performed separately by the authors of this study and results were subsequently compared and discussed.

Participants' characteristics

The Swiss and Belgian samples are summarised in Table 1. Some differences between the two samples were partly due to the different age ranges applied; for example, in the Swiss sample, fewer women achieved primary and tertiary education and fewer women were in the first and second household income quintiles compared to the Belgian sample. In the Swiss sample, there was a higher proportion of underweight women and foreign nationals, and a lower proportion of women who visited a doctor in the last 12 months, compared to the Belgian sample.

Prevalence and trends in never and under cervical cancer screening

Over the studied period, never CCS prevalence was 15.8% in Switzerland and 15.0% in Belgium (Table 1). The prevalence increased in Switzerland by 6% (APR 1.06, 95%CI 1.04-1.09) and decreased in Belgium by 12% on average per survey wave (APR 0.88, 95%CI 0.84-0.92) (Table 2). Under CCS prevalence was 13.0% in Switzerland and 14.7% in Belgium. It increased in Belgium by 7% (APR 1.07, 95%CI 1.02-1.13) on average per survey wave and did not show a significant tendency in Switzerland throughout the studied period (APR 1.01, 95%CI 0.98-1.04). Figures 1 and 2 show the prevalence of never and under screening over time for both Switzerland and Belgium.

Inequalities and trends in never and under cervical cancer screening

Never screening. In both Switzerland and Belgium, never screening prevalence was lower among women with higher levels of education, higher household incomes, living in couple and from older age groups (Table 2). Prevalence was higher among foreign nationals. In Switzerland, never screening prevalence followed a gradient from the 3rd to 5th household income quintiles, i.e. the higher the income, the fewer women never screened. Never screening was higher among women living in a rural area. As shown by the time interaction term, prevalence significantly fluctuated over the 5 waves for partnership status, however no clear trend was observed (Table S.3a). The time interaction term for age was also significant and the difference between the APRs of the 20-29 year-old women and older age groups reduced from 1992 to 2012. In Belgium, never screening prevalences followed an education and income gradient as these diminished among more educated and higher income women (Table 2). Nationality and area of residence had a significant time interaction term but did not show a clear trend (Table S.3a). Education inequality also varied significantly throughout the period; APRs were significant in 1997, 2008 and 2013 and the differences between educational strata slightly increased in 2013. Non-employment also had a significant time trend. At the beginning of the period (1997-2001), non-employed women had a lower never screening prevalence compared to employed women (although APRs were not significant, see Table S.3a, supplementary materials), and this tendency reversed from 2004 to 2013, with non-employment increasing never screening prevalence.
Under screening. In both Switzerland and Belgium, under screening prevalence was significantly lower in the 5th household income quintile, among women with upper secondary education, and among women living in couple (Table 2). Under screening prevalence increased with age. In Switzerland, under screening prevalence significantly decreased following a gradient throughout the 3rd, 4th and 5th household income quintiles. Under screening prevalence increased with rural area of residence. Time fluctuation for household income and age groups was significant. Household income did not show a clear trend, and age groups showed a tendency as inequalities between women aged 20-29 and older age groups reduced over time (see Table S.3b, supplementary materials). In Belgium, under screening prevalence followed a gradient in education and significantly decreased from upper secondary to tertiary education level. Predictors of interest did not change over the studied period since time interactions were not significant.

Discussion

This study examined trends of never and under CCS prevalence, and socioeconomic and demographic inequalities, using five waves of the SHIS and BHIS. Over the studied period, both countries had similar prevalences, about 15% of never screening and 14% of under screening. Although these levels of 'never' and 'under' CCS participation are relatively low compared to other European countries [24], socioeconomic inequalities were observed as women with higher education and income showed lower never screening prevalence in both countries. These inequalities resembled those revealed in other screening tests in other countries which also relied on opportunistic screening [8,25,26]. The higher participation in CCS of highly educated women could be explained by their higher 'health literacy', a more future-oriented attitude and better risk perception, which favour more prevention-focused health decisions [27]. Additionally, negative attitudes towards cancer screening among lower socioeconomic participants could also be mediating the association between income and screening participation [17]. Belgium showed a gradient of screening inequalities in education and income levels among never-screeners, while this gradient was only evident for income in Switzerland. We may advance that inequalities were shaped by economic determinants in the more liberal-type Swiss healthcare system. Although health insurance is compulsory for all Swiss residents, patients' healthcare cost participation and high out-of-pocket payments cause high healthcare forgoing, particularly among those with lower incomes [28]. The Swiss healthcare system's shortcomings in addressing health inequalities and implementing more health prevention were already identified and partly attributed to the fragmented nature of the Swiss public health system (with high autonomy of the cantons), which remains an obstacle to coordinating healthcare at the national level [29].

[Notes to Table 2: APR = adjusted prevalence ratio. APRs are weighted for sampling strategy in the SHIS and BHIS, and also for non-response in the SHIS. Variables used for adjustment: self-rated health, body mass index, doctor visit in the last 12 months, smoking. * p < 0.05; ** p < 0.01; *** p < 0.001. a SHIS 1992-2012 and BHIS 1997-2013. b P-values for trend were estimated separately for each predictor of interest with multivariate models including the interaction term between the predictor of interest and the survey wave variable.]
This income gradient of screening inequalities persisted among under-screeners in Switzerland, suggesting that lower income women might be forgoing preventive healthcare. As a qualitative study in Switzerland pointed out, women who faced financial hardship either perceived the screening cost to be an 'issue' or an 'unnecessary expense', particularly if they considered themselves to be 'in good health' [30]. In both countries, education and income inequalities seemed to be less pronounced among under-screeners compared to never-screeners. We hypothesise that women who screened at least once were more acquainted with prevention and screening and hence less affected by socioeconomic barriers to screening. Among under-screeners, more practical issues might constitute barriers to screening, such as scheduling a doctor's appointment during the working week. Conversely, among never-screeners, socioeconomic barriers to undertaking a very first screening appeared to be stronger and more persistent over time. Older age reduced never screening prevalence in both countries, and so did living in couple. Women who are in a partnership, and older women, are more likely to visit a gynaecologist, either to conceive or for contraception, and we may expect them to undertake CCS at least once as they enter their reproductive period [11]. On the other hand, our results interestingly revealed that under screening increased among older women in both countries. We may advance that, as they become older, women start neglecting their routine screening, particularly after repeated negative screens, or their doctors may insist less on screening within the recommended time. Qualitative studies suggested that CCS participation declines as women enter a life stage in which sexuality and pregnancy are less central and visits to the gynaecologist less frequent [30]. Older women are also more likely to cite lower levels of concern with CC (or lower perceived risk) and more likely to express embarrassment and fear of pain [31]. Based on our results, we suggest that never- and under-screeners should be addressed with different strategies. A study on CCS uptake in the United Kingdom stressed that policy interventions should consider the heterogeneity of motivations and attitudes among CCS non-participants. It showed that 51% of CCS non-participants 'intended to screen but were overdue', since they failed to translate intention into action, and 28% were unaware of screening [32]. Younger and more disadvantaged women were more likely to be found among those groups. We suggest that measures to reduce inequalities in CCS should focus on never-screeners, with an effort to tackle issues such as screening awareness among the most disadvantaged, while interventions among under-screeners should pay attention to the intention-behaviour gap in order to improve participation (for example, through reminders), access and practical issues (such as scheduling a doctor's appointment), although inequalities among under-screeners should not be neglected. In both countries, screening inequality between nationals and foreigners was found among never-screeners, although not among under-screeners, which supports our hypothesis that sociodemographic inequalities were stronger among never-screeners. Living in a rural area increased both never and under screening prevalence in Switzerland, while such inequalities were not found in Belgium.
This is consistent with studies of other cancer screening tests in Switzerland [33], which pointed out that women living in a rural area might under-screen while their urban counterparts are more likely to over-screen [34,35]. Over the studied period, never CCS prevalence increased by 6% on average per survey wave in Switzerland while it decreased by 12% per survey wave in Belgium. Under screening did not show a clear tendency in Switzerland and it increased by 7% per survey wave in Belgium. In Switzerland, Burton-Jeangros et al. [11] observed a slight decrease of CCS prevalence, based on the same dataset as the present study. Our results suggested that this decrease stemmed from a slight increase of never screening, rather than under screening, over the same period. In Belgium, an invitation programme was in place in Flanders from the mid-1990s until the early 2000s and could have contributed to reducing never screening. Nevertheless, no impact of sending invitations on screening uptake was found [14]. Inequalities between age groups among both never-screeners and under-screeners diminished in Switzerland throughout the studied period. This may have been caused by a cohort effect, i.e. by a generation of women who started to screen more at a younger age than the previous generation, and continued to do so as they became older. In Belgium, we observed increasing education-based inequalities among never-screeners over time, which were explained in previous studies by a combination of under-screening among women with lower education and over-screening among women with higher education in a context of opportunistic screening [13]. Reliance on opportunistic screening in Switzerland and Belgium may have contributed to the persistence of the CCS inequalities observed in our study. Studies showed that opportunistic screening entailed higher screening inequalities, inconsistent quality and inefficiencies such as over-screening [8,36-38]. As a Swiss-Belgian comparative study on screening overuse showed, over-screening, although declining, is persistent in both countries [35]. An organised approach to CCS, with a quality assurance framework and strategies to improve never- and under-screeners' participation, may minimise the adverse effects of unequal screening and maximise benefits from a public health and cost perspective [37,39]. In Switzerland, a nation-wide CCS programme would help tackle screening inequalities in the context of a fragmented healthcare system which contributes to reproducing health inequalities. In Belgium, the organised CCS programme which was launched in Flanders should be extended nationwide to avoid reinforcing regional inequalities. Limitations of our study are worth noting. The SHIS and BHIS are cross-sectional surveys, hence our results do not measure whether individual respondents complied or not across time with the 3-year recommended screening interval. However, from an aggregated (population-level) perspective, our 'under-screening' variable allowed us to account for the women who screened 'more than 3 years ago' (the 'under-screeners'), as opposed to those who screened within the recommended interval. We could not control for the related preventive practices of HPV vaccination and HPV testing in our models since this information was not available in the SHIS and the BHIS (only the BHIS 2013 wave collected information on HPV vaccine uptake).
HPV vaccination programmes were implemented among teenagers and young women in 2008 and 2010 in Switzerland and Belgium, respectively (2011 in the French-speaking part of Belgium) [40,41]. Consequently, these could only have affected the CCS practices of the youngest cohort of the SHIS' and BHIS' last wave (notably, only 25 women aged 25-49 had received the vaccine in the BHIS 2013 wave) and, hence, influenced our results in a negligible way. Regarding HPV testing, this test has a 5-year recommended screening interval, which is longer than that of the Pap smear. GPs and gynaecologists could have offered this test as an alternative to the Pap smear, which may have affected our under-screening measure. Nonetheless, CCS recommendation guidelines were based on the Pap smear as primary screening in both countries during the studied period [14,18]. Gynaecologists implemented the Pap smear as part of the routine check-up in both countries, and HPV testing was not reimbursed by health insurance in Switzerland, and only partly in Belgium [14,15,42-44]. The effects of HPV vaccination and testing on CCS practices are difficult to evaluate. However, our data did not show an increase of under screening, which would have suggested that new preventive techniques had supplanted the Pap smear since 2008 in either country. We cannot account for the complexity of national contexts and healthcare systems. The healthcare system design (public and private mix), levels of public health expenditure, density of general practitioners, payment schemes for general practitioners and specialists, insurance coverage, amount of private out-of-pocket payments, accessibility of care, as well as cultural and environmental factors, affect screening participation and inequalities and may produce confounding despite adjustments [12,45,46]. Self-reported CCS data may be affected by recall bias. Studies showed that women fairly correctly reported CCS uptake; however, they tend to report their last CCS as more recent than it actually was (a phenomenon described as 'telescoping'), which may cause over-reporting of screening within a specific timeframe and hence underestimation of under-screening [47,48]. Women may also over-report CCS by recalling a routine gynaecologic exam without CCS as including a CCS [47]. Additionally, social desirability bias may lead to underestimates of never and under participation or of time since the last screening test [47], and response bias might also affect the data since women with a higher education level tend to report screening participation more often [49]. In spite of the SHIS and BHIS representativeness limitations, the use of the weighting factors allowed for inference from the sample to the total population of Switzerland and Belgium. Finally, further research is needed to inquire into the motivations and attitudes which lie behind 'never' or 'under' screening participation in order to design policy interventions.

Conclusion

Screening inequalities among never- and under-screeners persisted over time in both Switzerland and Belgium, and socioeconomic and demographic determinants of screening inequalities differed between these groups. Inequalities appeared to be more pronounced amongst never-screeners compared to under-screeners, hence the results stressed that the two groups should be addressed with different strategies. Differences were highlighted between the two countries.
Inequalities appeared to be shaped by economic determinants in the more liberal-type Swiss healthcare system, as shown by the income gradient among never- and under-screeners, while inequalities followed an education gradient in Belgium. Finally, both Switzerland and Belgium could benefit from an organised approach to CCS in order to mitigate the screening inequalities observed in our study and improve efficiency from a public health perspective.

Additional file 1: Table S.3a. Adjusted prevalence ratios (APR) for never CCS among eligible women in Switzerland and Belgium a.
Polynomial decompositions with invariance and positivity inspired by tensors

We present a framework to decompose real multivariate polynomials while preserving invariance and positivity. This framework has been recently introduced for tensor decompositions, in particular for quantum many-body systems. Here we transfer results about decomposition structures, invariance under permutations of variables, positivity, rank inequalities and separations, approximations, and undecidability to real polynomials. Specifically, we define invariant decompositions of polynomials and characterize which polynomials admit such decompositions. We then include positivity: We define invariant separable and sum-of-squares decompositions, and characterize the polynomials similarly. We provide inequalities and separations between the ranks of the decompositions, and show that the separations are not robust with respect to approximations. For cyclically invariant decompositions, we show that it is undecidable whether the polynomial is nonnegative or sum-of-squares for all system sizes. Our work sheds new light on polynomials by putting them on an equal footing with tensors, and opens the door to extending this framework to other tensor product structures.

Introduction

In a theory, the description of the elementary constituents is as important as the description of their composition. In quantum theory, a few postulates describe the behaviour of individual quantum systems, and one postulate describes how to compose them (mathematically, with the tensor product). Another example are multivariate polynomials, which can be constructed as the composition of the spaces of univariate polynomials with the tensor product. Both aspects are crucial, the elementary constituents and the composition, and it is a misconception of reductionism to overestimate the importance of individual systems. The opposite of composing is decomposing: expressing an object in terms of elementary constituents. This can be seen as an inverse problem of the structure provided by the composition, and is generally a very rich problem. On many occasions, we want a decomposition that reflects the properties of the global object, that is, that provides a "certificate" of a global property in the local objects. For example, combining identical objects gives rise to a symmetric global object, or a sum of positive elementary constituents gives rise to a positive global object; the latter is particularly important in quantum theory, where entangled objects are those not admitting a certain kind of positive decomposition. Which properties of the global object can be "witnessed" by the local objects? Answering this question amounts to solving the inverse problem, as it requires characterising which global properties can be transferred to the local objects, and how. Recently, a framework to describe decompositions in tensor product spaces has been introduced [8], focusing on two aspects of this characterisation. The first is invariance, namely, if the global object is invariant under the exchange of some elementary constituents, can this be reflected in the decomposition? Ref. [8] clarified what it means 'to be reflected in the decomposition' by defining an 'invariant decomposition', and gave sufficient conditions for the transfer of invariance from the global to the local objects. The second aspect is positivity, namely, if the global object has some positivity property (is in some cone), can this be reflected in the decomposition? Ref.
[8] also studied this question, in combination with the invariance. In addition, this framework was extended to the approximate case, where the decomposition is content with almost realising the global object, often giving rise to big savings in the cost of the decomposition [9]. This framework is inspired by tensor decompositions, in particular by the description of quantum many-body systems. Yet, it applies to all tensor product structures. In this paper, we apply it to real multivariate polynomials. These are objects in the tensor product space of polynomials in each of their variables,

P := R[x^{[0]}, x^{[1]}, ..., x^{[n]}] ≅ R[x^{[0]}] ⊗ R[x^{[1]}] ⊗ ⋯ ⊗ R[x^{[n]}],

where ⊗ denotes the algebraic tensor product and x^{[i]} denotes a collection of variables x^{[i]}_1, ..., x^{[i]}_{m_i}. In other words, every polynomial p ∈ P can be expressed as a finite sum of "elementary constituents" p^{[0]}(x^{[0]}) · p^{[1]}(x^{[1]}) ⋯ p^{[n]}(x^{[n]}), where every p^{[i]} is itself a polynomial that only depends on the variables x^{[i]}. We consider two questions: (a) If p is symmetric under the exchange of, say, systems [i] and [j], can this symmetry be reflected in the decomposition? (b) If p is positive (for some notion of positivity), can this positivity be reflected in the decomposition? Our framework solves these two questions in the following way, in particular applied to polynomials: (a) The summation structure is described by a weighted simplicial complex Ω, so that every system i is associated to a vertex of Ω, and every summation index to a facet of Ω. (b) By definition, an (Ω, G)-decomposition of a polynomial contains a certificate of invariance under the group G. We characterise which G-invariant polynomials admit an (Ω, G)-decomposition. (c) By definition, a separable or sum-of-squares (sos) (Ω, G)-decomposition contains a certificate of invariance and of membership in the separable or sos cone, respectively. We characterise which separable or sos polynomials admit such decompositions. To be specific, this framework is inspired by decompositions of quantum many-body systems provided by tensor networks [18]. The latter are prominent in quantum information theory and condensed matter physics (and recently machine learning), and favour certain arrangements of the summation indices; for example, the indices can be arranged in a circle:

p = \sum_{\alpha_0, ..., \alpha_n = 1}^{r} p^{[0]}_{\alpha_0, \alpha_1}(x^{[0]}) \cdot p^{[1]}_{\alpha_1, \alpha_2}(x^{[1]}) \cdots p^{[n]}_{\alpha_n, \alpha_0}(x^{[n]}).    (1)

(This arrangement is motivated by the structure of physical interactions.) Note that we have already written the previous equation for a polynomial p, as both quantum many-body systems and polynomials compose with the tensor product. From a mathematical perspective, the natural decomposition is the one with a single index, namely

p = \sum_{\alpha = 1}^{r} p^{[0]}_{\alpha}(x^{[0]}) \cdot p^{[1]}_{\alpha}(x^{[1]}) \cdots p^{[n]}_{\alpha}(x^{[n]}).    (2)

In both cases, the smallest integer r measures the cost of decomposing the polynomial; the one of (2) is called the tensor rank. Our framework puts both decompositions under one umbrella: in Equation (1), the weighted simplicial complex is the circle graph, and in (2), it is the full simplex (cf. (a)). Symmetries are central in physics, both conceptually and practically, and it is impossible to overstate their importance in mathematics. Our framework models symmetries as follows: we have a group G acting on the set {0, ..., n}, and the induced action on the polynomial space P is obtained by permuting system [i] to [gi], g : x^{[i]} ↦ gx^{[i]} := x^{[gi]}.
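As a small worked illustration of our own (it does not appear in [8] or in the text above): already for n = 1 with single variables x^{[0]} and x^{[1]}, a decomposition of the form (2) can make an exchange symmetry visible.

```latex
% Illustrative example (ours): n = 1, single variables x^{[0]}, x^{[1]}.
% The polynomial admits a decomposition of the form (2) with r = 2,
% and it is invariant under the swap g(x^{[0]}) = x^{[1]} of C_2 = S_2.
\[
  p(x^{[0]}, x^{[1]})
    = \underbrace{x^{[0]} \cdot x^{[1]}}_{\alpha = 1}
    + \underbrace{\bigl(x^{[0]}\bigr)^{2} \cdot \bigl(x^{[1]}\bigr)^{2}}_{\alpha = 2},
  \qquad
  (gp)(x^{[0]}, x^{[1]}) = p(x^{[1]}, x^{[0]}) = p(x^{[0]}, x^{[1]}).
\]
```

Note that the two local factors in each summand coincide, which is precisely the kind of certificate of invariance formalised next.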
A polynomial is G-invariant if it is invariant with respect to all such permutations g ∈ G, and we want to make this invariance explicit in the decomposition of p. For example, the decomposition

p = \sum_{\alpha_0, ..., \alpha_n = 1}^{r} p_{\alpha_0, \alpha_1}(x^{[0]}) \cdot p_{\alpha_1, \alpha_2}(x^{[1]}) \cdots p_{\alpha_n, \alpha_0}(x^{[n]})

makes explicit that p is invariant under the cyclic group, x^{[i]} ↦ x^{[i+1]}. (Note that there are no superscripts [i], in contrast to Equation (1).) And

p = \sum_{\alpha = 1}^{r} p_{\alpha}(x^{[0]}) \cdot p_{\alpha}(x^{[1]}) \cdots p_{\alpha}(x^{[n]})

makes explicit that p is invariant under the full symmetry group. The former is known in quantum physics as the translationally invariant matrix product operator form (and the minimal number r as the t.i. operator Schmidt rank [10]), and the latter as the symmetric tensor decomposition (and the minimal r as the symmetric tensor rank [5,20]). In our framework, the former corresponds to the circle with the cyclic group, and the latter to the full simplex with the full permutation group (cf. (b)). Finally, if p is in a cone (such as the cone of sum-of-squares (sos) polynomials or the cone of nonnegative polynomials), we want a certificate of this fact (cf. (c)). In quantum physics, a quantum state is positive semidefinite and the certificate is called a purification. In probabilistic modelling, the certificate of a probability distribution is a nonnegative decomposition. In real algebraic geometry, the natural certificate of positivity of a polynomial is being a sum of squares. In all of these cases, witnessing the positivity of a global element is a central problem with many ramifications. Note that decompositions of tensors and polynomials have already been studied a lot from different perspectives. Symmetries and positivity have also been considered in combination, but the arising decompositions are by far not as clean as the separate decompositions. To give a short overview, and thus also motivate our combined approach, let us explain some of the existing decompositions, and point out why they are not directly related to our approach. The Waring decomposition is a decomposition of polynomials which is also inspired by tensors. Let p ∈ R[x_1, ..., x_n] be of degree d. The Waring rank of p is defined as the minimum r such that

p = \sum_{\alpha = 1}^{r} \lambda_{\alpha} \, \ell_{\alpha}(x_1, ..., x_n)^{d} for some \lambda_{\alpha} ∈ R,

where ℓ_α(x_1, ..., x_n) = a_{α,1} x_1 + ... + a_{α,n} x_n is a linear form. The Waring rank is equivalent to the symmetric tensor rank by applying the correspondence between symmetric tensors T ∈ (C^d)^{⊗ n} and homogeneous polynomials of degree n. Yet, the Waring decomposition cannot exhibit any additional symmetry of the polynomial, since the corresponding tensor is already fully symmetric for any polynomial. For generalizations of the Waring problem to polynomials instead of linear forms, we refer to [13]. Another related decomposition is the completely decomposable decomposition [1]. For symmetric polynomials, the decomposition into power-sum polynomials is an example of an explicitly invariant decomposition. Every symmetric polynomial p can be written as p = q(p_1, ..., p_n), where p_k(x_1, ..., x_n) = x_1^k + ... + x_n^k denotes the k-th power-sum polynomial and q is a polynomial in n variables. In other words, the ring of symmetric polynomials with real coefficients corresponds to the ring
R[p_1, ..., p_n] generated by the power-sum polynomials. The same statement is true when replacing the set of power-sum polynomials by the elementary symmetric polynomials. Also, the combination of symmetry and positivity is well-studied. It is, for example, known that symmetric sum-of-squares polynomials do, in general, not decompose into a sum of symmetric squares; to fully characterize the set of symmetric sum-of-squares polynomials, one has to introduce a more general notion of symmetric sum-of-squares decomposition [11]. In this paper we do the following: (i) We define invariant decompositions of polynomials (Definition 9). We show that every invariant polynomial admits an invariant decomposition if the group action is free on the weighted simplicial complex (Theorem 15), and that every group action can be made free by increasing the number of summation indices (Proposition 8). In addition, every invariant polynomial can be written as the difference of two invariant decompositions if the group action is blending (Theorem 20). (ii) We define the invariant separable decomposition (Definition 23) and the invariant sos decomposition (Definition 31), and show that every invariant separable/sos polynomial admits an invariant separable/sos decomposition if the group action is free (Theorem 24 and Corollary 34, respectively). These decompositions combine positivity and symmetry in a very clean way. (iii) We provide inequalities and separations between the ranks of three invariant decompositions (Proposition 39 and Corollary 45, respectively). (iv) We show that the separations are not robust with respect to approximations (Theorem 49). (v) For decompositions on the circle with translational invariance, we show that it is undecidable whether the global polynomial is sos or nonnegative for all system sizes (Theorem 51). Throughout this work, an 'invariant decomposition' refers to an (Ω, G)-decomposition, and an 'invariant polynomial' to a G-invariant polynomial. Similarly, an 'invariant separable/sos decomposition' refers to a separable/sos (Ω, G)-decomposition. This paper is organized as follows. In Section 2 we define weighted simplicial complexes and group actions. In Section 3 we define and study the invariant decomposition, the invariant separable decomposition and the invariant sum-of-squares decomposition. In Section 4 we study inequalities and separations between the ranks. In Section 5 we study the approximate case. In Section 6 we show that a problem related to positive polynomials is undecidable. In Section 7 we conclude and provide an outlook.

2. Weighted simplicial complexes and group actions

Here we define weighted simplicial complexes (Section 2.1) and groups acting on them (Section 2.2), both defined in [8]. These constitute the underlying topological structure on which we will consider invariant polynomial decompositions.

2.1. Weighted simplicial complexes. We now define weighted simplicial complexes and refer to [6] for details. Examples of weighted simplicial complexes are given in Section 2.2 and in [8].

Definition 1 (Weighted simplicial complexes). (i) A weighted simplicial complex on [n] is a map Ω : P_n → ℕ_0, where P_n denotes the power set of [n], such that Ω(S_1) divides Ω(S_2) whenever S_1 ⊆ S_2. Ω is called a simplicial complex if Ω(P_n) ⊆ {0, 1}. (ii) A set S ∈ P_n is called a simplex of Ω if Ω(S) ≠ 0.
We will always assume that each singleton {i} is a simplex, and call the elements i ∈ [n] the vertices of the weighted simplicial complex. We call a maximal simplex (with respect to inclusion) a facet of Ω. Moreover, we denote the collection of all facets by F, and for each vertex i the collection of facets that contain i by F_i. By restricting Ω to F or F_i we can interpret these mappings as multisets, which we call F and F_i. F contains each facet F exactly Ω(F)-many times. Moreover, we introduce the canonical collapse map c : F → F, c : F_i → F_i, mapping all copies to the underlying facet. (iii) Two vertices i, j are neighbours if they are contained in a common facet. Two vertices are connected if there exists a sequence of neighbours i_0, ..., i_k such that i = i_0 and j = i_k. We say that the weighted simplicial complex is connected if every pair of vertices is connected. Note that a simplicial complex Ω is the characteristic function of a subset A ⊆ P_n. By definition of Ω, A is closed under passing to subsets. This is the usual definition of an (abstract) simplicial complex. Note also that a weighted simplicial complex is a special case of a multihypergraph [4], in the sense that all simplices of a facet are included, and in addition the multiplicities satisfy Definition 1 (i). Our framework could also be formulated with multihypergraphs, as the decompositions only depend on the multifacets F. Nonetheless, we find the slightly less general notion of a weighted simplicial complex more convenient for this framework. In the following we introduce two basic examples, the single and the double edge, which will serve as a running example throughout the paper.

Example 2 (The simple and double edge). (i) Consider two vertices and the weighted simplicial complex Ω = Λ_1 which maps every subset of {0, 1} to 1. This is just the simple edge, consisting of exactly one (multi-)facet {0, 1}. (ii) Adding a second facet, we obtain the double edge ∆, which is the weighted simplicial complex on P_1 that assigns the value 1 to the sets {0}, {1} and the value 2 to {0, 1}.

2.2. Group actions. We now introduce group actions on the set [n], and promote them to actions on weighted simplicial complexes. For the reader not familiar with group actions on sets, we refer to [17]. Throughout this paper, we denote the identity element of a group G by e.

Definition 3 (Group actions). (i) Let G be a group acting on the sets X and Y, respectively. A map f : X → Y is called G-equivariant if f(gx) = g f(x) holds for all x ∈ X, g ∈ G. If G acts trivially on Y (i.e. gy = y for all g ∈ G and y ∈ Y), we instead call f G-invariant. (ii) If G acts on X, for any map f : X → Y and any g ∈ G we define a new map g f : X → Y by (g f)(x) := g f(g^{-1} x). It is immediate that h(g f) = (hg) f and e f = f, so this defines an action of G on the set of all maps from X to Y. In particular, the function f ↦ g f is a bijection on this set. If f is defined only on a subset A ⊆ X, then g f acts on the translated subset gA := {gx : x ∈ A} ⊆ X. (iii) An action of G on X is called free if all its stabilizers are trivial, i.e. Stab(x) = {e} for every x ∈ X, where Stab(x) := {g ∈ G : gx = x}. (iv) We call an action of G on [n] blending if {g_0 0, ..., g_n n} = [n] for certain g_0, ..., g_n ∈ G implies the existence of g ∈ G with gi = g_i i for all i = 0, ..., n. In words, a permutation of [n] given by different group elements can also be achieved by a single group element. We now promote a group action on [n] to a group action on a weighted simplicial complex:

Definition 4 (Group action on a weighted simplicial complex).
(i) A group action of G on the weighted simplicial complex Ω consists of the following: (a) A group action of G on [n] such that the map Ω is G-invariant with respect to the canonical action of G on P n (i.e. it permutes vertices in a way that simplices become simplices of the same weight).This induces a well-defined action of G on F. (b) An action of G on the set of multifacets F such that the canonical collapse map We call the action G on the weighted simplicial complex Ω free if the action of G on F is free. Remark 5 (Group actions).(i) Since every weighted simplicial complex consists of finitely many vertices, we will assume the group G to be finite as well.We could also assume that G is a subgroup of the permutation group S n+1 (since every group action can be understood as a collection of permutations on [n]), but sometimes it is more convenient not to choose the latter representation.(ii) A group action on a weighted simplicial complex Ω permutes the vertices [n] in a way that preserves the structure of the complex.In particular, it induces an action of G on F, where all facets in the same orbit are of the same weight.Note that each g ∈ G provides a weight-preserving bijection (iii) To obtain a group action on a weighted simplicial complex with multifacets one needs to provide additional information, namely how elements g ∈ G permute the different copies of facets when mapping a facet F to gF .Obviously, any group action can be refined, but there are many ways of doing so.(iv) The notion of a blending group action (on a weighted simplicial complex) just refers to the action of G on the vertices [n].(v) The notion of a free group action on a weighted simplicial complex always concerns the action of G on F. The action of G on the vertices can be free without the action of G on Ω being free (see Example 7).On the other hand, an action of G on Ω can be free without the action of G on the underlying vertices [n] or on the facets F being free.As we will see in Proposition 8, any action of G on Ω can be refined to a free group action, after enlarging the weights of the facets.This, combined with Theorem 15, justifies our choice of weighted simplicial complexes in our framework.(vi) An action of G on a set X is free if and only if there exists a G-linear map z : X → G where G acts on itself via left-multiplication (which is obviously free).To construct z for a free action, choose for each orbit an element x and map gx to g.The reverse implication is immediate.△ Let us now discuss the group actions on the simple and double edge of Example 2. Example 6 (The simple and double edge with group actions). (i) For the simple edge Λ 1 there is only one interesting group action, namely by C 2 = S 2 , which permutes the vertices 0, 1.Although this group action is free and blending on {0, 1}, it is not free on the weighted simplicial complex, since the (only) facet remains fixed under each group element.(ii) For the double edge ∆ the group action of C 2 can be extended to the multifacets in two different ways.One extension keeps each multifacet fixed, in which case the action is not free, and the other one permutes the multifacets, i.e. flips a and b, in which case the action is free.Henceforth, when we refer to C 2 on ∆ we always refer to the free refinement.△ There are other canonical examples of weighted simplicial complexes and group actions which will play a role in the development of invariant polynomial decompositions.Let us introduce them now. Example 7 (The simplex, the line and the circle). 
(i) The simplicial complex Ω = Σ n mapping each subset of [n] to 1 is called the n-simplex. For n = 4 it can be depicted as where it contains only one facet, (ii) For n ≥ 1, the line of length n is the simplicial complex Ω = Λ n given by the following graph: The collection of facets F = F consists of n elements.The only non-trivial group action on Λ n is given by the cyclic group with two elements G = C 2 , where the generator inverts the order of the vertices, i.e. vertex i is sent to vertex n − i.This action is free if and only if n is even, and blending if and only if n ≤ 2. If n is odd, the action admits a free refinement if the weight of the middle edge is increased to 2. For n = 1 we regain the single edge.(iii) For n ≥ 3, the circle of length n is the simplicial complex Ω = Θ n corresponding to the following graph: which has n facets.A canonical action is given by the cyclic group G = C n , which is generated by translation of the vertex i → i + 1 mod (n).This action is free on Ω but not blending.△ We now state what we have already seen in Example 7 (i), (ii) and (iii) in a more general setting, namely that by increasing the multiplicity of facets of a weighted simplicial complex Ω we can make every group action free.In short, every group action has a free refinement.It is good to bear this in mind for the rest of the paper, because we will need to assume freeness in many results, but this is a "mild" assumption because of Proposition 8.This proposition is proven in [8]. Proposition 8 (Free refinement [8]).Every action of a finite group G on a connected weighted simplicial complex Ω has a free refinement, which in particular can be obtained by multiplying the weight of every facet of Ω by |G|. Invariant polynomial decompositions and ranks In this section we define invariant polynomial decompositions and their ranks.To this end we first set the stage (Section 3.1), and then define and study the invariant decomposition (Section 3.2), the invariant separable decomposition (Section 3.3), and finally the invariant sum-of-squares decomposition (Section 3.4). 3.1.Setting the stage.Throughout this section we consider polynomials in the space m i ] is the space of real polynomials in m i variables, and ⊗ denotes the algebraic tensor product.These polynomials use collections of local variables, denoted x [i] , for each local site i = 0, . . ., n.The case where all m i = 1 is already very interesting, as it describes how the multivariate polynomial ring is decomposed into a tensor product of univariate polynomial rings. In particular, R[ where x [i] is a single variable, means that every multivariate polynomial can be expressed as a sum of products of uni-variate polynomials, i.e. We define the local degree of p ∈ P, denoted deg loc (p), as the smallest positive integer d ∈ N such that ] d where R[x] d is the space of real polynomials in x of degree at most d.A polynomial with deg loc (p) ≤ d contains monomials consisting of variables in x [i] with degree at most d, for each i.Note that the local degree can be related with the (global) degree of the polynomial by A given group action G on [n] also induces a group action on the space P. The action is defined for g ∈ G and p ∈ P by (gp)(x [0] , . . ., x [n] ) := p(x [g0] , . . ., x [gn] ).(3) Note that this definition only makes sense if the local polynomial spaces R[x [i] ] and R[x [j] ] are isomorphic whenever i, j ∈ [n] are in the same orbit of G (i.e.gi = j for some g ∈ G), i.e. 
the number of local variables needs to coincide for i, j, namely m i = m j .The canonical isomorphism between elements in R[x [i] ] and R[x [j] ] is given by replacing the variables x [i] with x [j] in every polynomial and vice versa.We will frequently use this isomorphism in an implicit way, as for a polynomial p [i] ∈ R[x [i] ] we will denote its corresponding element in R[x [j] ] as p [i] (x [j] ). We say that p ∈ P is G-invariant if for each g ∈ G we have gp = p, or equivalently ) for every g ∈ G. For example, if ) for every permutation σ : [n] → [n] which means that p is invariant with respect to arbitrary permutations of variables. For two sets A, B we denote the set of all functions from A to B by B A .If the set A is finite, such functions are sometimes written as an |A|-tuple of values in B. In our case, we will consider I to be a finite index set, and sometimes write a map α ∈ F → I as a tuple α ∈ I F with entries from I and where the entries are indexed by the facets in F. If we have a function α : F → I and want to restrict its domain to F i (for some index i ∈ [n]), in the tuple notation we write which means that we delete all entries which are indexed by a facet not containing i.We will in general stick to the functional notation except for the examples, where we will switch to the tuple notation.Their connection will be made explicit in Example 13. 3.2.The invariant decomposition.We now define the basic invariant decomposition, called (Ω, G)-decomposition, simply called the invariant decomposition.Afterwards we will study the existence of decompositions without invariance (page 13), the existence of invariant decompositions with free group actions (page 14) and with blending group actions (page 16). The idea of the invariant decomposition is to consider finite sums of elementary polynomials (i.e.polynomials written as a product of local polynomials depending on one collection of variables x [i] ), where each local polynomial is associated to a vertex of Ω, and the summation indices are described as functions α | i on the facets.The following definition is illustrated in Example 11, 12 and 13. Definition 9 (Invariant decomposition). (i) An (Ω, G)-decomposition of p ∈ P consists of a finite index set I and families of polynomials where p [i] such that (a) p can be written as and denote its rank by rank Ω . Condition (i) (a) provides an arrangement of the summation indices encoded in the functions α, and condition (i) (b) ensures that the decomposition has the desired symmetry, by requiring that the coefficients of particular local polynomials in different local spaces coincide.Note again that this equality only makes sense if the collections x [i] and x [gi] have the same cardinality (i.e.m i = m gi ). Remark 10 (Admitting an (Ω, G)-decomposition implies being G-invariant). (i) If a polynomial has a (Ω, G)-decomposition then it is G-invariant: where we have used Definition 9 (i) (b) in the third equality, and the fact that α → g α is a bijection on I F and that i → gi is a bijection on [n] in the fifth equality. 
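As a small aside on G-invariance itself, the induced action of Equation (3) is easy to check in code. The following sympy sketch uses a toy setting with three single-variable sites and the cyclic group C_3; the two polynomials are arbitrary illustrative choices, not taken from the examples in this paper, and invariance under the generating shift suffices for invariance under the whole cyclic group.

```python
# A minimal sympy sketch of the action (3) for three single-variable sites
# x0, x1, x2 with C_3 acting by i -> i+1 (mod 3).  The polynomials below are
# arbitrary illustrative choices.
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')

def shift(p):
    # the generator of C_3 renames the variables of site i to those of site i+1 (mod 3)
    return p.xreplace({x0: x1, x1: x2, x2: x0})

def is_C3_invariant(p):
    # invariance under the generator implies invariance under the whole cyclic group
    return sp.expand(shift(p) - p) == 0

p = x0*x1 + x1*x2 + x2*x0      # invariant under the cyclic shift
q = x0*x1                      # not invariant
print(is_C3_invariant(p), is_C3_invariant(q))   # True False
```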
In the converse direction, the following holds: If a polynomial is G-invariant, then it has an (Ω, G)-decomposition if G acts freely on Ω.Moreover, every Ω can be refined so that G acts freely on it (Proposition 8).(ii) The existence of an (Ω, G)-decomposition might imply an even stronger symmetry than G-invariance.As we will see in Example 13 (i), the existence of a (Σ n , G)-decomposition for any transitive group action of some group G already implies S n+1 -invariance.This is closely related to the action not being free.△ Let us now revisit our running examples-the simple and double edge of Example 2-in the light of invariant decompositions. Example 11 (The simple and double edge with invariance). (i) On the simple edge Λ 1 , the elements in I F are just single values, and thus the corresponding decomposition is given by [1] α (x [1] ). Note that the order of the indiced α, β does not matter here, since there is no connection between the local polynomials at site 0 and 1.But for the non-trival C 2 action, Definition [1] ). ( 4) △ Let us now consider an invariant polynomial on the double edge which we will revisit in Example 35 in the light of sum-of-squares invariant decompositions. Example 12 (Invariant polynomial on the double edge).Consider the polynomial which is invariant with respect to the permutation of x and y.A (∆, C 2 )-decomposition of p has the form It is easy to see that a decomposition of rank 1 does not exist, showing that the (∆, C 2 )-rank is indeed 2. △ Let us now see more standard examples of (Ω, G)-decompositions based off the weighted simplicial complexes of Example 7. Example 13 (The simplex and the circle with their symmetry). (i) For n ≥ 1 consider an n-simplex Σ n , whose facets are given by F = {[n]}.Since F only contains one facet encompassing all vertices, the corresponding Σ n -decomposition is given by ). The minimal integer r among all such decompositions is the rank Σn (p)-this is usually called the tensor rank.Now assume there is a group action G on [n] which is transitive, i.e. it generates only one orbit, namely Gi = α for all i, j, α, and hence the corresponding (Σ n , G)-decomposition reads This decomposition is manifestly fully symmetric with respect to every permutation of x [i] with x [j] .The minimal such r is the rank (Σn,G) (p)-usually called the symmetric tensor rank. The minimal such r is the rank Θn (p)-this is usually called the operator Schmidt rank. Since the cyclic group C n acts freely on Θ n , we obtain the (Θ n , C n )-decomposition p = r α 0 ,...,αn=0 This decomposition is manifestly translational invariant, that is, invariant with respect to permutations x [i] → x [a+i] for a ∈ N where the addition is modulo n + 1.Note that polynomials with such a decomposition are generally not S n -invariant.The minimal such r is called the rank (Θn,Cn) (p)-usually called the translationally invariant operator Schmidt rank.△ Decompositions without invariance The first result on the existence of polynomial decompositions does not involve any invariance.It is an adaption of the result for tensor decompositions (see [8,Theorem 11]), which we will prove here for completeness. Theorem 14 (Existence of Ω-decompositions).For every connected weighted simplicial complex Ω and every p ∈ P there exists an Ω-decomposition of p, i.e. 
rank Ω (p) < ∞.Moreover, given a decomposition of the form where p [i] j ∈ R[x [i] ], there exists an Ω-decomposition of p only using the p Note that the Ω-decomposition obtained by "reusing" the polynomials of (5) may not be optimal, i.e. it may need more terms than its rank. Proof.We start with an elementary polynomial decomposition where I is a finite index set and p [i] j ∈ R[x [i] ] for all j ∈ I.For i ∈ [n] and β ∈ I F i we define Since Ω is connected, for α ∈ I F the restricted functions α | i are all constant if and only if α is constant.It follows that ) is an Ω-decomposition of p. □ Invariant decompositions with free group actions We now show that if G acts freely on Ω, then every G-invariant polynomial admits an (Ω, G)-decomposition.Recall that 'free' was defined in Definition 3 (iii).The proof is similar to that of [8,Theorem 13], but we include it here for completeness.We will illustrate the idea of the proof in Example 17. Theorem 15 (Invariant decompositions with free group actions).Let Ω be a connected weighted simplicial complex, G a group action on Ω, and p ∈ P a G-invariant polynomial.If G acts freely on Ω, then p has an (Ω, G)-decomposition, i.e. rank (Ω,G) (p) < ∞.Moreover, given a decomposition of the form (5), an (Ω, G)-decomposition of p can be obtained by using only nonnegative multiples of the p [i] j as local polynomials at each site i.As in Theorem 14, the (Ω, G)-decomposition obtained by "reusing" the polynomials of (5) will generally not be optimal. Note that every weighted simplicial complex Ω can be refined so that G acts freely (by Proposition 8), and refining will translate to adding more summation indices in the polynomial decomposition, as in Example 11 (ii). The idea of the proof is simple.Starting from the decomposition in (5), we essentially build where gp is defined in (3), and let g act on each of the local terms in the decomposition.The latter can then be transformed into an (Ω, G)-decomposition of p. Proof.Since G acts freely, by Remark 5 (iv), there exists a G-linear map z : F → G, where G acts on itself by left-multiplication.In the following, we fix one such mapping.For the polynomial p we first obtain by Theorem 14 an Ω-decomposition and denote the local elements by where q [i] together with the projection maps π 1 : Î → I and π 2 : Î → G.For each i ∈ [n] and β ∈ Î F i we now define the following local polynomials: ) is well-defined since g is uniquely determined by the relation But this implies that g 1 = g 2 .In addition, the defined local polynomials fulfil Definition 9 (i) (b) since for g, h ∈ G we obtain It only remains to show that the local polynomials form an (Ω, G)-decomposition of p.To this end we compute Using that Ω is connected and z is G-linear, for each z fulfilling the conditions from the outer sum on the right, we obtain g i = g j =: g for all i, j ∈ [n].So the corresponding inner sum becomes using G-invariance of p. 
Hence the total sum equals a positive multiple of p, where the factor is the number of all z which fulfill the above conditions.In fact, this number is just |G|, since the g −1 z for g ∈ G are precisely the different possible choices for z.So dividing by |G| and absorbing its positive (n + 1)-th root into the local polynomials yields an (Ω, G)-decomposition of p.The last statement is immediate by construction.□ The following are some immediate and useful relations between ranks: Corollary 16 (Relations among ranks).Let Ω be connected and G a free group action on Ω, and Σ n the simplex (defined in Example 7 (i)).Then for every G-invariant p ∈ P we have In words, the first inequality says that one can impose invariance by increasing the rank by at most |G|, i.e. imposing invariance "costs" at most |G| (as long as G is free, else one cannot impose invariance within our framework).The second inequality says that the tensor rank is always the most expensive rank, i.e. having one joint index is the most costly decomposition. Proof.The first inequality is immediate from the construction in the proof of Theorem 15, and the second inequality follows from the construction in the proof of Theorem 14. □ Let us now illustrate the proof of Theorem 15 for the double edge. Example 17 (Invariant decomposition on the double edge).The cyclic group C 2 provides a free group action on the double edge ∆, so every C 2 -invariant polynomial admits a (∆, C 2 )decomposition, given by Equation ( 4).Let us now construct it. For the group action of C 2 = {e, c} on F = {a, b} (with ca = b) there exists a G-linear map z : F → G, which can be chosen as z : a → e b → c. Then there exist r 1 , r 2 ∈ N and v 1 , . . ., v r 1 , v r 1 +1 , . . ., v r 1 +r 2 ∈ R d such that If n is even, there exists a decomposition The last statement is not given in [5], but it is obvious, since the minus sign can be absorbed into the odd number of terms n + 1 (because (−1) n+1 = −1). The minus sign in Equation ( 7) is necessary, for consider the simple case of real matrices, namely when the tensor T lives in the space R d ⊗ R d ∼ = M d (R).Without a minus sign, Equation ( 7) would read (where we have used that v ⊗ w = vw t ), implying that every symmetric matrix is positive semidefinite.This is false, so the minus sign is crucial.The importance of the minus sign will be illustrated in Example 21. The second lemma states the subadditivity and submultiplicativity of the (Ω, G)-rank, and is proven in [8, Proposition 16]. We are now ready for the existence of invariant decompositions with blending group actions. Theorem 20 (Invariant decompositions with blending group actions).Let Ω be a connected weighted simplicial complex, and G a blending group action on Ω.For any G-invariant p ∈ P there exist two polynomials q 1 , q 2 ∈ P with p = q 1 −q 2 , where both have an (Ω, G)-decomposition. If n is even we can set q 2 = 0. Proof.We start with a non-invariant decomposition of p, as given in Equation ( 5), where I is a finite index set.Now we choose real numbers d and ℓ ∈ {1, . . ., r 1 + r 2 }, such that the following holds: else This is possible since the tensor on the right hand side is real and symmetric, hence the existence follows by Lemma 18.For i ∈ [n], ℓ ∈ {1, . . ., r 1 + r 2 } and β ∈ I F i we define ℓ p [gi] j (x [i] ) : β takes the constant value j ∈ I 0 : else . 
We now define q 1 as where we have used that Ω is connected for the third equality, and thus α | i constant for all i if and only if α is constant.Note that q 1 has an (Ω, G)-decomposition by Lemma 19, since all p ℓ do.We define q 2 similarly as Because of the definition of d ℓ , and the fact that the action of G is blending, the difference q 1 − q 2 simplifies to where ∼ stands for positive multiple of.Note that we have used that p is G-invariant in the last equality.Dividing by |G| and the positive scaling factor proves the statement, since the scaling can be absorbed in the local polynomials. The last statement of the Theorem follows from the statement in Lemma 18 for even n.□ Example 21 (The minus sign in the single and double edge).The minus sign in the decomposition of Theorem 20 is necessary (as long as we do not switch to complex coefficients).For example, the polynomial p = x 2 + y 2 is C 2 -invariant, and since C 2 is blending on the single edge Λ 1 , there exists an (Λ 1 , C 2 )-decomposition for p with this additional minus sign (by Theorem 20): But for degree reasons there cannot exist an actual (Λ 1 , C 2 )-decomposition for p, i.e. an invariant decomposition without the additional minus sign. 3.3. The invariant separable decomposition.In this section we assume that every local space of polynomials is equipped with a convex cone C [i] ⊆ R[x [i] ], i.e. a set which fulfills αp + βq ∈ C for all p, q ∈ C and α, β ≥ 0. Important examples of such cones are the cone of sum-of-squares (sos) polynomials the cone of nonnegative polynomials and the cone of polynomials with nonnegative coefficients For a given set of local cones C [0] , . . ., C [n] we define the global separable cone This is the smallest global convex cone generated by the elementary tensors formed from the local cones.For a given group action of G on Ω, we further assume that C [i] = C [gi] for all g ∈ G (again we suppress the canonical isomorphism between the local polynomial spaces in the notation). We now define and study the invariant separable decomposition of polynomials, i.e. decompositions which are inherently G-invariant, and where the containment in the separable cone is explicit-i.e. a positive combination of elementary polynomials where each factor is in the local cone. (i) A separable (Ω, G)-decomposition of p is an (Ω, G)-decomposition with the additional restriction that for all i ∈ [n] and β ∈ I F i .(ii) The minimal cardinality of I among all separable (Ω, G)-decomposition of p is called the separable (Ω, G)-rank of p, denoted sep-rank (Ω,G) (p).If p does not admit an (Ω, G)decomposition, we set sep-rank (Ω,G) (p) = ∞.(iii) If G is the trivial group action, we call the separable (Ω, G)-decomposition just separable Ω-decomposition, and its minimal number terms the separable rank, denoted sep-rank Ω . We now show the existence of invariant separable decompositions with free group actions.This follows from Theorem 15, since it can be constructed via positive multiples of the initial decomposition. Theorem 24 (Invariant separable decompositions with free group actions).Let Ω be a connected weighted simplicial complex with a free action from the group G. Every G-invariant p ∈ C sep admits a separable (Ω, G)-decomposition. Proof.Let p be decomposed as in Equation ( 5) with p [i] j ∈ C [i] , which is a separable decomposition of p. 
Applying the construction of the proof of Theorem 15 we obtain a separable (Ω, G)-decomposition, since all local polynomials p Since the local cones coincide on the orbits of G, this guarantees that p [i] β ∈ C [i] .□ Example 25 (Invariant separable decomposition on the double edge).The (∆, C 2 )-decomposition of p = x 2 + y 2 given in Example 21 is in fact an invariant separable decomposition with respect to the local sos cones, proving that sep-rank We can now easily promote the results of Corollary 16 to the (invariant) separable ranks.The proof is analogous. Corollary 26 (Relation between separable ranks). Let Ω be connected and G a free group action on Ω.Then for every G-invariant p ∈ P we have An analogue of Theorem 24 for blending group actions is not true!One reason is that, if the action is blending, we cannot construct a decomposition using the local polynomials from the initial tensor decomposition.This is visible already in the simplest case, namely for (Λ 1 , C 2 )-decompositions, as illustrated in Example 21.Another reason is that Theorem 20 (with blending group actions) uses a difference of two (Ω, G)-decompositions, and a difference of separable elements is in general not separable. Finally we show that the global cone of sos polynomials C sos is strictly larger than the cone of separable polynomials over local sos polynomials sos .In other words, there exist polynomials which admit a sos decomposition over all variables, but cannot be written as tensor decomposition where every term is a sos polynomial.This is even true for polynomials in two variables x and y, as the following example shows.The example relies on the Gram map, which will be the cornerstone of invariant sos decompositions (Section 3.4). Example 27 (Sos polynomials which are not separable).We consider the following Gram map G between real-valued matrices M ∈ M 2 ⊗ M 2 and polynomials p ∈ R[x, y]: where m 1 (x) := (1, x) t is the monomial basis in x of degree at most 1. It where all M [i] j are positive semidefinite and G(M ) = p.For example, consider the matrix where b = (e 1 ⊗ e 1 + e 2 ⊗ e 2 ) ∈ R 2 ⊗ R 2 is known in the quantum information community as an (unnormalized) Bell state.Note that M is positive semidefinite but not separable, which can easily be seen with the celebrated positive partial transposition criterion [19].Furthermore, M is the only positive semidefinite matrix representing the polynomial is not positive semidefinite for any α ∈ R \ {0}, and G −1 ({p}) = {M α : α ∈ R}.This implies that p = (1 + xy) 2 is sos but not separable with respect to the local sos cones.More generally, in order to show that a polynomial is sos but not separable, one needs to show that every positive semidefinite matrix M with G(M ) = p is not separable.This is generally a hard problem.△ 3.4. The invariant sum-of-squares decomposition.In this section we introduce a sumof-squares (sos) decomposition in the (Ω, G)-framework.To start off, notice that not every Ginvariant sos polynomial p can be decomposed into G-invariant polynomials q k via p = N k=1 q 2 k , as the following example shows. . This gives rise to a sos decomposition of p via G. Furthermore, since gM = M for all g ∈ G, we obtain where the second equality holds by the G-invariance of M , and the last equality by the commutativity of polynomial multiplication. (i) ⇒ (ii).Assume that p , where M ′ need not be G-invariant.By the G-invariance of p, we additionally have that G(gM ′ ) = p for every g ∈ G. 
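The averaging step used next in this direction of the proof can be checked numerically. In the following numpy sketch, the index set, the group (acting by permutation matrices) and the starting positive semidefinite matrix are arbitrary illustrative choices; the two printed checks mirror exactly the properties needed below, namely that the group average remains positive semidefinite and is G-invariant.

```python
# A small numerical sketch (with an arbitrary example matrix, not one from the
# text) of the averaging step in the proof of Lemma 29: averaging a positive
# semidefinite matrix over a finite group acting by permutation matrices
# preserves positive semidefiniteness and produces an invariant matrix.
import numpy as np

def perm_matrix(perm):
    n = len(perm)
    P = np.zeros((n, n))
    P[perm, np.arange(n)] = 1.0      # column j is sent to row perm[j], i.e. P e_j = e_perm[j]
    return P

# C_2 acting on a 4-element index set by swapping indices 0<->1 and 2<->3
group = [perm_matrix([0, 1, 2, 3]), perm_matrix([1, 0, 3, 2])]

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M_prime = A @ A.T                     # a generic positive semidefinite matrix

M = sum(P @ M_prime @ P.T for P in group) / len(group)   # the group average

print(np.all(np.linalg.eigvalsh(M) >= -1e-10))            # still positive semidefinite
print(all(np.allclose(P @ M @ P.T, M) for P in group))    # G-invariant
```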
Defining M as the average we obtain a G-invariant and positive semidefinite matrix M .By linearity of the Gram map, we have that G(M ) = p.□ Remark 30 (Gram matrix of invariant separable polynomials). A similar version of Lemma 29 relates invariant separable polynomials p sos with invariant separable matrices M .The only difference is that the vectors v k should be elementary tensors factors.△ In order to state and prove the main result of this section (Theorem 32), it only remains to define invariant sos-this is the non-stringent version advocated above. Definition 31 (Invariant sos decompositions).Let G act on the weighted simplicial complex Ω, and let q = (q k ) k∈S be a family of polynomials. (i) An (Ω, G)-decomposition of the family q is a decomposition for every k ∈ S, where ) The smallest cardinality of I among all (Ω, G)-decompositions is called the (Ω, G)-rank of q, denoted rank (Ω,G) (q). (ii) An sos (Ω, G)-decomposition of p ∈ P is given by a sos decomposition into a family q (that is, p = k∈S q 2 k ), together with an (Ω, G)-decomposition of q.The minimal (Ω, G)-rank among all such sos decompositions is called the sos (Ω, G)-rank of p, denoted sos-rank (Ω,G) (p).If G is the trivial group action, we call the sos (Ω, G)-decomposition just sos Ω-decomposition and denote its rank by sos-rank Ω . We are now ready to prove the main result regarding the existence of invariant sos polynomials: Every G-invariant sos polynomial p has a G-invariant family q (Theorem 32 (i)), and q has an (Ω, G)-decomposition if G is a free group action on Ω (Theorem 32 (ii)).The idea of the proof of Theorem 32 (i) is to define q as the square root of p, and show that this square root is also G-invariant.Some ideas of the proof are illustrated in Example 35. (i) Let p be a G-invariant sos polynomial.Then there exists a G-invariant family of polynomials q = (q k ) k∈S such that p = k∈S q 2 k .Moreover, every element q k admits a decomposition in which the local polynomials at site i only depend on k i , namely kn,j (x [n] ). (ii) Let Ω be a connected weighted simplicial complex with a free group action from G. Then q has an (Ω, G)-decomposition, i.e. rank (Ω,G) (q) < ∞. Proof.(i) We denote the monomial x = (x 1 , . . ., x m ) with exponent α = (α 1 , . . ., α m ) by Without loss of generality we can assume that deg loc (p) ≤ 2d.Define Note that S can be identified with the set of monomials in P of local degree at most d via the correspondence Note also that the permutations of variables x [i] → x [gi] coincide with the group action of G on S, since Since p is G-invariant and sos, by Lemma 29 there exists a positive semidefinite and Ginvariant matrix M such that G(M ) = p.Now let B be the (unique) positive semidefinite square root of M , i.e.M = B 2 .Since M is a matrix, B admits a polynomial expression in M and hence B is also G-invariant.Define the polynomials q k as where we have used the fact that B k,k ′ = B gk,gk ′ for every g ∈ G (which is just the G-invariance of B), together with Equation ( 8) and bijectivity of the map k ′ → gk ′ .In addition, Using the definition of q k leads to the last statement of (i). (ii) The proof is similar to that of Theorem 15.Start with decompositions kn,j (x [n] ) for every k = (k 0 , . . ., k n ) ∈ S. 
From the construction of Theorem 14 it follows that every polynomial q k has a decomposition of the form where F is the set of facets of Ω.We now construct a decomposition for every q k which additionally satisfies the symmetry conditions of Definition 31 (i).Since G is free, by Remark 5 (vi), there exists a G-linear map z : F → G.We consider the new index set Î := I × G, together with the projection maps π 1 : Î → I and π 2 : Î → G.For each i ∈ [n] and β ∈ Î F i we define the following local polynomials Similarly to the discussion in the proof of Theorem 15 we see that kn, α|n (x [n] ) holds for every k ∈ S.But this implies the existence of an (Ω, G)-decomposition of q. □ Remark 33 (More general version of Theorem 32 (i)).(i) In [14, Theorem 5.3], the authors prove the existence of so-called semi-symmetric sos decompositions for general representations of finite groups, by using Schur's lemma on the Gram matrix.Theorem 32 (i) is weaker than that, as it only considers group actions that permute the tensor product spaces, but gives an elementary proof.(ii) There are also other characterizations of invariant sum-of-squares decompositions, like [11,Corollary 2.7].Our decompositions are really sums-of-square decompositions. To highlight the difference to our framework, let us consider a decomposition of the polynomial p = x 2 + y 2 + (x − y) 2 .According to Corollary 2.7 of Debus, Riener, p decomposes into where R G is the Reynolds operator applied to every entry of the matrix separately, i.e.R C 2 (x 2 ) = 1 2 (x 2 + y 2 ) and R C 2 (xy) = xy.In contrast, p factorizes according to Theorem 32 in our paper as From Theorem 32 it follows that: Corollary 34 (Invariant sos polynomials with free group action).Let Ω be a connected weighted simplicial complex with a free group action from G. Then every p ∈ P which is sos and G-invariant has an sos (Ω, G)-decomposition, i.e. sos-rank (Ω,G) (p) < ∞. Example 35 (Illustrating invariant sos decompositions).Consider again the polynomial from Example 12, which is sos and invariant with respect to the permutation of x and y.We have already seen that rank (∆,C 2 ) (p) = 2.By a similar argument as in Example 27, it can be shown that p is not separable with respect to the local sos cones. To obtain a sos decomposition we follow the proof of Theorem 32.We obtain S = {0, 1} × {0, 1} with G = C 2 permuting the entries of the tuples, and obtain a C 2 -invariant sos decomposition of p via the following family of polynomials: On the double edge ∆ we obtain an (∆, C 2 )-decomposition of the family via the following family of polynomials where the matrix notation denotes that the rows are indexed by α = 1, 2, 3 and the columns by β = 1, 2, 3.This shows that sos-rank (∆,C 2 ) (p) ≤ sos-rank (∆,C 2 ) (q) ≤ 3. On the single edge Σ 1 , a decomposition of q requires vectors a, b, c, d ∈ R d of length 4 √ 2, with a, b, c pairwise orthogonal, d orthogonal to b and c, and ⟨a, d⟩ = 1.This is provided by where ( ) α denotes a vector indexed by α.Since such vectors can only be found in dimension d ≥ 4, we obtain sos-rank We can also write p as a sum of symmetric squares: We now reset the variables S 0 = S 1 = {1, 2, 3}, S = S 1 × S 2 , as well as xy, and all other q k = 0.This gives rise to the C 2 -invariant family q = (q k ) k∈S that provides an sos decomposition of p with sos-rank (∆,C 2 ) (q) ≤ 3. 
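The square-root construction from the proof of Theorem 32 (i) can also be carried out explicitly in small cases. The following sympy sketch does so for the polynomial p = x^2 + y^2 + (x - y)^2 of Remark 33, with C_2 swapping x and y; the Gram matrix is written down by hand for this small example, and the final checks confirm that the resulting family sums to p and is permuted by the swap.

```python
# A sketch of the square-root construction from the proof of Theorem 32 (i),
# carried out for p = x^2 + y^2 + (x - y)^2 (Remark 33), where C_2 swaps x and y.
# The Gram matrix M is written down by hand for this small example.
import sympy as sp

x, y = sp.symbols('x y')
p = sp.expand(x**2 + y**2 + (x - y)**2)       # = 2x^2 - 2xy + 2y^2, C_2-invariant and sos

m = sp.Matrix([x, y])                          # monomials of local degree 1
M = sp.Matrix([[2, -1], [-1, 2]])              # G-invariant psd Gram matrix: p = m^T M m

# psd square root B of M via the eigendecomposition; B inherits the C_2-invariance of M
Pmat, D = M.diagonalize()
B = Pmat * D.applyfunc(sp.sqrt) * Pmat.inv()
assert (B * B - M).applyfunc(sp.simplify) == sp.zeros(2, 2)

q = B * m                                      # the family q_k = sum_k' B_{k,k'} m_{k'}
assert sp.simplify(sum(qi**2 for qi in q) - p) == 0       # the sum of squares reproduces p

# swapping x and y exchanges the two members of the family, i.e. q is invariant as a family
swapped = [qi.xreplace({x: y, y: x}) for qi in q]
assert sp.simplify(swapped[0] - q[1]) == 0 and sp.simplify(swapped[1] - q[0]) == 0
```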
But for the single edge, there does not exist a decomposition for the family q.This is because already q (2,2) = x + y does not admit an (Λ 1 , C 2 )-decomposition (without a minus sign).So Inequalities and separations between the ranks In this section, we study rank inequalities (Section 4.1), provide an upper bound for the separable rank (Section 4.2), and show separations between ranks (Section 4.3). 4.1. Inequalities between ranks.In this section, we show three relations between the introduced ranks (Proposition 39), which are similar to the statements established for tensor decompositions in [8,Proposition 29].For the inequality between sos and separable decompositions we will need to assume that (Ω, G) is factorizable: Definition 36 (Factorizable).Let Ω be a weighted simplicial complex with a group action from G. We say that (Ω, G) is factorizable if for each finite index set I the following system of equations admits a solution with all where Note that Equation (9) can be seen as a system of linear equations by taking the logarithm on the left and the right hand side. All examples of group actions on a weighted simplicial complex Ω considered in this paper are factorizable, as the following example shows. (i) If K α = 1 for every α ∈ I F , then C [i] β = 1 solves Equation ( 9).This in particular shows that (Ω, G) is factorizable whenever the action of G on the vertices [n] is free.In addition, this also implies that (Σ n , S n+1 ) is factorizable. (iii). Let p sos for β ∈ I F i and i ∈ [n] be local polynomials from a separable (Ω, G)decomposition of p.So there exist sos decompositions [i] ] (and we can clearly use the same sum length N for all i, β).We can in addition assume without loss of generality that ) holds for all i, β, k and g.Indeed, just consider the action of G on {i} × I F i given by g•(i, β) := (gi, g β), and fix for every orbit precisely one representative (i 1 , β 1 ), . . ., (i M , β M ).Then choose one sos decomposition for each p β ℓ and use the same along its orbit.This works since we have p ) for all i, β by assumption.Now since (Ω, G) is factorizable, we can choose some positive and G-invariant solution of Equation ( 9).Using the above representatives (i ℓ , β ℓ ) again, we now define : else where ℓ ∈ {1, . . ., M }, k ∈ {1, . . ., N } and β ∈ I F i .By definition, we have and hence q ((ℓ 0 ,k 0 ),...,(ℓn,kn)) := where S i = {1, . . ., M } × {1, . . ., N }.This family is also an sos decomposition of p, since ∀i: Here we have used Equation ( 9), as well as G-invariance of the C [i] β and the τ k,β .Finally, (iv) follows from (iii) and Corollary 26: 2. An upper bound for the separable rank.In this short section we provide an upper bound for the separable (Ω, G)-rank with respect to the number of local variables m i and the polynomial's local degree.For simplicity, we again assume that all local polynomial spaces use the same number of variables, m := m i = m j for i, j ∈ [n].For p ∈ P recall that the local degree of p, denoted deg loc (p), is the smallest integer d ∈ N such that where R[x] d is the space of polynomials in variables x of degree at most d. Proposition 40 (Upper bound for separable rank).Let p ∈ P be separable and G-invariant, and let Ω be a connected weighted simplicial complex with a free group action from G. Then for any separable cone. 
Proof.The result now follows from Corollary 26.□ 4.3.Separations.Here we will show separations between the ranks, which we will define shortly.Throughout this section we will consider separable decompositions only with respect to the local sos cones.We know from Proposition 39 that the separable rank upper bounds both the rank and sos-rank.Here we will show that a reverse inequality is impossible: in particular, there are no functions f, g : for all m ∈ N and polynomials p ∈ R[x [0] , x [1] ] with x [i] := (x m ).This is called a separation between sos-rank and sep-rank, or rank and sos-rank, respectively.We prove the separations by a reduction to matrix factorizations of entrywise nonnegative matrices, which themselves exhibit separations [12,15]. For this reason, we focus on the subspace of (n + 1)-quadratic forms in P and relate it with tensors.For T ∈ R m ⊗ • • • ⊗ R m we define the polynomial p T := m j 0 ,...,jn=1 [0] , . . ., x [n] ]. (10) There is a one-to-one correspondence between the tensor T and the polynomial p T .In addition, entrywise nonnegativity of T fully characterises the nonnegativity and the sos property of p T : Lemma 41 (Positivity correspondence between tensors and polynomials).The map (where p T is given in Equation ( 10)) is linear and injective.In addition, the following are equivalent: (i) T is entrywise nonnegative.(ii) p T is a sum of squares.(iii) p T is globally nonnegative (as a polynomial function). Proof.Linearity and injectivity are immediate (each entry of T clearly gives rise to a different monomial). The implications (i) ⇒ (ii) ⇒ (iii) are clear, since a nonnegative tensor T generates a sum of squares, since every sum of squares is globally nonnegative.For (iii) ⇒ (i) assume that T is not nonnegative, so there exist j 0 , . . ., j n such that T j 0 ,...,jn < 0. Then p(e j 0 , . . ., e jn ) = T j 0 ,...,jn < 0, which shows that p is not nonnegative.□ In order to "borrow" the separations of tensor decompositions to derive separations of polynomial decompositions, we now define decompositions of tensors, which were introduced in [8]. (i) An (Ω, G)-decomposition of T is given by families where T [i] α |n and T [gi] for all i ∈ [n] and β ∈ I F i .The minimal cardinality of I among all (Ω, G)-decompositions is called the (Ω, G)-rank of T , denoted rank (Ω,G) (T ).(ii) A nonnegative (Ω, G)-decomposition of T is an (Ω, G)-decomposition of T where all local vectors T [i] β have nonnegative entries.The corresponding rank is called the nonnegative (Ω, G)-rank of T , denoted nn-rank (Ω,G) (T ).(iii) A positive semidefinite (Ω, G)-decomposition of T consists of positive semidefinite matrices (indexed by β, β ′ ∈ I F i ) for i ∈ [n] and j ∈ {1, . . ., m} such that for all i, g, j, β, β ′ , and |n for all j 0 , . . ., j n .The smallest cardinality of I among all positive semidefinite (Ω, G)decompositions is called the positive semidefinite (Ω, G)-rank of T , denoted psd-rank (Ω,G) (T ). Note that in Ref. [8] all (Ω, G)-decompositions are defined over complex numbers, whereas here we use real decompositions.This is however irrelevant for the purposes of Proposition 43. Every notion of invariant decomposition of a tensor T can be associated to a notion of invariant decompositions of the corresponding polynomial p T , as we will show in the following proposition. Proposition 43 (Rank correspondence between tensors and polynomials).Let T ∈ R m ⊗ • • • ⊗ R m and the polynomial p T be given by Equation (10). 
Proof.(i).Let the families T [i] β β∈I F i provide an (Ω, G)-decomposition of T as in Definition 42 (i).Now consider the families where for a vector V ∈ R m the Ψ notation indicates Ψ V (x) := m j=1 V j x 2 j .It is immediate to see that these families provide an (Ω, G)-decomposition of p T , using the same index set I. Conversely, observe that every (Ω, G)-decomposition of p T consists without loss of generality of local polynomials of the form β ∈ R m .All other possible monomials will have to cancel out in the total product and sum, and can therefore be omitted.Thus the T [i] β give rise to an (Ω, G)-decomposition of T , again with the same index set I. Statement (ii) is proven exactly as (i), and using the fact that the local polynomials of an sos (Ω, G)-decomposition of p T must all be of degree 2, and thus have nonnegative coefficients at all the x For (iii) we start with an sos (Ω, G)-decomposition of p T , where every local polynomial q [i] k,β can (for degree reasons) be assumed to be of the form Now the matrices give rise to a positive semidefinite (Ω, G)-decomposition of T of the same rank as the initial decomposition.This can easily be seen by computing the coefficient of p T at each monomial (x jn ) 2 , and checking that it arises from the sos (Ω, G)-decomposition.For the reverse inequality, we assume that G acts freely on [n].We start with a positive semidefinite (Ω, G)-decomposition of T , i.e. . Since G acts freely on [n], we can just choose certain B [i] j and define the B [gi] j along the orbit by that formula.Now defining q [i] (j,k),β := B which is stronger than the symmetry of E [i] j given in a positive semidefinite (Ω, G)-decomposition.△ We now show that there is a separation between the ranks already for decompositions on the single edge.(Note that in the following corollary p m is a polynomial on the single edge). Corollary 45 (Rank separations on the single edge). Let 1 , . . ., x m ]. (i) There exists a sequence of polynomials (p m ) m∈N such that (ii) There exists a sequence of polynomials (p m ) m∈N such that rank Λ 1 (p m ) = 3 and fulfils (see [12,Example 5.17 (ii) is similar to (i), this time using the slack matrix of an m-gon (see [12,Example 5.14]).□ These statements imply that there cannot exist functions f, g : holds for all m ∈ N and all polynomials p ∈ R[x [0] , x [1] ] with x [i] := (x m ).This also holds true for polynomials of bounded degree, since deg(p m ) = 4 in the above construction. But this also immediately leads to the question of whether there are separations between the ranks of polynomials with a bounded number of variables and no bound on the degree.In this setting there does not exist a one-to-one correspondence between polynomials and Gram matrices (as that of Example 27).We believe that separations will again appear in the simplest setting and leave this question as a conjecture. Conjecture 46.There exist no functions f, g, h : N → N such that for all p ∈ R[x, y] (in particular, independently of the degree of p) where p is of course separable in (i) and (ii), and a sum of squares in (iii).The separable rank is again meant with respect to the local sos-cones. 
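Before turning to the approximate case, the correspondence of Equation (10) can be made concrete in code for the single edge (n = 1). In the following sympy sketch the matrix T is an arbitrary entrywise nonnegative example, and the two checks mirror the proof of Lemma 41: evaluating p_T at standard basis vectors recovers the entries of T, and nonnegative entries yield an explicit sum-of-squares certificate.

```python
# A sketch of the tensor-to-polynomial correspondence of Equation (10) for n = 1:
# an m x m matrix T is mapped to the biquadratic polynomial p_T.  The matrix T
# below is an arbitrary entrywise nonnegative example.
import sympy as sp

m = 3
T = sp.Matrix([[1, 0, 2],
               [0, 3, 1],
               [4, 0, 0]])

x = sp.symbols(f'x0:{m}')
y = sp.symbols(f'y0:{m}')

p_T = sp.expand(sum(T[j, k] * x[j]**2 * y[k]**2
                    for j in range(m) for k in range(m)))

# evaluating at the standard basis vectors e_j, e_k recovers T_{jk} (cf. the proof of Lemma 41)
def eval_at_basis(j, k):
    vals = {x[i]: int(i == j) for i in range(m)}
    vals.update({y[i]: int(i == k) for i in range(m)})
    return p_T.subs(vals)

assert all(eval_at_basis(j, k) == T[j, k] for j in range(m) for k in range(m))

# for entrywise nonnegative T, p_T is manifestly a sum of squares
sos = sum((sp.sqrt(T[j, k]) * x[j] * y[k])**2
          for j in range(m) for k in range(m))
assert sp.simplify(p_T - sos) == 0
```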
Approximate polynomial decompositions: Disappearance of separations In this section we study (Ω, G)-ranks of homogeneous polynomials in the approximate case.To this end, we will first show that approximations of polynomials can be related to approximations of matrices (Lemma 47), and will leverage this result together with those of [9] to obtain approximations for invariant separable polynomials (Theorem 49). To this end, we start by considering homogenized polynomials from P d .More specifically, restricting to deg loc (p) = d, we study approximations in the space where R[x] h d is the space of homogeneous polynomials of degree d, and each m ) is a vector of m + 1 variables (note that we have introduced additional variables x [i] 0 in contrast to P d ).Each homogeneous polynomial q ∈ P h d corresponds to some p ∈ P d by setting the variables x [0] 0 , . . ., x [n] 0 to 1. On the other hand, every p ∈ P d can be multi-homogenized by substituting for every local monomial in m d (x [i] ).For the rest of the section, we denote the basis of homogenized monomials by ) where each m h d (x [i] ) is the vector of all monomials x [i] α with |α| = d.We will also consider the Gram map where the product runs over the set [n], and where S m ⊆ R m+1 is the unit sphere with respect to the Euclidean norm.We will consider approximations of the polynomial p with respect to the maximal value attained among all a := (a [0] , . . ., a [n] ) ∈ S.More specifically, we define the infinity norm of p as ∥p∥ ∞ := sup a∈S |p(a [0] , . . ., a [n] )|. Lemma 47 (Norm of polynomial and of Gram matrix).Let a ∈ S. where σ max (M ) denotes the maximal singular value of M . Proof.For the first statement, it suffices to show ∥m h d (a [i] )∥ 2 ≤ 1 for each i ∈ [n], as the statement then follows from the multiplicativity of the 2-norm with respect to elementary tensors.We show it by induction over d.For d = 1 we have m d (a [i] ) = a [i] , hence there is nothing to show.For d ≥ 1 we have where we have used the induction hypothesis in the first inequality. For the second statement, we have where M [i] β ∈ M + d is a real positive semidefinite matrix for every i ∈ [n] and β ∈ I F i , and for every i ∈ [n] and g ∈ G.For a more detailed study of this decomposition we refer to [8]. To show the approximation result, we will exploit the following result from [9, Proposition 24] about approximate (Ω, G)-decompositions about normalized separable matrices. Proposition 48 (Approximate invariant decompositions [9]).Let Ω be a weighted simplicial complex with a free group action from G, and fix ε > 0. Let M ∈ M ⊗n+1 D be G-invariant and separable with tr(M ) ≤ 1.Then there exists a separable To guarantee that a given polynomial fulfils the normalization in Proposition 48 we introduce the following norm for p ∈ P Note that, by Remark 30, µ(p) is finite for all separable and G-invariant polynomials.Moreover, µ is homogeneous of degree 1, i.e. for all γ ≥ 0 we have µ(γp) = γµ(p), and since SEP n,D is convex we have We can finally present the main result of this section.Recall that the separable decomposition and rank refer to the local sos cones. Theorem 49 (Approximate separable invariant decomposition).Let Ω be a weighted simplicial complex with a free group action from G, and fix ε > 0. 
Further, let p ∈ P h d be separable and G-invariant.Then there exists q ∈ P h d such that ∥p − q∥ ∞ < ε and Proof.Since p is G-invariant and separable, there exists a separable and G-invariant matrix M such that p = G(M ) by Remark 30.Choose M so that tr(M ) is minimal among all representations.Then 1 tr(M ) M ∈ SEP n,D and µ(p) = tr(M ). By Proposition 48 there exists Now define q = µ(p)•G(N ).By Lemma 47 we have that ∥p−q∥ ∞ < ε, and since the Gram map applied to an (Ω, G)-decomposition of matrices leads to an (Ω, G)-decomposition of polynomials, we obtain that sep-rank (Ω,G) (q) ≤ sep-rank (Ω,G) (N ), which proves the statement.□ Theorem 49 provides an upper bound of sep-rank (Ω,G) which is (up to µ(p)) dimensionindependent.This implies that the separations between rank (Ω,G) , sos-rank (Ω,G) and sep-rank (Ω,G) disappear in the approximate case if the value of µ(p) is bounded.In general, however, µ(p) scales with deg loc (p) and the number of variables m. Similar approximation procedures can be applied to sos polynomials together with sos (Ω, G)decompositions, or arbitrary polynomials together with unconstrained (Ω, G)-decompositions.This can be accomplished by exploiting approximation results of (Ω, G)-decompositions for positive semidefinite matrices and Hermitian matrices [9].Together with the norm correspondence from Lemma 47, this would lead to approximations for all types of polynomial (Ω, G)-decompositions, that we decided not to work out in full generality here. An undecidable problem regarding unconstrained decompositions In Section 4 we have seen that the invariant sos decomposition and the invariant separable decompositions, which are inherently positive, are generally much more costly than the decomposition without any positivity constraints on the local elements.Here we will show that the invariant separable decomposition has in fact no local and computable certificate of positivity.We will reach this conclusion by proving that Problem 50 is undecidable. Given a collection of D 2 polynomials in Z[x], denoted (p α,β ) D α,β=1 , define p α 0 ,α 1 (x [0] ) • p α 1 ,α 2 (x [1] ) • • • p αn,α 0 (x [n] ) ∈ R[x [0] , . . ., x [n] ]. (11) Note that the summation indices are arranged in a circle Θ n , and that the local polynomials are independent of site, so that p n is invariant under the cyclic group C n .The previous expression is thus a (Θ n , C n )-decomposition of p n .So there does not exist an algorithm that can decide in finite time whether p n is sos or nonnegative for all n, given the local polynomials as input.(For an introduction to undecidability we refer for example to [2].)We will prove Theorem 51 by a reduction from the following undecidable problem: Theorem 52 (Undecidability of positivity for all system sizes [7]).Let T α,β ∈ Z m for α, β ∈ {1, . . ., D} be a collection of vectors.For n ≥ 0 define T n := D α 0 ,...,αn=1 For m, D ≥ 7, the following problem is undecidable: Is T n nonnegative for all n ∈ N? Proof of Theorem 51.Let T α,β ∈ Z m be a collection of vectors for α, β ∈ {1, . . ., D}.We apply the construction from Section 4.3 to obtain the collection of polynomials p α,β = m j=1 (T α,β ) j x 2 j and generate the polynomials p n ∈ Z[x [0] , . . 
., x [n] ]. It is obvious that p_{T_n} = p_n for all n, and from Lemma 41 we thus know that T_n is nonnegative if and only if p_n is a sum of squares/nonnegative. So decidability of Problem 50 (a) or (b) contradicts Theorem 52. □

We remark that Problem 50 remains undecidable if the input polynomials are in Q[x], since multiplying all polynomials by a positive constant does not change the positivity/sos property. It can also be shown that a bounded version of the questions in Problem 50 (i.e., where n is fixed) results in an NP-hard problem [16].

7. Conclusions and Outlook

In summary, we have defined and studied several decompositions of multivariate polynomials into local polynomials, each containing only a subset of the variables. The variables are divided into blocks, and each local polynomial uses only one block. We describe a decomposition with a weighted simplicial complex Ω, whose vertices describe the individual blocks and whose facets describe the summation indices. For polynomials invariant under the permutation of blocks of variables, we have defined and studied an invariant decomposition. We have also defined invariant decompositions with local positivity conditions, specifically with the separable and the sum-of-squares condition. Our approach is inspired by the tensor network approach from quantum information theory; in particular, the framework of this work was previously applied to tensor decompositions [8] and studied in the approximate case in Ref. [9]. Specifically, we have defined invariant polynomial decompositions (Definition 9) and shown that every G-invariant polynomial admits an (Ω, G)-decomposition if G acts freely on Ω (Theorem 15), and that every group action can be made free by increasing the number of summation indices (Proposition 8). Moreover, if G is a blending group action, every G-invariant polynomial can be written as a difference of two (Ω, G)-decompositions (Theorem 20). We have also defined the separable (Ω, G)-decomposition (Definition 23) and the sum-of-squares (Ω, G)-decomposition (Definition 31), and have shown that they exist if G acts freely on Ω (Theorem 24 and Corollary 34, respectively).

In addition, we have shown that the (Ω, G)-rank of a polynomial can be upper bounded in terms of its separable and sos rank, and that the sos rank can often be upper bounded by its separable rank (Proposition 39). In the reverse direction such inequalities cannot exist, since there exists a sequence of polynomials with constant (Ω, G)-rank and a diverging sos or separable rank (Corollary 45). Yet, these separations are not robust with respect to approximations, due to the upper bound on the approximate separable invariant decomposition provided in Theorem 49. Finally, for decompositions on the circle with translational invariance, we have shown that it is undecidable whether the global polynomial is sos or nonnegative for all system sizes (Theorem 51). This work has left two "immediate" open questions: whether the rank separations also hold with respect to a bounded number of variables but unbounded degree (Conjecture 46), and whether there exist non-factorizable (Ω, G) structures (Question 38). A more general open question concerns the full characterization of the existence of invariant polynomial decompositions, as freeness of the group action only provides a sufficient condition. Our investigations indicate that it may also be necessary, but we were not able to prove it.
A very interesting question is: What is the border rank of an (Ω, G)-decomposition?The border rank provides a complementary notion of approximation than the one considered here, and shows surprising features for tensors (instead of matrices).The (Ω, G)-framework is an invitation to generalise this study to tensor decompositions on Ω, possibly with invariance. Our existence theorems work for a given system size n.What can be said about all system sizes?Namely, if a family of objects (such as tensors or polynomials) is invariant for each system size, does it admit a uniform invariant decomposition?The undecidability result of Theorem 51 suggests that this question is very different from the one addressed in this paper, but certainly very interesting. In this case F differs from F since [i] j as local polynomials at each site i. β are positive multiples of p [gi] j for g ∈ G. Lemma 29 ( Gram matrix of invariant sos polynomials).Let p ∈ P with deg loc (p) ≤ 2d.The following are equivalent: (i) p is sos and G-invariant.(ii) There exists an M ∈ M ⊗n+1 D which is positive semidefinite and G-invariant such that G(M ) = p.Proof.(ii) ⇒ (i).If there exists such an M , since it is positive semidefinite, it has a rank decomposition is a conic combination of at most d+m d n+1 elementary products with factors from the local cones by Carathéodory's Theorem (see for example [3, Theorem 2.3]).From the proof of Theorem 14, we have that sep-rank Ω (p) ≤ d + m d n+1 . j leads to a sos (Ω, G)-decomposition of p T with sos-rank (Ω,G) (p T ) ≤ |I|.□ Remark 44 (The importance of being free).The proof of Proposition 43 (iii) does not work in reverse direction if we do not assume that G acts freely on [n].Assume there exists e ̸ = g ∈ G and i ∈ [n] such that gi = i.Then, the construction into a symmetric factorization B [i] ] and[8, Section 5] for details)rank Λ 1 (M m ) = 3, psd-rank Λ 1 (M m ) = 2,and nn-rank Λ 1 (M m ) ≥ log 2 (m) since all explicit examples are given as a real matrix factorization.Defining p m := p Mm and using Proposition 43 shows the statement. | ≤ sup y∈R r ∥y∥ 2 ≤1 |y t M y| = σ max (M ) ≤ ∥M ∥ 2 where r = D n+1 (where D = m+d d ) and the first inequality follows from the first statement.□ Recall that a separable matrix M ∈ M d ⊗• • •⊗M d attains a separable (Ω, G)-decomposition if it can be written as M h d .Denote the set of (sub-)normalized separable matrices by SEP n,D := M ∈ M ⊗n+1 D : M is separable and tr(M ) ≤ 1 and define µ(p) := inf {λ > 0 : ∃M ∈ SEP n,D and G-invariant such that p = λG(M )} . Problem 50 ( Decision problem about positivity of polynomials).Given positive integers m and D and a collection of polynomials (p α,β ) D α,β=1 ∈ Z[x] (where x denotes a vector of m variables (x 1 , . . ., x m )), (a) Is p n a sum of squares for all n ∈ N? (b) Is p n nonnegative for all n ∈ N? Theorem 51 (Undecidability of Problem 50).Problem 50 (a) and Problem 50 (b) are undecidable.This is true even if m, D ≥ 7 and if the polynomials are of the form p α,β (x) = m j=1 p α,β,j • x 2 j with p α,β,j ∈ Z for all α, β ∈ {1, . . ., D}. is well-known (and easy to see) that for deg loc (p) ≤ 2 we have p ∈ C sos if and only if there exists a positive semidefinite M ∈ M 2 ⊗ M 2 with G(M ) = p.Further, p ∈ C sep if and only if there exists an M ∈ M 2 ⊗ M 2 such that
2021-09-15T01:16:05.682Z
2021-09-14T00:00:00.000
{ "year": 2021, "sha1": "59c623686b0230149404ebf5127dd03371e7cfbe", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.laa.2024.05.025", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "a17c164ce63f8007d45d0167c895b47f6a7de80a", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
232302623
pes2o/s2orc
v3-fos-license
Radiologist observations of computed tomography (CT) images predict treatment outcome in TB Portals, a real-world database of tuberculosis (TB) cases The TB Portals program provides a publicly accessible repository of TB case data containing multi-modal information such as case clinical characteristics, pathogen genomics, and radiomics. The real-world resource contains over 3400 TB cases, primarily drug resistant cases, and CT images with radiologist annotations are available for many of these cases. The breadth of data collected offers a patient-centric view into the etiology of the disease, including the temporal context of the available imaging information. Here, we analyze a cohort of new TB cases with available radiologist observations of CTs taken around the time of initial registration of the case into the database and with available follow up to treatment outcome of cured or died. Follow up ranged from 5 weeks to a little over 2 years, consistent with the longest treatment regimens for drug resistant TB, and cases were registered within the years 2008 to 2019. The radiologist observations were incorporated into machine learning pipelines to test various class balancing strategies on the performance of predictive models. The modeling results support that the radiologist observations are predictive of treatment outcome. Moreover, inferential statistical analysis identifies markers of TB disease spread as having an association with poor treatment outcome including presence of radiologist observations in both lungs, swollen lymph nodes, multiple cavities, and large cavities. While the initial results are promising, further data collection is needed to incorporate methods to mitigate potential confounding such as including additional model covariates or matching cohorts on covariates of interest (e.g. demographics, BMI, comorbidity, TB subtype, etc.). Nonetheless, the preliminary results highlight the utility of the resource for hypothesis generation and exploration of potential biomarkers of TB disease severity and support these additional data collection efforts. Introduction TB is a global pandemic resulting in approximately 9 million new cases and 1.5 million deaths each year [1]. The emergence of drug resistance, with up to ~20% of TB isolates globally estimated to be resistant to a major drug [2], threatens to exacerbate the pandemic, particularly through totally drug-resistant TB, which is now endemic in specific countries. Among cases that are not totally drug resistant, resistance varies from mono-resistance to a first-line drug to extensive drug resistance (XDR), defined as resistance to isoniazid and rifampin as well as any fluoroquinolone and one or more of three injectable second-line drugs (i.e., amikacin, kanamycin, or capreomycin). Drug resistance is associated with poorer outcomes and higher costs of care compared to drug sensitive TB, with treatment success at ~55% globally and multi- or extensively drug-resistant TB (M/XDR-TB) having a cost of care up to 25 times that of drug sensitive cases [3,4]. CT imaging is routinely collected during the management of TB to assess patient disease status [5]. Moreover, the use of mobile radiology can improve detection and screening of TB cases in harder to reach populations [6]. Such distributed approaches support distant diagnosis and remote monitoring of disease severity through the analysis of the resulting data via machine learning and other emerging approaches. 
Radiologist observations are the gold standard reference upon which CT images have historically been interpreted for clinical insights and actionable information [7]. These observations contain pertinent insights to inform patient risk and may have less of a barrier to interpretation than emerging approaches such as deep learning since they are often captured in a common, clinical vernacular. Prior research has demonstrated the utility of radiologist observations from images for assessing patient risk. For example, CT scans were predictive of treatment outcome [8,9], bilateral lung involvement in active TB showed higher risk of underlying diabetes mellitus [10], and pulmonary TB patients with chest CT findings of cavity, consolidation, bronchiectasis, upper lobe involvement, multiple lobe involvement, and lymphadenopathy indicated a higher risk for smear-positive TB [11]. The Office of Cyber Infrastructure and Computational Biology established the TB Portals program as an international collaboration to support TB data sharing and data science facilitating the biomedical research community's efforts to understand the real-world impact of TB [12]. The TB Portals program contains a publicly available repository of TB case data capturing multi-modal information such as case clinical characteristics, pathogen genomics, and radiomics that can provide a unique understanding of TB disease etiology over time. As of November 2020, the TB Portals resource contains over 3400 TB cases, primarily drug resistant cases, many of which contain associated CT images with radiologist annotations. While other clinical image resources exist with large numbers of images, TB Portals offers a patient-centric resource that captures the temporal context of each case associated with the CT images including drug resistance status of the case, the drugs administered, and the pathogen identified. External collaborators can request data access through a data use agreement (DUA) and download publicly shared data supporting reproducibility and open science. In this study, we sought to leverage the available radiologist observations for CT images in the TB Portals repository to assess their utility for predicting patient treatment outcome independent of other case characteristics or data modalities the resource provides. We examined the available radiologist observations from CTs close to the initial registration of the case into the database and identified the most important variables that are predictive of treatment outcome. We used the quarterly updated published data available to external collaborators from October 2020 to create a cohort of new cases of TB having the following inclusion and exclusion criteria: an initial annotated CT record within 60 days of the first sample recorded in the database, a treatment outcome of "cured" or "died", and follow up from CT record to treatment outcome greater than 0 weeks. This cohort was used for a retrospective, case-control study assessing the presence of various radiologist observations in relation to the risk of a treatment outcome of died. As we observed ~10% of treatment outcomes resulting in "died", we compared class balancing approaches to increase the representation of these clinically relevant cases and assessed their impact on the performance of the predictive models to detect this outcome. We also generated inferential statistics on the risk of outcome of death associated with these radiologist observations. 
Since the TB Portals constitutes real-world data, it can be difficult to decouple the risks from other underlying characteristics of the cases. Nonetheless, we believe that the findings from this study identify radiological signals that may indicate a problematic case or biomarkers that could inform clinical trial design as markers of disease severity. Moreover, these observations confirm prior findings showing the association of cavitary disease with poor treatment outcomes. Cohort selection New cases of TB with available CT images and treatment outcome of "cured" or "died" were identified in R using publicly available data from quarter 3 of 2020 that is available to external collaborators after signing a DUA (see Data Availability methods section). The external data files were downloaded via aspera service and loaded in R. The exclusion/inclusion criteria were applied using coding logic that can be found in the following GitHub repo (https://github.com/niaid/tbportals.ct.analysis.2020) as a drake workflow for reproducibility. We identified a cohort of 371 new cases of TB with an available CT annotation and a treatment outcome of "cured" or "died". Application of subsequent inclusion/exclusion criteria including first available CT with radiologist annotation, follow up to the ending of the treatment period of greater than 0 weeks, and CT date within 60 days of registration reduced the number of cases to 253. 228 cases had an outcome of cured and 25 cases had an outcome of died. Case characteristics were compared by treatment outcome in S1 Table. Data preprocessing for benchmarking model performance The cohort contained radiologist observations with either no variance between cured or died groups or only one or zero cases in a particular factor level within a comparison group as shown in S2 Table; therefore, we removed any annotations with limited variation or recoded covariates, incorporating the feedback from a TB disease expert, in order to increase statistical power within the subgroups. Specifically, the bodysite_coding_cd variable combined the Left lung and Right Lung categories into a One Lung category; the lungcavitysize variable combined the 10-25mm and Less than 10mm categories into an LTE 25mm category; the affectlevel variable combined Lower Lobus, Medium and Lower Lobbi, Upper and Lower Lobbi, and Upper and Medium Lobbi into a Lower or medium category; and the totalcavernum variable combined the 1 cavity and 2 cavities categories into an LTE 2 cavities category. The radiologist observations after initial preprocessing demonstrated statistically significant differences in observations between cases according to treatment outcome as shown in S3 Table. Moreover, correlations between covariates suggested associations that reflected clinical observation of disease severity and indicated potential predictive capability as seen in S2 Fig. Dropping or refactoring of variables was completed before running the rest of the data preprocessing steps, which were incorporated into an Mlr3 [13] pipeline for an unbiased assessment of the subsequent preprocessing steps on performance via 5-fold cross-validation. The subsequent preprocessing steps included selection of the top 5 features via mutual information, encoding of features to binary indicator format, removal of any zero variance encoded features within a cross-validation split, random sampling to replace any missing data, and standardization of factors that were missing levels in a particular split. 
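The inclusion/exclusion and recoding logic above is implemented as a drake workflow in the cited GitHub repository; the following is only a minimal sketch of those steps using dplyr. The data frame tb_cases and its column names (registration_date, ct_date, treatment_end_date, treatment_outcome) are assumptions for illustration and do not necessarily match the field names of the TB Portals export.

# Minimal sketch of the inclusion/exclusion criteria and covariate recoding.
# Column and object names are hypothetical; the authoritative logic is the
# drake workflow in the cited repository.
library(dplyr)

cohort <- tb_cases %>%
  filter(treatment_outcome %in% c("Cured", "Died")) %>%          # boundary outcomes only
  mutate(days_to_ct = as.numeric(ct_date - registration_date),
         followup_weeks = as.numeric(difftime(treatment_end_date, ct_date,
                                              units = "weeks"))) %>%
  filter(days_to_ct <= 60,                                        # CT within 60 days of registration
         followup_weeks > 0)                                      # follow up from CT > 0 weeks

# Recode sparse factor levels to increase statistical power within subgroups
cohort <- cohort %>%
  mutate(
    bodysite_coding_cd = recode(bodysite_coding_cd,
                                "Left lung" = "One Lung", "Right Lung" = "One Lung"),
    lungcavitysize = recode(lungcavitysize,
                            "10-25mm" = "LTE 25mm", "Less than 10mm" = "LTE 25mm"),
    totalcavernum = recode(totalcavernum,
                           "1 cavity" = "LTE 2 cavities", "2 cavities" = "LTE 2 cavities")
  )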
Class balancing involved the use of Mlr3's default class balancing, where the majority and minority classes were brought to an even proportion through a combination of upsampling and downsampling, or the SMOTE algorithm for synthetic generation of minority class examples. The Mlr3 implementation of the SMOTE algorithm uses numerical data and can lead to synthetic data having intermediate values between 0 and 1 for the set of binary features used. Synthetic data created by SMOTE were rounded to 0 or 1 to avoid data leakage where the model can learn to identify synthetic data and its connection to the outcome of died. Benchmarking model performance The Mlr3 R package was used to generate a pipeline of preprocessing steps and downstream machine learning algorithms for performance benchmarking. Data were split 75% and 25% into a training set used for benchmarking and a validation set used to assess prediction performance. For binary classification, pipelines were benchmarked with or without class balancing to increase the representation of the rarer class of "died", constituting only ~10% of cases. Binary classification model performance was compared to a featureless model that predicted the class with the most observations in the training split or a random selection in case of a tie. The selection of binary classifier models assessed included a featureless model (https://mlr3.mlr-org.com/reference/mlr_learners_classif.featureless.html), logistic regression (https://mlr3learners.mlr-org.com/reference/mlr_learners_classif.log_reg.html), weighted k-nearest neighbors (https://mlr3learners.mlr-org.com/reference/mlr_learners_classif.kknn.html), multinomial log-linear learner via neural networks (https://mlr3learners.mlr-org.com/reference/mlr_learners_classif.multinom.html), and random forest (https://mlr3learners.mlr-org.com/reference/mlr_learners_classif.ranger.html). For time-to-event benchmarking, the time variable of weeks from CT to treatment outcome was included to model the time to death of the right-censored data. Censoring of cured patients occurred at the treatment period end date. Three survival models were tested, including the Kaplan-Meier estimator (https://mlr3proba.mlr-org.com/reference/mlr_learners_surv.kaplan.html), Cox proportional hazards (https://mlr3proba.mlr-org.com/reference/mlr_learners_surv.coxph.html), and decision tree (https://mlr3proba.mlr-org.com/reference/mlr_learners_surv.rpart.html). Harrell's C-statistic was used to assess survival model performance and multiple measures were used to assess binary classifier performance. Calculation of inferential statistics Odds ratios and hazard ratios were calculated using all available data and the R package finalfit. To select the top 5 most important features for multivariate modeling, mutual information feature selection was applied on the entire dataset. All covariates were tested using univariate modeling while only the top 5 features were included in the multivariate models. Multiple imputation using 5 independent imputations and standard parameters in the mice R package was performed to generate the multiply imputed multivariate estimates. The proportional hazards assumption was tested using the cox.zph function and confirmed as shown in S4 Table. Of the top 5 features by mutual information, those showing collinearity via the variance inflation factor were dropped from the final multivariate model (e.g. total number of cavities and cavity size, which share a "No cavities" level that is perfectly correlated). 
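The benchmarking setup described above can be assembled from the mlr3 ecosystem packages the study links to (mlr3, mlr3learners, mlr3pipelines, mlr3proba). The sketch below is illustrative only: classif_data and surv_data are hypothetical prepared feature tables with assumed columns (treatment_outcome, weeks, event), only a subset of the learners is shown, and the study's feature selection, imputation and SMOTE-rounding steps, which live in the cited repository, are omitted.

# Sketch of benchmarking classification pipelines with and without class balancing,
# plus time-to-event benchmarking scored with Harrell's C. Data objects are hypothetical.
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)
library(mlr3proba)

task <- as_task_classif(classif_data, target = "treatment_outcome", positive = "Died")

base_learners <- list(
  lrn("classif.featureless", predict_type = "prob"),   # control: predicts the majority class
  lrn("classif.log_reg",     predict_type = "prob"),
  lrn("classif.ranger",      predict_type = "prob")
)

# Wrap each base learner in three pipelines: no balancing, class balancing, SMOTE
make_pipelines <- function(l) {
  list(
    as_learner(po("encode") %>>% l),
    as_learner(po("encode") %>>% po("classbalancing") %>>% l),  # up-/down-sampling to even proportions
    as_learner(po("encode") %>>% po("smote") %>>% l)            # synthetic minority examples
  )
}
learners <- unlist(lapply(base_learners, make_pipelines), recursive = FALSE)

design <- benchmark_grid(tasks = task, learners = learners,
                         resamplings = rsmp("cv", folds = 5))
bmr <- benchmark(design)
bmr$aggregate(msrs(c("classif.auc", "classif.bacc", "classif.mcc")))

# Time-to-event benchmarking: weeks from initial CT to treatment period end,
# with cured cases right-censored; performance assessed with Harrell's C
surv_task <- as_task_surv(surv_data, time = "weeks", event = "event")
surv_bmr <- benchmark(benchmark_grid(
  tasks = surv_task,
  learners = lrns(c("surv.kaplan", "surv.coxph", "surv.rpart")),
  resamplings = rsmp("cv", folds = 5)
))
surv_bmr$aggregate(msr("surv.cindex"))

Rounding the SMOTE output back to 0/1, as described in the study, would require an additional custom step between the SMOTE operator and the learner; it is not shown in this sketch.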
Kaplan-Meier curves Kaplan-Meier curves for covariates were generated using the survminer R package. All plots included a table of observations at each time point to reflect censoring and number of available cases at each time point. Survival probability is plotted along with the 95% confidence intervals. Data availability and code The TB portals requires all users of the data to abide by a DUA before access to the underlying clinical data is provided and the data can be requested at the following URL (https://tbportals.niaid.nih.gov/download-data). Therefore, this study provides the code to reproduce the analysis without the underlying raw data (https://github.com/niaid/tbportals.ct.analysis.2020) in compliance with the DUA. To rerun the analysis, interested parties can request data access by completing the DUA and then place the downloaded clinical data files in the subdirectory of the data folder as described in the GitHub repo instructions. To add reproducibility, the list of patient and condition identifiers is provided in S5 Table so that those interested in assessing the specific cohort are able to do so after completion of the required DUA irrespective of future growth in the database. Inferential statistics associated with poor outcome The main objective of the study was to understand whether radiologist observations of CT images within TB Portals, independent of other data connected with the case, contained any features associated with risk of poor treatment outcome. Our analysis identified statistically significant risk factors associated with poor outcome. We modeled the radiologist observations by both univariate and multivariate logistic regression focusing on the top 5 important features by mutual information with the treatment outcome. Cavity size and number of cavities were selected by mutual information, which was interesting as cavities were associated with established disease and disease severity [14,15] and cavitary disease has been reported previously to be associated with poor treatment outcome in clinical trials [8,9]. Nonetheless, both cavitary features showed a significant level of collinearity that can adversely affect modeling. We dropped total number of cavities from the multivariate model to prevent the observed collinearity from affecting estimates. Some radiologist observations contained missing values, so we generated multiply imputed data for multivariate modeling to assess the impact on estimates. Odds ratio estimates for univariate, multivariate, and multiply imputed multivariate logistic regression are shown (Table 1). In general, cases with observations indicating TB disease spread showed higher odds of a treatment outcome of died compared to cases without these observations. These biomarkers of disease spread included whether observations were present in both lungs (bodysite_coding_cd), presence of swollen lymph nodes (limfoadenopatia), and whether large cavities were observed greater than 25mm in size (lungcavitysize). To incorporate the temporal aspects of each CT with treatment outcome, we also generated hazard ratio estimates using Cox regression and noted similar findings to risks identified in the logistic regression models (Table 2). Observations associated with disease spread and activity showed higher hazard ratios for a treatment outcome of died. These included whether observations were present in both lungs and whether large cavities were observed greater than 25mm in size. 
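The inferential workflow just described (finalfit for odds ratio tables, Cox regression with a proportional-hazards check, mice for multiple imputation, and survminer for Kaplan-Meier curves) can be sketched compactly as follows. The data frame cohort, the factor outcome_died, the follow-up time weeks, the event indicator event, and the restriction to three explanatory variables are all assumptions made only for illustration.

# Sketch of the inferential modelling; object and column names are hypothetical.
library(survival)
library(survminer)
library(finalfit)
library(mice)

explanatory <- c("bodysite_coding_cd", "limfoadenopatia", "lungcavitysize")

# Univariable and multivariable odds ratios in one table (logistic regression)
or_table <- finalfit(cohort, dependent = "outcome_died", explanatory = explanatory)

# Cox proportional hazards model for time from CT to treatment end;
# cured cases are censored at the treatment period end date
cox_fit <- coxph(Surv(weeks, event) ~ bodysite_coding_cd + limfoadenopatia + lungcavitysize,
                 data = cohort)
summary(cox_fit)     # hazard ratios with confidence intervals
cox.zph(cox_fit)     # test of the proportional hazards assumption

# Multiple imputation (5 imputations) and pooled multivariable logistic estimates
imp <- mice(cohort[, c("outcome_died", explanatory)], m = 5, printFlag = FALSE)
pooled <- pool(with(imp, glm(outcome_died ~ bodysite_coding_cd + limfoadenopatia +
                               lungcavitysize, family = binomial)))
summary(pooled, conf.int = TRUE)   # exponentiate the estimates to read them as odds ratios

# Kaplan-Meier curve for one covariate, with risk table and 95% confidence bands
km_fit <- survfit(Surv(weeks, event) ~ bodysite_coding_cd, data = cohort)
ggsurvplot(km_fit, data = cohort, risk.table = TRUE, conf.int = TRUE, pval = TRUE)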
We noted statistically significant univariate or multiple imputation multivariate risks for lungcavitysize; however, the multivariate model risk did not show statistical significance. The multivariate model uses complete case information for the set of records and variables. Due to combinations of missingness across variables, this decreases the available number of complete cases, leading to the observed differences in statistical significance for the lungcavitysize variable. Multiple imputation suggests that lungcavitysize would show significant differences controlling for the other features included in the multivariate model, but only additional data collection will be able to confirm this in complete cases. To assess survival probabilities over time, we plotted Kaplan-Meier curves for covariates in which statistically significant differences in odds ratios or hazard ratios were noted, such as presence of radiologist annotations in both lungs, presence of swollen lymph nodes, and lung cavity size. Radiologist observations associated with both lungs showed statistically significant differences in the survival curves by log-rank test (Fig 1). The probability of survival for those cases with radiologist observations involving both lungs was lower than for those cases with radiologist observations in one lung. Observation of swollen lymph nodes also showed statistically significant differences in the survival curves, with cases involving swollen lymph nodes having a decreased probability of survival (Fig 2). Lastly, we observed a larger decrease in survival probability over time in the KM curves where the radiologist noted large cavities greater than 25mm in size (Fig 3), although the 95% confidence intervals overlapped, suggesting that additional data collection is warranted to increase confidence. Altogether, the KM survival curves support the findings from the inferential estimates of the logistic and Cox regression models. Biomarkers of disease spread and activity are associated with statistically significant decreased survival probability over time in this cohort. Assessing predictive performance of machine learning models A major goal of the TB Portals program is to improve the underlying data as well as assess analytical approaches that advance knowledge of TB. Mlr3 is an ecosystem of R packages that provide flexible pipelines for a mix and match approach to machine learning, similar to the scikit-learn module in Python. This philosophy fit our approach as data wrangling steps were done in R and we needed to compare various preprocessing steps for their effect on model performance in an unbiased manner. Furthermore, Mlr3 provides a featureless classifier that only predicts the majority class and reflects a model with limited utility as a control. Of the 253 cases, 25 had the outcome of "died" and we sought to predict this rarer, clinically relevant outcome. We hypothesized that class balancing would improve model performance and tested this hypothesis by comparing model performance of binary classifiers with and without class balancing by five-fold cross-validation (Table 3). While pipelines without class balancing had higher overall accuracy and sensitivity (optimization led to predictions similar to a featureless model where only the cured outcome is predicted), class balancing improved model performance in detecting the clinically relevant outcome of died. 
One can observe the increased performance after incorporation of class balancing or SMOTE preprocessing steps through the relative stability of the AUC metric with concordant increases in the balanced accuracy, Matthews correlation coefficient, and specificity. Since the relative dates of the initial CTs with available radiologist observations and the treatment end dates associated with each treatment outcome were available from the TB portals data, we modeled the time-to-event from the initial CT to the treatment end date and assessed model performance by Harrell's C metric. To do this, the number of weeks from the initial CT with radiologist annotation to treatment period end was calculated and a variety of time-to-event algorithms benchmarked using Mlr3 (Table 4). The Cox proportional hazards and tree-based time-to-event models demonstrated better performance compared to the Kaplan-Meier (KM) model, which estimates the survival probability over time of at-risk cases. The Kaplan-Meier curve can be considered a control for model performance, where predictive models should perform better than the 0.5 Harrell's C score of the KM model. The benchmarks from both binary classification and time-to-event analysis establish that the CT annotations contain features that can predict treatment outcome better than control models for the training set and suggest that such predictive performance might translate to unobserved data with similar features. To assess whether the observed training performance translates to performance on similar unobserved data, we held out 25% of the data as the validation set. Mlr3 facilitates nested resampling strategies used in the benchmarking, which should provide an accurate estimate of model performance including preprocessing such as class balancing. The validation set was used to test this theory in practice. Binary classification models trained on the entire 75% of the training dataset used for benchmarking were used for prediction on the 25% held-out validation data. Model predictions were assessed using the same metrics as for training benchmarking (Table 5). The validation model metrics fall within the ranges observed in training, indicating that benchmarking identified performance estimates indicative of actual performance on unobserved data. Class balancing provided improvements to the detection and prediction of an outcome of died either through Mlr3's default class balancing approach or the use of the SMOTE algorithm. For survival models, model performance on the validation set also showed Harrell's C scores falling within estimate ranges from the benchmarking results on the training data (Table 6). Altogether, the validation and benchmarking results establish that CT annotations from TB portals are predictive of treatment outcomes and set a reference upon which models incorporating these features can be improved henceforth. Nonetheless, these findings need to be considered hypothesis-generating rather than suggesting that actionable steps be taken clinically for patients meeting these criteria, given that other observed or unobserved factors could be contributing to the findings. Discussion CT imaging of the lung provides an important modality for identifying biomarkers of TB severity and progression as pulmonary abnormalities are a common disease manifestation [16]. 
The TB portals provides CT images of lungs associated with TB cases along with patient-centric, temporal information that helps to put the image in the context of the real-world clinical journey. We leveraged TB portals data to identify CT images with associated radiologist observations to assess the utility of these observations, independently of other case attributes, in relation to the risk of poor treatment outcome. While CT images and radiologist observations have been used previously to assess patient treatment outcome [8,9] or response [17], the analysis was done in the context of a clinical trial or study that was limited by the available sample size. The use of real-world data sources such as TB portals can facilitate exploration of the predictors of poor treatment outcome in real-world settings and additional questions can be addressed as more cases are added over time. Real-world evidence can inform clinical practice by exploring the potential of lung CT images as clinical end points or markers of disease severity. Here we demonstrate that for new TB cases, the radiologist observations associated with CT images taken within 60 days of the initial case registration into the database contain statistically significant risk factors associated with poor treatment outcome. We chose to analyze cured versus died outcomes because they represent the boundaries of the available treatment end points deemed beneficial or adverse from a clinical perspective. We reasoned that such edge cases may contain the greatest differences in radiological signatures with which to assess machine learning models. Nonetheless, this approach has a limitation in that it cannot be used to predict intermediary treatment end points such as failure that fall between the two extremes. Moreover, given differences in treatment efficiencies due to the recommended treatment plans for sensitive or various drug resistant TB subtypes, it would have been beneficial to model the TB subtype as well, but the numbers of available cases with the relevant treatment outcomes did not support this approach. Therefore, this analysis did not incorporate TB subtype differences within the modeling, meaning it is possible that certain aspects of the temporal response to treatment may be affecting model estimates (especially in the time-to-event models). We attempt to limit this by selecting CTs taken around the time of registration to decrease the potential for treatment to have affected the CT observations. 

Table 5. Various model performance metrics such as classification accuracy, AUC, balanced accuracy (bacc), Brier score (bbrier), Matthews correlation coefficient (mcc), sensitivity, and specificity are shown after model prediction on the 25% held-out validation data set. Preprocessing refers to whether the pipeline included a class balancing step, SMOTE, or no class balancing. Base learner refers to the type of machine learning model used in the pipeline including featureless (only predict most abundant class or random class in case of a tie), log reg (logistic regression), multinom (multinomial log-linear learner via neural networks), ranger (random forest), or kknn (weighted k-nearest neighbor). Metrics of performance are calculated at a probability threshold of 0.5 for determining cured versus died outcome. https://doi.org/10.1371/journal.pone.0247906.t005
We also observe that the proportions of TB cases by subtype do not differ statistically significantly between those with a treatment outcome of died and those with a treatment outcome of cured, suggesting that the impacts would be modest. As more data are collected, increasing the number of cases with the outcomes of interest, it would be interesting to include the subtype of TB as a random effect, for instance. While previous analyses have leveraged TB Portals data to predict treatment outcome using machine learning approaches [18,19], they predicted multiple treatment outcomes that may be challenging for machine learning approaches to delineate (e.g. cured versus failure versus died). Moreover, previous approaches leveraged the entire TB portals case record, which includes information that is not available at clinically relevant time points such as around the time of the initial diagnosis. The number of CT images or X-ray images taken over the course of the case is an example of information which is only known at the end of treatment. Models generated using all case characteristics may identify such variables as important despite these being of limited clinical utility. For example, poor treatment outcome may be associated with a greater number of medical images simply due to the desire of clinicians to monitor disease progression and treatment response especially in the riskiest cases. Models incorporating these variables may miss other salient variables of clinical relevance. Lastly, prior attempts at analyzing TB portals data do not account for the class imbalance that can arise, despite this being a common issue when predicting biological outcomes. We observed class imbalance in our analysis as ~10% of selected cases had an outcome of died. This imbalance can adversely affect machine learning algorithms as optimization may select a model that defaults to predicting the most represented class in order to maximize the objective function [20]. Many approaches have been developed, including machine learning algorithms that can handle class imbalance, sampling techniques to increase the representation of the rarer observations, and techniques that put a higher cost on misclassification of the class of interest. We address the impact of class imbalance by leveraging Mlr3 approaches for handling class imbalance that can be wrapped in a machine learning pipeline for an unbiased assessment of model performance. We observe that not accounting for class balancing led to a high classification accuracy albeit with little difference in performance compared to the featureless model that only predicts the majority class. Such a model would be of limited clinical utility in that less represented outcomes would often be missed. Class balancing is one approach to address this and increase the performance of machine learning models for predicting these rarer, clinically relevant outcomes. We focus on using radiologist observations of chest CT images at a clinically important time point (close to initial registration of the case into the database) independent of other case characteristics to assess the data's utility. By focusing on the initial time around registration for new cases with a treatment outcome of cured or died and accounting for class imbalance, we show that radiologist observations are predictive of treatment outcome within the cohort. 
We identify markers of disease progression and severity including involvement of both lungs, swollen lymph nodes, multiple cavities, and large cavities which are associated with active TB and demonstrate higher risk of poor treatment outcome via inferential statistics. Cavitation in particular has been shown to be associated with a higher baseline load of MTB bacteria [21] and poorer treatment response [8,9,17,22]. As TB portals collects real-world data, we cannot rule out confounding issues such as selection of new cases that were caught later in disease progression, observed differences amongst the subgroups (e.g. drug resistance subtype mentioned prior), and other unobserved variables that may explain these risk profiles. For instance, radiologists independently review CT images by country site and there could be differences in how each approaches the annotations. Nevertheless, for this analysis the majority of the observations were from Belarus suggesting such impacts would be minimal. Collecting additional data to control for these differences by including them in our models as additional covariates or using matching techniques to ensure similar cases characteristics are potential approaches to mitigate potential confounding. Our initial results offer a rationale for these additional data collection efforts given the promising signals we detected amongst the identified outcomes. Lastly, deep learning and artificial intelligence (AI) are being used extensively for medical image processing to label and annotate features for diagnostic and prognostic purposes. For example, AI approaches have recently been reported to exceed the capability of a radiologist for distinguishing TB from non-TB using chest radiographs. Nevertheless, radiologist observations of medical images are considered the "gold-standard" reference upon which to support AI development [7]. The TB portals database contains reference data that can help to advance AI by providing a radiologist evaluated ground-truth for comparison. By analyzing radiologist observations, we identify potential lung biomarkers that could be considered priorities for automated identification by AI since these biomarkers are most associated with treatment outcome in our cohort. AI could then generate automated features upon which machine learning methods can be applied, risk scores developed, or manual annotations compared. We are cautiously optimistic about the potential of these real-world biomarkers given our best knowledge of the case although we acknowledge the potential impact of other measured or unmeasured variables. Collecting more data that can increase our understanding of the case may be able to improve our confidence. For instance, if we collected medical history at registration, we may be able to better characterize a new case removing any patients with a long history of respiratory symptoms, which suggests significant progression of disease or perhaps a repeat case. The TB portals program is a community resource and is open to collaboration and feedback from researchers to improve the data, tools, and services provided. N = 202). Cases from the cohort that are not missing any features of interest were compared for correlations between covariates and the dependent variable (event). Positive correlations are shown in blue and negative correlations in red. 
The correlations between event and covariates indicate associations that follow clinical manifestation of disease such as involvement of both lungs, cavity size, number of cavities, and presence of swollen lymph nodes. Glossary: Affectlevel-location of affected lung area; affectpleura-changes in the pleura; bodysite_coding_cd-which lung is the observation located; bronchialobstruction-bronchial obstruction syndrome disorders, dissemination-Diffuse pulmonary nodules detected; limfoadenopatia-greater than 10 mm is considered the upper limit for normal nodes (short transverse diameter); lungcapacitydecrease -reduced lung volumes; lungcavitysize-size of lung cavity; nodalcalcinatum-Nodi Calcinatum detected; plevritis-pleural effusion detected; pneumothorax-Pneumothorax detected; posttbresiduals-Post-tuberculosis changes in the lung; processprevalence-prevalence of process in number of segments; totalcavernum-number of cavities; thromboembolismpulmonaryartery-Thromboembolism Of The Pulmonary Artery detected; anomalymediastinumvesselsdevelop-Anomaly Of Mediastinum Vessels Develop detected; shadowpattern-shadowpattern of nodule, node, or infiltrate; affectedsegments-segments of lung that are affected; accumulationcontrast-amount of contrast accumulated. (TIF) S1 Table. Case characteristics of the cohort (N = 253). Case characteristics were compared by treatment outcome. P-values were calculated for continuous variables (age_of_onset and bmi) using analysis of variance test. P-values for categorical variables (registration_date, gender, country, and type_of_resistance) were calculated using Chi-squared test. (XLSX) S2 Table. Comparison of radiologist observations prior to preprocessing. Radiologist observations prior to preprocessing for machine learning were compared by treatment outcome. P-values were calculated for continuous variables using analysis of variance test. P-values for categorical variables were calculated using Chi-squared test. The following variables were dropped from further analysis: Anomalymediastinumvesselsdevelop, shadowpattern, affectlevel, thromboembolismpulmonaryartery, anomalylungdevelop, and accumulationcontrast. The following variables were refactored (S3 Table) to recombine levels: Lungcavitysize, affectlevel, totalcavernum. Glossary: Affectlevel-location of affected lung area; affectpleurachanges in the pleura; bodysite_coding_cd-which lung is the observation located; bronchialobstruction-bronchial obstruction syndrome disorders, dissemination-Diffuse pulmonary nodules detected; limfoadenopatia-greater than 10 mm is considered the upper limit for normal nodes (short transverse diameter); lungcapacitydecrease-reduced lung volumes; lungcavitysize-size of lung cavity; nodalcalcinatum-Nodi Calcinatum detected; plevritis-pleural effusion detected; pneumothorax-Pneumothorax detected; posttbresiduals-Post-tuberculosis changes in the lung; processprevalence-prevalence of process in number of segments; totalcavernum-number of cavities; thromboembolismpulmonaryartery-Thromboembolism Of The Pulmonary Artery detected; anomalymediastinumvesselsdevelop-Anomaly Of Mediastinum Vessels Develop detected; shadowpattern-shadowpattern of nodule, node, or infiltrate; affectedsegments-segments of lung that are affected; accumulationcontrast-amount of contrast accumulated. (XLSX) S3 Table. CT radiologist annotations observed in the cohort after preprocessing. Radiologist annotations were compared by treatment outcome. 
P-values for categorical variables were calculated using Chi-squared test. Glossary: Affectlevel-location of affected lung area; affectpleura-changes in the pleura; bodysite_coding_cd-which lung is the observation located; bronchialobstruction-bronchial obstruction syndrome disorders, dissemination-Diffuse pulmonary nodules detected; limfoadenopatia-greater than 10 mm is considered the upper limit for normal nodes (short transverse diameter); lungcapacitydecrease-reduced lung volumes; lungcavitysize-size of lung cavity; nodalcalcinatum-Nodi Calcinatum detected; plevritis-pleural effusion detected; pneumothorax-Pneumothorax detected; posttbresiduals-Post-tuberculosis changes in the lung; processprevalence-prevalence of process in number of segments; totalcavernum-number of cavities; thromboembolismpulmonaryartery-Thromboembolism Of The Pulmonary Artery detected; anomalymediastinumvesselsdevelop-Anomaly Of Mediastinum Vessels Develop detected; shadowpattern-shadowpattern of nodule, node, or infiltrate; affectedsegments-segments of lung that are affected; accumulationcontrast-amount of contrast accumulated. (XLSX) S5 Table. Patient and condition ids for the cohort used for this analysis. A table of patient and condition ids is provided for the de-identified records that were used for this analysis. (XLSX)
2021-03-22T17:18:32.876Z
2021-03-17T00:00:00.000
{ "year": 2021, "sha1": "2555d5fc914da78d4ec4fd6032006a1b2052fcef", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0247906&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a7ce2efec01c624d63da66ef5f0a7d8e7a93568e", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
18879794
pes2o/s2orc
v3-fos-license
Clinical review: Thyroid hormone replacement in children after cardiac surgery – is it worth a try? Cardiac surgery using cardiopulmonary bypass produces a generalized systemic inflammatory response, resulting in increased postoperative morbidity and mortality. Under these circumstances, a typical pattern of thyroid abnormalities is seen in the absence of primary disease, defined as sick euthyroid syndrome (SES). The presence of postoperative SES mainly in small children and neonates exposed to long bypass times, and the pharmacological profile of thyroid hormones and their effects on cardiovascular physiology, make supplementation therapy an attractive treatment option to improve postoperative morbidity and mortality. Many studies have been performed with conflicting results. In this article, we review the important literature on the development of SES in paediatric postoperative cardiac patients, analyse the existing information on thyroid hormone replacement therapy in this patient group and try to summarize the findings for a recommendation. Introduction During systemic illness, especially after cardiac surgery using cardiopulmonary bypass (CBP), abnormalities in the circulating thyroid hormone levels are found in the absence of primary thyroid disease; this is collectively called the sick euthyroid syndrome (SES). Some argue that it is unclear whether the clinical picture of SES is an adaptive process or a marker of the severity of illness, or even whether treatment is warranted in these patients. The many effects of thyroid hormones on the cardiovascular system have been described in detail elsewhere [1][2][3]. The biological actions of thyroid hormones on the cardiovascular system make these hormones attractive as a potential treatment option in the management of patients after cardiac surgery. We review the current literature on the development of SES in children after cardiac surgery and discuss the relevant literature on hormone replacement. Finally, a critical appraisal of the potential effects of hormone replacement and the studies performed is sought. Sick euthyroid syndrome It is well known that several severe diseases can cause abnormalities in the circulating thyroid hormone levels in the absence of primary thyroid disease (i.e., non-thyroidal illness or SES) [4]. The most common pattern is a decrease in total and unbound triiodothyronine (T3) with normal levels of thyroid stimulating hormone (TSH) and thyroxine (T4). This is classified as SES type 1 (SES-1) or low-T3 syndrome. The deiodination of T4 to T3 via peripheral (hepatic) enzymes (inhibition of 5′-deiodinase, a selenoenzyme [5,6]) is impaired, leading to a decrease in T3 and an increase in reverse T3, which is biologically inactive [7]. Inflammatory cytokines have been linked to the development of SES [8] and the levels of cytokines seem to influence the severity of SES [9,10]. Elevated serum levels of steroids as part of a stress response may influence the deiodinase activity and TSH and T3 response in SES [8,[11][12][13]. Additionally, tissue-specific thyroid hormone bioactivity is reduced during cellular hypoxia and contributes to the low-T3 syndrome of severe illness [14]. In general, the severity of illness is correlated with the severity of SES [15][16][17]. Very sick patients may show a dramatic fall in total T3 and T4 levels; this state is called the low-T4 syndrome or SES type 2 (SES-2) and has a poor prognosis [18,19]. T4 metabolism may further be influenced by a decrease in thyroid binding globulin levels [20]. 
Patients with low or undetectable TSH show increased morbidity and mortality [15,21,22]. Additionally, the response of TSH to thyrotropin-releasing hormone (TRH) is impaired in SES [23]. Prognostic impact of thyroid hormones on outcome In addition to the results discussed above, SES does have a significant impact on outcome and survival. In 1995, Rothwell and Lawler [24] used thyroid hormone levels to predict outcome in adult intensive care patients and showed that an endocrine prognostic index based on intensive care unit admission measurements of these hormone levels is a better discriminator of patient outcome than the APACHE II score. Similar results were obtained earlier [25], as well as by Jarek and colleagues in 1993 [26] and Koh and colleagues in 1996 [27], and were confirmed by Chinga-Alayo and colleagues in 2005 [28]. In their study with 113 patients, the addition of thyroid hormone levels to the APACHE score improved the prediction of mortality [28]. Similar results were reported by Iervasi and colleagues [29], who assessed prospectively the role of thyroid hormones in the prognosis of patients with heart disease. In their cohort of 573 consecutive patients, low levels of free T3 were found to be the strongest independent predictor of death, especially in cardiac patients. Parle and colleagues [30] presented a large 10-year follow-up cohort study of 1,191 patients and were able to correlate a single measurement of low TSH in individuals aged 60 years and older with increased mortality from all causes and in particular mortality due to circulatory and cardiovascular diseases. Thus, the degree of SES seems to have significant influence on a patient's outcome under various conditions. In summary, there is significant evidence that SES plays an important role in children in various conditions; whereas SES-1 is related to good outcome and mild to moderate illness, SES-2 is related to severe illness and poor outcome. Cardiac operations and the systemic inflammatory response in children It is well known that cardiac surgery and CBP lead to a generalized systemic inflammatory response syndrome (SIRS), resulting in increased postoperative morbidity and mortality and organ failure [46,47]. Some of the main clinical features of postoperative SIRS are hemodynamic impairment, known as low cardiac output syndrome, capillary leak and fluid retention. SIRS is characterized by increased postoperative leucocyte counts, leucocyte activation, oxidative stress and release of cytokines such as tumor necrosis factor alpha and IL-6 and IL-8. Various pharmacological techniques are used to modify or minimize this response, including the use of high dose steroids [48]. Other techniques applied routinely are hypothermia, the use of heparin bonded circuits and oxygenators, intraoperative continuous hemofiltration or conventional ultrafiltration, postoperative modified ultrafiltration, leucocyte filtration, and the postoperative use of peritoneal dialysis to remove inflammatory cytokines and limit their impact on postoperative fluid balance [48][49][50]. Finally, catecholamines (namely dopamine) and other drugs such as milrinone are used to support the circulation in low cardiac output syndrome [51]. In summary, CBP-induced SIRS combines many risk factors contributing to the development of SES as outlined above and has a significant impact on the postoperative course in paediatric patients. 
Paediatric sick euthyroid syndrome after cardiac surgery Cardiac surgery with or without cardiopulmonary bypass induces a marked and persistent depression of circulating thyroid hormones during the postoperative period in both adults and children [52][53][54][55][56][57]. Allen and colleagues [58] demonstrated SES in 12 postoperative cardiac children in 1989 regardless of the procedure complexity. Bartkowski and colleagues [54] showed that when a larger amount of T3 is removed by ultrafiltration, patients show a prolonged recovery. Murzi and colleagues [59] demonstrated in 14 patients a prolonged decrease in thyroid hormones for five to seven days. Belgorosky and colleagues [60] demonstrated similar effects in 20 prepubertal children undergoing cardiac surgery. Saatvedt and Lindberg [61] demonstrated a significant inverse correlation between T3 levels 24 and 48 hours postoperatively and total accumulated IL-6, and also between the percentage decrease in T3 concentrations and total accumulated IL-6. Bettendorf and colleagues [53] showed in 139 patients a significant decrease in plasma thyroid hormone levels consistent with SES-2 and low TSH levels. In those patients with plasma T3 levels less than 0.6 nmol/l (n = 52), the period of mechanical ventilation and intensive care treatment was significantly prolonged. Neonates exposed to bypass and hypothermia uniformly show a pattern of SES-2 [62]; prolonged SES was demonstrated in older patients after a Fontan procedure [63]. The magnitude of the fall in serum T3 predicts greater therapeutic requirements in the postoperative period, especially in neonates [64]. Lynch and colleagues [65] reported five cases of hypothyroidism possibly secondary to loss of thyroid binding globulin from prolonged chest tube drainage. Peak serum levels of IL-6 were linked to the lowest T3 levels in 16 children after cardiac surgery [66]; the authors of this study postulated that treatments directed at diminishing the rise in pro-inflammatory cytokines may prove effective in preventing postoperative SES. Ririe and colleagues [67] found no significant impact of deep hypothermic cardiocirculatory arrest on free T4, free T3 and TSH levels in children on days 1 and 2 after corrective surgery, although it did lead to an increase in TSH while on bypass. The concentration of plasma selenium in children undergoing cardiopulmonary bypass decreases significantly, resulting in diminished deiodinase activity and a subsequent reduction in the conversion of T4 to T3 [68]. Free T3 and selenium serum concentrations were correlated with the time spent in intensive care. Mitchell and colleagues [69] showed a correlation between low T3 and T4 levels and survival in 10 infants of less than 5 kg body weight. In the two patients that died in this small series, no increase in T3 and T4 or TSH was found after a trough was reached at 48 to 72 hours after surgery. Plumpton and Haas finally demonstrated that younger children (less than three months of age) with longer CBP time (greater than 120 minutes) showed prolonged ventilation after CBP and lower free T3 levels [52] and concluded that thyroid hormone replacement therapy in this high-risk group is warranted. In conclusion, all children undergoing cardiac surgery with or without cardiopulmonary bypass show a persistent pattern of SES; in many patients, SES-2 with low T3 and T4 levels and a low TSH status is demonstrated, and there is a close correlation between the age of the patients, bypass time, postoperative morbidity and the degree of SES [58]. 
The profound decrease in thyroid hormones is thought to be of sufficient magnitude to affect cardiac function [70]. Other confounding factors Dopamine and thyroid function Dopamine is often used for treatment of low cardiac output syndrome. Dopamine directly inhibits anterior pituitary function through inhibitory dopamine receptors, resulting in diminished TSH release [71]. The intravenous administration of dopamine in healthy volunteers produced a reduction in serum prolactin, TSH, luteinizing hormone and follicle stimulating hormone while stimulating growth hormone release; TSH showed a sustained inhibition [72]. Additionally, dopamine lowers both basal and TRH-mediated TSH release [73]. This effect was even more sustained in patients with critical illness [74]. The dopamine-induced or aggravated pituitary dysfunction in critical illness warrants caution with prolonged infusion of this catecholamine, particularly in early life [75]. The administration of dopamine was correlated with the permanent suppression of TSH in children with meningococcal shock presenting with severe SES-2 [37]. In newborns, dopamine was found to suppress prolactin, growth hormone, and thyrotropin secretion consistently, and in children, dopamine suppressed prolactin and thyrotropin secretion, and a rebound release started within 20 minutes after dopamine withdrawal [45]. Thus, dopamine infusion induces or aggravates partial hypopituitarism and SES in critically ill infants and children. Iodinated antiseptics in cardiac surgery Infants may absorb significant quantities of iodine from iodinated topical antiseptics transcutaneously [76,77]. Premature and pre-term infants have been shown to absorb iodine when treated repeatedly with antiseptics such as povidone-iodine [78][79][80][81]; this patient group is specifically susceptible to iodine-induced hypothyroidism [82], the so-called Wolff-Chaikoff effect [83]. This effect is detectable when compared with non-iodine skin disinfectants (chlorhexidine) [84]. Children with delayed sternal closure exposed to povidone-iodine for sternal wound protection display a more profound thyroid depression in the immediate postoperative period and significant iodine absorption [85]. In only one study did irrigation with povidone-iodine solutions for deep sternal wound infection not cause significant alteration in thyroid function in children [86]. Amiodarone Amiodarone is a highly effective antiarrhythmic agent for supraventricular and ventricular arrhythmias, especially in the early postoperative setting [87]. The drug is known to affect thyroid homeostasis [88] by competitive inhibition of 5′-monodeiodinase, which converts T4 to T3 and reverse T3 to 3,3′-diiodothyronine (T2), and also by the direct effects of its high iodine content (37% by weight) [89]; it is also structurally similar to the thyroid hormones [90]. The incidence of thyroid dysfunction in children is well reported [91] and hypothyroidism as well as hyperthyroidism are reported with varying incidence rates, ranging from about 1% up to 24% [92][93][94][95][96]. The incidence and severity of side effects seem to be correlated with age and the dose used, with younger patients exposed to higher doses at increased risk [96,97]. Thus, the use of amiodarone in the early postoperative setting may contribute to the development of thyroid dysfunction, including SES. 
Thyroid hormone replacement after cardiac surgery The rationale of thyroid hormone replacement/treatment A vast literature is available on the changes of thyroid function during non-thyroidal illness or SES in adults. Therapy with T3 has been suggested by many authors but is controversial. In SES-1 and SES-2, additional tissue-specific mechanisms are involved in the reduced supply of bioactive thyroid hormone and replacement of T3 can reverse these findings [98,99]. T3 administration is associated with improved hemodynamics, reduced peripheral vascular resistance, increased cardiac output and other effects, suggesting the potential utility of thyroid hormone replacement [100,101]. In patients after coronary artery bypass graft (CABG) surgery, an inverse correlation was found between days of post-operative hospitalisation and the slope of the recovery of T4 to T3 conversion [102]. Recently, Kokkonen and colleagues [103] demonstrated a strong association between atrial fibrillation and low-T3 status. T3 replacement was shown to reduce the rate of arrhythmias and may be cardio-protective [104]. Novitzky and colleagues [105,106] performed two smaller randomised studies in 1989 using T3 supplementation and showed a significantly reduced need for conventional inotropic agents and diuretics, as well as improved stroke volume, cardiac output and survival, and reduced systemic and pulmonary vascular resistances. Klemperer and colleagues [107] administered T3 in a randomised, placebo-controlled study in 142 high-risk patients undergoing coronary artery bypass surgery; they showed a significant increase in cardiac output and a decrease in systemic vascular resistance. Vavouranakis and colleagues [108] showed that T3 administration lessened the need for pharmacological vasodilator therapy, but may increase heart rate. Sirlak and colleagues [109] pre-treated patients for planned CABG surgery seven days preoperatively and found lower postoperative catecholamine requirements and better cardiac output. Finally, Mullis-Jansson and colleagues [110] showed in another similar study that parenteral T3 led to improved postoperative function, reduced the need for inotropic agents and mechanical devices, decreased the incidence of myocardial ischaemia and decreased the incidence of atrial fibrillation and pacemaker therapy. Clinical treatment of children with thyroid hormones after cardiac surgery Based on the findings after cardiac surgery and the pharmacological profile of thyroid hormones, it has been postulated that thyroid hormone replacement in infants may reduce postoperative morbidity and mortality [55]. The half-life of intravenous T3 in children is approximately one-third of that reported for adults and can be calculated at about 7 hours [111]. Triiodothyronine (T3) treatment was used by Carrel and colleagues in seven children with severe low cardiac output syndrome in whom conventional treatment had failed [112]. All children showed metabolic acidosis and those with pulmonary hypertension received nitric oxide. Two patients died (one due to intractable right heart failure and one after cerebral embolism in a patient who had received a left ventricular assist device), but the other five showed a continuous improvement in hemodynamics within the following 48 to 96 hours. Bialkowsky [113] showed a beneficial effect of T3 supplementation after CBP in children, including significant vasodilatation. 
Chowdhury and colleagues [114] initially reported a case series in 1999 of six children with low postoperative T3 levels. In these children, T3 treatment decreased the systemic vascular resistance by more than 25%, increased cardiac output by more than 20%, resolved the existing metabolic acidosis (base excess > 0) and reverted junctional rhythm to sinus rhythm in 3/3 patients. The same group later showed in a prospective trial that T3 levels are more likely to fall in children after cardiac surgery and that the magnitude of the fall in serum T3 predicts greater therapeutic requirements in the postoperative period, especially in neonates [64]. Mackie and colleagues [115] performed a randomised, double-blind placebo controlled trial of T3 treatment in a selective group of 42 patients undergoing a Norwood procedure or a two-ventricle repair of interrupted aortic arch and ventricular septum defect. In this high risk group of patients, T3 supplementation proved to be safe and resulted in a higher systolic blood pressure and a more rapid achievement of negative fluid balance. Cardiac index was not significantly improved. Fluid balance, however, is managed in many centres worldwide by the use of peritoneal dialysis and so the beneficial effects may be negligible [116]. Portman and colleagues [117] performed a small study with 14 patients and showed that T3 replacement prevented circulating T3 deficiencies and elevated heart rate without a concomitant decrease in systemic blood pressure, thus indicating increased cardiac output. Myocardial oxygen consumption improves with an elevation of peak systolic pressure and T3 repletion may thus enhance cardiac function reserve. Potential side effects of thyroid hormone replacement The acute application of thyroid hormone may have unexpected side effects based on the physiological profile of the hormones. Subclinical thyrotoxicosis may be associated with changes in cardiac performance and morphology; these may include increased heart rate, increased left ventricular mass index, increased cardiac contractility, diastolic dysfunction, and the induction of ectopic atrial beats or arrhythmias [118]. In adult patients undergoing coronary artery surgery, the intravenous infusion of T3 (0.8 µg/kg followed by 0.12 µg/kg/h for 6 hours) did not change hemodynamic variables or inotropic drug requirements [119]. No significant differences were detected in the incidence of arrhythmia after T3 administration despite higher postoperative cardiac index and lower systemic vascular resistance [104,105,107,108,120]. Intravenous T3 (0.4 µg/kg bolus plus 0.1 µg/kg infusion) was administered over a 6 hour period without side effects in 170 patients undergoing elective coronary artery bypass grafting and resulted in a lower incidence of pacemaker dependence (14% versus 25%, P = 0.013) without side effects [110]. The oral administration of T3 (125 µg/day orally for 7 days preoperatively and from the first postoperative day until discharge) was without side effects in CABG patients [109]. T3 was well tolerated without episodes of ischemia or clinical arrhythmia in patients with advanced heart failure [121]. Finally, an intravenous bolus of 1 µg/kg T3 followed by continuous perfusion at 0.06 µg/kg/h was performed without haemodynamic impairment in 52 consecutive adult cadaveric organ donors [122]. 
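Purely to make the arithmetic of the weight-based regimens cited above explicit, the following sketch totals the bolus plus infusion for the two adult protocols mentioned ([119] and [110]); the 70 kg body weight is an assumed example, and the output is an illustration of the cited numbers only, not dosing guidance.

```python
# Illustrative arithmetic for the weight-based T3 regimens cited above.
# Not clinical guidance; the 70 kg body weight is an assumed example.

def total_t3_dose_ug(weight_kg: float, bolus_ug_per_kg: float,
                     infusion_ug_per_kg_h: float, hours: float) -> float:
    """Total T3 delivered by a bolus followed by a fixed-rate infusion."""
    return weight_kg * (bolus_ug_per_kg + infusion_ug_per_kg_h * hours)

weight = 70.0  # assumed example body weight in kg
regimens = {
    "0.8 ug/kg bolus + 0.12 ug/kg/h x 6 h [119]": (0.8, 0.12, 6),
    "0.4 ug/kg bolus + 0.10 ug/kg/h x 6 h [110]": (0.4, 0.10, 6),
}
for label, (bolus, rate, hours) in regimens.items():
    print(f"{label}: {total_t3_dose_ug(weight, bolus, rate, hours):.0f} ug total")
```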
In pre-term infants less than 28 weeks of gestational age, a single injection of T3 (0.5 µg/kg) given 22 to 26 hours after birth only leads to a two day increase of T3 levels and did not have negative effects on the cardiovascular system [123]. T4 administration reduced vasopressor needs in children with cessation of neurological function and hemodynamic instability; no side effects were seen [124]. After a mean bolus dosage of 2 ± 1.5 µg/h of T3, followed by a continuous infusion of 0.4 ± 0.3 µg/h for a mean duration of 48 ± 12 h, no side effects were demonstrated in a cohort of adult and paediatric patients suffering from severe low cardiac output [112]. Again, no side effects were found in 54 adult and seven paediatric patients suffering from severe low cardiac output in different clinical conditions with a mean bolus dosage of 2 ± 1.5 µg/h of T3 followed by a continuous infusion of 0.4 ± 0.3 µg/h for a mean duration of 48 ± 12 h [64,114]. In children, a once daily infusion of T3 (2 µg/kg bodyweight on day 1 after surgery and 1 µg/kg bodyweight on subsequent postoperative days up to 12 days after surgery) proved to be safe without side effects [125]; the cardiac index, however, improved significantly. The normalization of serum T3 levels in other studies was reflected in a marked decrease in the requirement for inotropic support, conversion to normal sinus rhythm, and progressively improving clinical course without clinically adverse effects [55,113]. In a cohort of children undergoing the modified Fontan procedure, the patients received intravenous T3 at dosages of 0.4, 0.6, and 0.8 µg/kg; no side effects were reported [111]. T3 (0.4 µg/kg) immediately before the start of CBP and again with myocardial reperfusion led to transient elevation in heart rate without a concomitant decrease in systemic blood pressure in infants less than 1 year old undergoing ventricular septal defect or tetralogy of Fallot repair [117]. When using a continuous infusion of T3 (0.05 µg/kg/h) in neonates undergoing aortic arch reconstruction, the study drug was discontinued prematurely in two children because of hypertension (n = 1) and ectopic atrial tachycardia (n = 1); heart rate and diastolic blood pressure, however, were not influenced by T3 supplementation, but systolic blood pressure was higher in the T3 group (P < 0.001). No serious adverse events were attributed to T3 administration [115]. In summary, the administration of T3 to adults and children of various ages after cardiac surgery as well as in various other conditions of critical illness proved to be safe and well tolerated; no side effects have been demonstrated so far. Conclusion The modern treatment of children with congenital heart defects provides worldwide excellent postoperative care with short ventilation times, short length of stay and low mortality and morbidity in the majority of clinical circumstances. Nevertheless, clinically significant SES can be detected, especially in neonates and children with long bypass times. At present, existing studies on treating SES in children have had relatively small subject numbers as well as age and diagnosis heterogeneity, thereby limiting the ability to determine significant clinical effects. Thus, to demonstrate a significant clinical effect of T3 supplementation, large numbers of patients are needed and the study must include patients at specific risk for SES and low cardiac output syndrome [52]. 
Treatment protocols in these patients, however, often include in the routine management peritoneal dialysis, inotropic support and afterload reduction as well as open chest strategies for a defined number of days; thus, common outcome parameters such as hours of ventilation, use of catecholamines, blood pressure, urine output, and so on may prove difficult to assess [116]. The Triiodothyronine for Infants and Children Undergoing Cardiopulmonary Bypass (TRICC) study is a multicenter, randomised, clinical trial designed to determine safety and efficacy of T3 supplementation in 200 children less than 2 years of age undergoing surgical procedures for congenital heart disease. Duration of mechanical ventilation after completion of cardiopulmonary bypass is the primary clinical outcome parameter and the study also follows multiple secondary clinical and hemodynamic parameters [126]. Based on the assumptions above, even the results of this study may fail to establish the routine administration of T3 to correct SES in children after cardiac surgery. In summary, children after cardiac surgery are at specific risk to develop a clinically important SES peri-operatively. Despite clear evidence from the studies available, the demonstrated beneficial effects and the clear lack of negative effects make the prophylactic supplementation of T3 a desirable treatment option, especially in high-risk groups.
2014-10-01T00:00:00.000Z
2006-05-23T00:00:00.000
{ "year": 2006, "sha1": "381265a47b187227e5af515887de947d081cc8b6", "oa_license": "CCBY", "oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/cc4924", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d26283875944ac27c41e9d5efdac3ac3f19087c1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
134764659
pes2o/s2orc
v3-fos-license
Analysis of the Development of DC Distribution Network with Renewable Energy and Flexible Energy Storage In order to meet the increasing demand for DC loads from electric vehicles, information equipment and semiconductor lighting systems in today's increasingly urbanized distribution networks, and to keep environmental problems from worsening, large amounts of intermittent, unstable renewable energy are integrated into the DC distribution network. According to the characteristics of renewable energy, a mixed (hybrid) energy storage mode can be used to decouple power generation from electricity consumption in space and time. With reasonable planning, the safety, economy, quality and reliability of future DC distribution can be realized. Introduction In recent years, with the continuous expansion of the scale of the alternating current (AC) system, its security and stability issues have become increasingly prominent [1]. It is no longer realistic to rely only on traditional installed stability devices to solve this problem. Coupled with worsening environmental problems, there is a growing demand for renewable energy sources [2,3]. Large amounts of intermittent and unstable renewable energy will be connected to the distribution network on a large scale, which will bring severe challenges to the AC distribution network. Therefore, the traditional AC distribution network should be converted to direct current (DC) mode. DC distribution network technology has become a hot issue in the international power industry, and it holds great potential for innovation and for the development of emerging industries [4,5]. In this paper, the advantages of the DC distribution network are introduced, and the integration of renewable energy generation into the DC distribution network is discussed, so as to realize the robust development of the DC distribution network under a hybrid energy storage mode. Characteristics and advantages of DC distribution network Compared to the AC power supply system, the DC power distribution network has some unique advantages: high power capacity, low line loss, better power quality on the user side, good transient stability, and convenient, flexible integration of renewable energy into the DC distribution network. Analysis of power supply capacity of DC distribution network With the rapid development of the city, the electrical load is increasing and the distribution lines need to carry more capacity. In addition, the development of the city has led to the rising value of land, and the expropriation of corridors for new distribution lines will cost more, so it is desirable to transport more capacity on the existing corridors [6]. The rated voltage of the existing AC line is V_AC, the rated line current is I_AC, and the power factor is cos φ = 0.9; then the rated power of the existing line is P_AC = √3 × V_AC × I_AC × cos φ. If the DC distribution network adopts a bipolar structure with a rated DC voltage V_DC and a rated current I_DC, the rated transmission power of the line is P_DC = 2 × V_DC × I_DC. Therefore, at the same insulation level and with the same wire cross-section and current density, the bipolar DC line can transmit more power over the same corridor than the existing AC line. Reliability of DC distribution network If the DC distribution network adopts a bipolar system, when one of the poles fails, the other pole can continue to transmit power to the load. Compared with the AC distribution network, the technical difficulty of connecting energy storage equipment such as batteries and super capacitors to the DC distribution network is relatively low.
Therefore, the DC distribution network fault crossing ability and power supply reliability are higher [7]. The feasibility of energy saving and DC distribution to households In fact, a large number of household appliances now use DC in the AC distribution network, the corresponding rectifying circuit module needs to be configured. However, if the DC distribution network is used to directly provide DC to the home users, the converters can be omitted, and the conversion times from AC to DC can be reduced. It can reduce the loss of the AC to DC exchange and reduce the cost of equipment manufacturing [8]. Convenient access to DC distribution network of clean energy and energy storage equipment The large-scale distributed network-connected of new energy sources such as wind energy and solar energy has become a trend. The DC generated by photovoltaic cells is random and intermittent, so it needs to configure the corresponding converter and energy storage device, and it needs to adopt complex control strategy to realize AC grid-connected [9]. For example, wind power is generated by a random fluctuation of AC energy that requires AC/DC/AC converter and some appropriate energy storage devices and can be incorporated into the AC network by complex control. All kinds of energy storage devices, such as batteries and super capacitors, only store electrical energy in the form of direct current. It is necessary to use the bi-directional DC/AC converter and complex control in the AC power grid. However, if the DC distribution network power supply mode, whether new energy distributed grid-connected, or energy storage device interface and control technology, is much simpler. Network structure analysis of DC distribution network The topological structure of DC distribution network needs the following characteristics [10]: 1) The DC network must be able to connect with the large power grid, and the grid-connected converter must have a bidirectional flow of power to facilitate the transmission of excess energy from the distributed generation (DG) to the power grid. 2) The DC distribution system must be able to provide a relatively constant DC voltage for the DC load. 3) DC distribution system must have high safety reliability. Renewable energy power generation Renewable energy power generation includes hydroelectric power generation, wind power generation, biomass power generation (including direct burning of agricultural and forestry wastes and gasification power generation, waste incineration and landfill gas generation, biogas power generation), solar power generation, geothermal power generation and marine power generation [11] . Among them, 2010-2016 annual global renewable energy power generation progress comparison is shown in. From table 1, in recent years, the use of renewable energy sources for power generation is becoming higher and higher, while renewable energy power generation in the entire power grid to play an increasingly important role. It can be foreseen in the future that the renewable energy power generation technology and scale will continue to innovate, and the construction of the DC distribution network will become more and more perfect. 
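A minimal numerical sketch of the corridor-capacity comparison derived in the power supply capacity analysis above is given below; the voltage and current ratings are assumed example values, and the bipolar DC pole voltage is taken equal to the peak of the AC phase voltage purely for illustration.

```python
import math

# Sketch of the AC vs bipolar DC transmission capacity comparison discussed above.
# The ratings below are assumed example values, not data from the paper.

def p_ac(v_line_rms: float, i_rms: float, power_factor: float = 0.9) -> float:
    """Rated power of a three-phase AC line: sqrt(3) * V * I * cos(phi)."""
    return math.sqrt(3) * v_line_rms * i_rms * power_factor

def p_dc_bipolar(v_pole: float, i_dc: float) -> float:
    """Rated power of a bipolar DC line: 2 * V_dc * I_dc."""
    return 2 * v_pole * i_dc

v_ac, i_ac = 10e3, 400.0                   # assumed 10 kV line-to-line, 400 A
v_dc = v_ac * math.sqrt(2) / math.sqrt(3)  # assumed pole voltage = AC phase peak
ac = p_ac(v_ac, i_ac)
dc = p_dc_bipolar(v_dc, i_ac)              # same conductor current
print(f"P_AC = {ac/1e6:.2f} MW, P_DC = {dc/1e6:.2f} MW, ratio = {dc/ac:.2f}")
```

Under these particular assumptions the bipolar DC line carries slightly more power at the same conductor current; the advantage grows if the insulation level allows a higher DC pole voltage rating.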
Energy Storage Configuration and Dispatching Scheme for Renewable Energy In view of the randomness and intermittency of renewable energy power generation, energy storage units with a certain capacity should be allocated to stabilize the power fluctuation of renewable energy, and the scheduling scheme should be coordinated with the load [12]. When the node load Pi increases (or decreases), the output active power PDG of the distributed power supply increases (or decreases). According to the load forecast for the region, the power output of the energy storage equipment is adjusted for the distributed power supply, so that the change of the power output at different times is basically the same as that of the node load. If the output of the distributed power supply is always less than the node load, that is, Pi - PDG is always greater than zero, the distributed power supply and the node load can be treated as equivalent to a basically constant load Pi - PDG; if the distributed power supply output is always greater than the node load, that is, Pi - PDG is always less than zero, the distributed power supply and the node load can be treated as equivalent to a basically constant output PDG - Pi. The scheme can improve the line loss and voltage distribution of the DC distribution network, and it also makes it possible to avoid unpredictable changes in the power injected at the node. Energy Storage Analysis in DC Distribution Network After energy storage devices are connected to the DC distribution network, power generation and electricity consumption are separated in space and time [13]. Power no longer has to be transmitted in real time, and consumption and generation no longer have to be balanced instant by instant. The influence on the traditional distribution network is summarized as follows: 1) Peak shaving and valley filling. An energy storage device can store excess electrical energy, acting as a load, during load valleys, and can supply electrical power, acting as a source, at peak consumption. 2) Suppressing grid oscillations. Theoretically, the power balance of the system can be maintained in any case as long as the installed energy storage capacity is large enough and the response is fast enough. 3) Improving power quality. Large-capacity energy storage can provide reserve, frequency regulation, peak shaving, phase modulation and so on, which not only improves the power quality of the distribution network but also improves the stability of the system voltage. 4) Reducing costs. The energy storage system can improve the utilization of equipment in the generation, transmission and distribution links, thus reducing the cost of power supply and grid construction. Energy storage technology The energy storage technologies that are currently most mature for application in the DC distribution network are battery energy storage, super capacitors, flywheel energy storage, pumped hydro storage, compressed air energy storage and so on. With the continued growth of renewable energy, large-scale development of energy storage technology is an inevitable trend, provided that the research direction remains zero pollution, low cost and long life. Battery storage In the distribution network, battery energy storage is the most widely used and most mature technology, with relatively large energy capacity [14]. Generally speaking, batteries mainly include the following: flow batteries, lithium batteries, lead-acid batteries and sodium-sulfur batteries.
They are now widely used in distribution systems. However, battery volume is relatively large, battery life is short, and charge and discharge performance is strongly affected by ambient temperature. Frequent charging and discharging will seriously affect the service life of the battery, and scrapped batteries pollute the environment to a certain extent. Super capacitor storage Super capacitors use special materials for their electrolytes and electrodes, and their mechanism is described by electrochemical double-layer theory [15]. Their capacitance can reach 20~1000 times that of ordinary capacitors. Super capacitors can generally be divided into Faradaic pseudocapacitors and double-layer capacitors; among them, the double-layer capacitor is widely used in the power system. The super capacitor has the characteristics of high power density, fast charge and discharge, long cycle life, little maintenance, high reliability, and suitability for smoothing short-duration, high-power fluctuations in power-quality and peak-power applications. However, super capacitors are expensive and, although they deliver high instantaneous power, they are generally not suitable for long-term energy storage because of their small capacity. Superconducting storage A superconducting magnetic energy storage (SMES) system uses a superconducting coil to store energy in a magnetic field and, when necessary, returns the stored energy to the power grid [16]. Flywheel storage Flywheel energy storage uses an electric motor to drive a flywheel at high speed, storing electrical energy in the form of mechanical energy; the flywheel drives a generator to produce electricity when necessary [17]. The advantages are low frictional loss, a long life, no impact on the environment and almost no maintenance; the disadvantages are a relatively low energy density, the inability to release stored energy as rapidly as a super capacitor, and the high cost of ensuring system safety. It is commonly used in wind power generation systems to improve the quality of the power output. Analysis of Mixed Energy Storage System Model From the above analysis, it can be seen that any single energy storage device has unavoidable disadvantages. In recent years, the allocation and control strategy of mixed energy storage systems in the DC distribution network has aroused widespread concern [18]. Mixed energy storage systems generally pair a small-capacity, long-cycle-life, high-power-density storage system with a large-capacity, low-cycle-count, relatively high-energy-density, low-power-density storage system. Typical examples of the former are super capacitor and flywheel energy storage; typical examples of the latter are batteries and other chemical energy storage. According to the time characteristics, the mixed energy storage system classifies the power fluctuations of the DC distribution network and allocates them accordingly, obtaining system stability and economy that are clearly superior to those of a single storage technology. Conclusion By analyzing the characteristics of the DC distribution network and its network architecture, it can be seen that the DC distribution network can accommodate large-scale, distributed renewable energy power generation and grid-connected generation. At the same time, in order to cope with the uncertainty and intermittency of renewable energy, mixed energy storage technology is used to achieve flexible access for renewable energy power generation, and stable and reliable operation of the DC distribution network is achieved.
Through the access to DC distribution network of renewable resources and flexible mixed mode of energy storage management, the development of the future DC distribution network will be greatly promoted and the global urbanization process will be accelerated.
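The time-scale-based classification of power fluctuations described above is commonly realized with a simple filter: the slow component is assigned to the high-energy device (battery) and the fast residual to the high-power device (super capacitor). The sketch below is a minimal illustration of that idea using a first-order low-pass filter; the signal, sampling step and filter time constant are assumed example values, not part of this paper.

```python
import math

# Minimal sketch: split a fluctuating renewable power signal between a battery
# (slow, high-energy) and a super capacitor (fast, high-power) using a
# first-order low-pass filter. All numbers are assumed example values.

def split_power(p_fluctuation, dt_s=1.0, tau_s=60.0):
    """Return (battery, supercap) power series from a fluctuation series."""
    alpha = dt_s / (tau_s + dt_s)
    battery, supercap, low = [], [], 0.0
    for p in p_fluctuation:
        low += alpha * (p - low)   # low-frequency part -> battery
        battery.append(low)
        supercap.append(p - low)   # high-frequency residual -> super capacitor
    return battery, supercap

# Example: a slow ramp with fast ripple (kW), sampled once per second.
signal = [5.0 * math.sin(2 * math.pi * t / 600.0) +
          1.0 * math.sin(2 * math.pi * t / 10.0) for t in range(1200)]
batt, sc = split_power(signal)
print(f"peak battery power: {max(map(abs, batt)):.2f} kW, "
      f"peak super-capacitor power: {max(map(abs, sc)):.2f} kW")
```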
2019-04-27T13:11:03.824Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "23aed165b90db081c36d4878b09d393206baeb34", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1087/4/042015", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "06519f5f8083de78d6585a5d7262a650e165b437", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
7903580
pes2o/s2orc
v3-fos-license
CHES-1-like, the ortholog of a non-obstructive azoospermia-associated gene, blocks germline stem cell differentiation by upregulating Dpp expression in Drosophila testis Azoospermia is a high risk factor for testicular germ cell tumors, whose underlying molecular mechanisms remain unknown. In a genome-wide association study to identify novel loci associated with human non-obstructive azoospermia (NOA), we uncovered a single nucleotide polymorphism (rs1887102, P=2.60 ×10−7) in a human gene FOXN3. FOXN3 is an evolutionarily conserved gene. We used Drosophila melanogaster as a model system to test whether CHES-1-like, the Drosophila FOXN3 ortholog, is required for male fertility. CHES-1-like knockout flies are viable and fertile, and show no defects in spermatogenesis. However, ectopic expression of CHES-1-like in germ cells significantly reduced male fertility. With CHES-1-like overexpression, spermatogonia fail to differentiate after four rounds of mitotic division, but continue to divide to form tumor like structures. In these testes, expression levels of differentiation factor, Bam, were reduced, but the expression region of Bam was expanded. Further reduced Bam expression in CHES-1-like expressing testes exhibited enhanced tumor-like structure formation. The expression of daughters against dpp (dad), a downstream gene of dpp signaling, was upregulated by CHES-1-like expression in testes. We found that CHES-1-like could directly bind to the dpp promoter. We propose a model that CHES-1-like overexpression in germ cells activates dpp expression, inhibits spermatocyte differentiation, and finally leads to germ cell tumors. INTRODUCTION Testicular germ cell tumors (TGCTs) are the most common cancer among young men in industrialized countries [1]. TGCTs are thought to be derived from germ cell lineage cells that are blocked in differentiation and maturation [1]. The causative genetic aberrations were rarely identified. Genome-wide association study (GWAS) studies revealed polymorphic loci linked to the KIT/ KITLG [2,3], RAS [4], and steroid signaling pathways [5,6]. However, the molecular mechanisms underlining TGCTs remain poorly understood. Drosophila and human testes share many features of spermatogenesis [7,8]. Many mutants of homologous human and Drosophila genes exhibit similar testicular phenotypes. The adult fly testis is a blind tube that opens into the seminal vesicle and ejaculatory duct [8]. The apical tip of the tube is a cluster of somatic cells called hub cells. Eight to ten germ line stem cells (GSCs) are tightly associated with the hub cells, and each is enveloped by two cyst-stem cells (CySCs). Each GSC divides asymmetrically to maintain one cell associated with the hub as a GSC, and another to leave the niche and become a primary spermatogonial cell. Spermatogonial cells Research Paper undergo four rounds mitosis before further differentiation, and then enter meiosis and mature into spermatids [9]. The self-renewal and differentiation of early germ cells in flies are tightly controlled [9]. Similar to humans, flies also develop testis tumors when germ cells fail to differentiate and over-proliferate [10]. Janus kinase-signal transducer and activator of transcription (JAK-STAT) and bone morphogenetic protein (BMP) signaling are critical for GSC maintenance [8,9]. Malfunction of these two pathways could lead to testis tumors in flies. 
Hub cells secrete Unpaired (Upd) to bind receptor Dormless on GSCs and CySCs, which activates JAK-STAT signaling and maintains germline and somatic stem cell selfrenewal [11,12]. The ectopic expression of Upd in GSCs results in testis tumors with a massive accumulation of undifferentiated GSC-like cells [11,12]. Two BMP-like molecules, Dpp and Gbb, expressed in hub and cyst cells are required for GSC maintenance [13][14][15]. Dpp and Gbb are received by GSCs, where they repress the expression of the differentiation factor, Bagof-marbles (Bam) [13][14][15]. Bam and its regulator, Benign gonial cell neoplasm (Bgcn), are required for restricting proliferation of mitotically amplifying spermatogonia [16,17]. Mutations in bam or bgcn lead to testis tumors with extensive proliferation of undifferentiated germ cells [18,19]. Since BMP signaling could repress bam expression, ectopic expression of dpp in germ cells leads to reduced bam expression and the formation of tumor-like structures in testis [13,15]. Despite its important functions in fly spermatogenesis, BMP signaling is also required in testis development and spermatogenesis in mammalian systems [20]. Aberrant BMP signaling was reported in human samples with TGCTs [21]. Therefore, investigation of germ cell differentiation in flies might provide insight into potential mechanisms for human TGCTs. Our previous work has successfully used Drosophila testis as a model system to evaluate the possible loci associated with a severe symptom of male infertility: nonobstructive azoospermia (NOA) [22,23]. We found two loci near DMRT2 and DMRT3, two genes encoding a DM domain containing a transcription factor, were associated with NOA [22]. Interestingly, DMRT1, the paralog of DMRT2/3, was found to be associated with human TGCTs [24,25]. Studies showed that the risk of TGCTs was increased in azoospermia or subfertile patients [1,[26][27][28]. Therefore, genes associated with azoospermia might also modulate the risk of TGCTs. In the same GWAS study [23,29], we uncovered that a single nucleotide polymorphism (SNP) in the human FOXN3 gene is associated with NOA. FOXN3 is evolutionary conserved. As indicated in Ensembl database, fly gene CHES-1-like is the ortholog of both human FOXN3 and FOXN2, which is a one to multiple orthologous relationship. To evaluate the functional relevance of the GWAS study, we assessed the function of CHES-1-like in fly spermatogenesis. CHES-1-like mutant male flies were viable and fertile. We found that CHES-1-like is not required for GSC maintenance or other spermatogenesis processes in fly testes. However, ectopic expression of CHES-1-like in germ cells significantly reduced male fertility. When CHES-1-like was overexpressed, spermatogonia failed to differentiate after four rounds of mitotic division, but continued to divide to form tumor-like structures. We found that CHES-1-like could activate dpp expression and block spermatocyte differentiation. Our results suggest that NOA-associated SNPs could be a potential modulator of testis tumor development. Loss of CHES-1-like does not cause spermatogenesis defects In our previous NOA GWAS screen [23,29], one SNP (rs1887102, P=2.60 ×10 -7 ) in the human gene, FOXN3, was found to be associated with NOA ( Figure 1A). FOXN3 is an evolutionarily conserved gene. In Drosophila, CHES-1-like is the ortholog of FOXN3. To evaluate whether this loci is functionally relevant to spermatogenesis, we tested the function of CHES-1-like in fly testes. 
We knocked down CHES-1-like expression in germ cells of fly testes (Nos>CHES-1-like RNAi) and did not observe obvious defects (Supplementary Figure 1). We generated CHES-1-like deletion mutants using Cas9-mediated mutagenesis. We recovered multiple lines with different indels confirmed by PCR and sequencing (Figure 1B, 1C). Both the hemizygous mutant male flies and homozygous mutant female flies were viable and fertile (Figure 1D). We further examined the testes of CHES-1-like mutants by immunostaining with antibodies recognizing hub cells, germ cells, and cyst cells (Figure 1E, 1F). The patterns of all cell types were identical to the wild type controls, indicating that loss of CHES-1-like does not affect spermatogenesis in flies. Ectopic expression of CHES-1-like in germ cells induces testis tumor formation Since CHES-1-like loss of function did not result in spermatogenesis defects, we decided to examine whether ectopic expression of CHES-1-like could lead to testis malfunction. We generated UAS-CHES-1-like transgenic flies, and crossed these flies with different Gal4 lines expressing specifically in germ cells (Nos-Gal4 and Bam-Gal4; Figure 2A). Fertility was significantly reduced in Nos>CHES-1-like (N=132) and Bam>CHES-1-like (N=91) male flies (Figure 2B). DNA staining in the testis tail revealed that clusters of elongated spermatids were greatly reduced in Nos>CHES-1-like testes compared to normal testes (Figure 2C-2F). An increase in the small cells resembling GSCs and spermatogonia in the apical region of both Nos>CHES-1-like and Bam>CHES-1-like testes was observed by phase contrast microscopy (Figure 2H1, 2I1, 2J1). Cysts with an overproliferation of small cells were also observed in the distal region of the Nos>CHES-1-like testis tips (Figure 2G, 2H1, 2I1, 2J1). We further examined Nos>CHES-1-like and Bam>CHES-1-like testes with various cellular markers (Figure 2H-2J). DNA and Vasa staining confirmed that the region of GSCs and spermatogonial cells was expanded (Figure 2H2, 2I2, 2J2). In the Nos>CHES-1-like testes, cysts packed with small cells form tumor-like structures in which Vasa staining is lost (Figure 2I2). We used the 1B1 antibody to label the fusome, a membrane structure that connects sibling germ cells. Most fusomes observed in the apical tip of Nos>CHES-1-like and Bam>CHES-1-like testes were small and branched (Figure 2H4, 2I4, 2J4), suggesting that the majority of cells were interconnected and resembled proliferating spermatogonia. This indicates that the spermatogonial cells failed to cease mitotic division after four rounds, and continually divided to form tumor-like structures. Indeed, using the cell proliferation marker phospho-Histone H3 (pH3) antibody to label dividing germ cells, pH3 could be observed only at the tip of wild type testes (Figure 2H5). However, ectopic pH3 labeling was observed in some cysts distal from the tip of the testes with ectopic CHES-1-like expression in germ cells (Figure 2I5, 2J5). We also used FasIII to label hub cells, and Eya and DE-cadherin to label cyst cells. There was no dramatic change in the number and morphology of either cell type (Figure 2H-2J and Supplemental Figure 1). We further investigated Tj-Gal4-driven CHES-1-like expression in cyst cells and Upd-Gal4-driven CHES-1-like expression in hub cells, and examined the resulting testes with various markers.
There was no obvious difference between wild type testes and the testes with CHES-1-like expression in cyst (Supplementary Figure 2) and hub cells (Supplementary Figure 3). CHES-1-like inhibits spermatocyte differentiation through suppressing Bam expression The expression of Bam is required for spermatogonial cells to exit the mitotic cell cycle and begin differentiation [19]. It is likely due to the disruption of Bam signaling that spermatogonia were unable to cease mitosis in CHES-1-like germ cell ectopically expressing testes. To test this, we analyzed Bam expression patterns in these testes. In wild type animals, Bam is expressed primarily in the transient amplifying (TA) spermatogonia, a strip of cells near the apical tip of the testes ( Figure 3A). The expression level of Bam was greatly reduced in both Nos>CHES-1-like ( Figure 3B) and Bam>CHES-1like testes ( Figure 3C). However, the expression region of Bam was expanded ( Figure 3F1-3J1, 3F2-3J2). We also used Bam-GFP reporter to analyze bam expression patterns. In Nos>CHES-1-like testes, GFP expression levels were reduced and the expression regions were expanded (Supplementary Figure 4). The reduction of GFP staining is obvious but less dramatic than the reduction of Bam staining in Nos>CHES-1-like testis ( Figure 3B, Supplementary Figure 4), which is likely due to that not only the transcriptional levels but also the protein levels of Bam is affected by CHES-1-like expression. It has been reported that around half of male germ cells lacking one copy of bam will complete one or more extra TA divisions before differentiation [30]. Removing one copy of bam further reduced the fertility of male flies with CHES-1-like overexpressing in germ cells ( Figure 4A). DNA staining in the testis tail revealed that clusters of elongated spermatids were further reduced in these testes compared to the testes with CHES-1-like overexpressing alone ( Figure 4B and 4C1-4E3). Bam expression in these testes was reduced to an undetectable level ( Figure 3D, 3E). There was an increase in aberrant tumor-like cysts packed with over-proliferated small cells ( Figure 4F-4I). These results suggest that CHES-1-like likely suppresses spermatocyte differentiation by down-regulating Bam expression. CHES-1-like activates TGF-β signaling by upregulating dpp expression In testis, Bam expression is repressed by BMP signaling. Therefore, CHES-1-like-mediated down regulation of Bam expression might reflect the activation of BMP signaling. Indeed, the BMP signaling downstream gene daughters against dpp (dad) was expressed at low levels in GSCs and spermatogonial cells in wild type testes ( Figure 5A1-5A6). Expression levels of Dad-lacZ dramatically increased in the germ cells expressing CHES-1-like ( Figure 5B1-5B6). However, ectopic expression of CHES-1-like in early cyst cells with Tj-Gal4 did not increase Dad-lacZ expression in cyst cells ( Figure 5C1-5C6). The upregulation of Dad expression and downregulation of Bam expression indicates that CHES-1-like may ectopically activate BMP signaling. BMP downstream gene expression is mediated by phosphor-Mad (p-Mad) and its cofactor, Medea. Since CHES-1-like is a FOX domain-containing protein, a potential transcriptional factor, we first tested whether it could interact with Mad to regulate Dad and Bam expression. However, we did not detect any interaction between CHES-1-like and Mad (Supplementary Figure 5). 
Because ectopic expression of CHES-1-like in germ cells mimicked the phenotype of Dpp overexpression in germ cells [13,15], we tested whether CHES-1-like could directly upregulate dpp expression. We used MS1096-Gal4-driven CHES-1-like expression in the wing imaginal disc pouches and examined Dpp-lacZ expression patterns (Figure 5D-5G). In the wild type wing discs, Dpp-lacZ appeared as a thin strip at the anterior/posterior (A/P) boundary (Figure 5D1, 5D2). When CHES-1-like was overexpressed, the expression region of Dpp-lacZ was greatly expanded (Figure 5E1-5E3). As a result, the adult wings of the MS1096>CHES-1-like flies were deformed (Figure 5F1-5G2). However, another gene, ptc, did not change its expression pattern in the wing pouches when CHES-1-like was overexpressed (Supplementary Figure 6), indicating that the changes in Dpp-lacZ expression patterns are not due to the morphological changes of the wing discs. To test whether CHES-1-like directly binds to the dpp promoter to activate Dpp expression, we used a chromatin immunoprecipitation (ChIP) assay to detect the interaction between CHES-1-like and the dpp promoter. We expressed HA-tagged CHES-1-like in Drosophila S2 cells, and immunoprecipitated the protein-DNA complex after cross-linking. Indeed, CHES-1-like was able to interact with the dpp promoter region (Figure 5H-5J). DISCUSSION In this study, we used Drosophila testis as a model system to test the functional relevance of a NOA-associated human gene, FOXN3. Although loss of CHES-1-like, the fly ortholog of FOXN3, did not result in spermatogenesis defects, the ectopic expression of CHES-1-like in germ cells blocked spermatocyte differentiation and induced tumor-like structure formation. Our study has revealed CHES-1-like as a novel BMP signaling regulator that promotes expression of the BMP ligand Dpp. In flies, CHES-1-like, together with another forkhead transcription factor, Jumu, governs cardiac progenitor cell division and specification by regulating Polo kinase activity, as well as the expression of fibroblast growth factor and Wnt signaling pathway receptors [31,32]. Inactivation of FOXN3 in Xenopus and mice led to craniofacial defects and was sometimes lethal [33,34]. It has been well established that BMP signaling is required for the differentiation and proliferation of osteoblasts of the mammalian skull [35]. Interestingly, the expression levels of the BMP pathway ligands BMP2, BMP4, and BMP7 were greatly reduced in the FOXN3 knockout animals [34]. In our study, we found that CHES-1-like binds to the dpp promoter and activates Dpp expression, suggesting that CHES-1-like might directly regulate BMP ligand expression. Dpp is critical for fly development [36,37]. However, loss of CHES-1-like did not result in any obvious developmental defect. It is possible that redundant molecules or pathways compensate for the effects caused by the loss of CHES-1-like. Jumu is a good candidate since it plays redundant roles during fly heart development. However, ectopically expressing Jumu in germ cells (Nos>UAS-Jumu) did not cause any defects (Supplementary Figure 1), indicating that Jumu might not play similar roles to CHES-1-like in testes. The function of FOXN3 has never been linked to testis tumors or male sterility. The data from the Human Protein Atlas (http://www.proteinatlas.org/) revealed that the protein levels of FOXN3 are upregulated in most cancer tissues.
A recent study in a cancer cell line indicated that CHES1/FOXN3 decreases cell proliferation by repressing PIM2 and protein biosynthesis, suggesting that FOXN3 is a potential tumor suppressor [38]. However, we found that overexpression of CHES-1-like in fly testis could prevent early germ cell differentiation and lead to tumor-like structure formation. This paradox might arise because of the complex and context-dependent physiological functions of BMP signaling [39]. Therefore, whether FOXN3 promotes or inhibits tumor formation might be determined by the tissue context. In this study, we found that the ortholog of a NOAassociated gene could modulate testis tumor formation. It is striking since infertility is a high risk factor of testis tumors. The misregulation of FOXN3 could contribute to both spermatogenesis defects and tumor genesis. Further investigation is necessary to dissect the role of FOXN3 in TGCTs. CRISPR/Cas9-mediated genome editing CHES-1-like mutant was generated by an optimized CRISPR / Cas9-mediated genome editing method as described before [40]. Two sgRNAs targeting CHES-1-like exon regions were designed to generate an about 1.2kb DNA fragment deletion. Genomic DNA PCR and sequencing were used to confirm the deletions. Light and phase-contrast microscopy Fly testes were dissected in 1x phosphate-buffered saline (PBS) and washed several times. Testes were observed on slides by a phase-contrast microscope after gently squashing them with a cover slip. For an overall view of wing morphology, adult wings were observed directly under light microscope. Chromatin immunoprecipitation (ChIP) assay Formaldehyde cross-linking and chromatin immunoprecipitation (ChIP) assays of S2 cells were performed using a protocol as described before [42]. S2 cells transfected with HA-CHES-1-like were subjected for CHIP assay. Chromatin was sonicated on ice to obtain DNA fragments of appropriate size among100-1000 bp. Twenty percent of total supernatant was used as a total input control. Following removal of bound proteins, immunoprecipitated DNA was subjected to PCR.
2018-04-03T00:05:51.199Z
2016-06-02T00:00:00.000
{ "year": 2016, "sha1": "ecf495741c36821cc6d348f8278355bcab91cd79", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=30713&path[]=9789", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ecf495741c36821cc6d348f8278355bcab91cd79", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
246944064
pes2o/s2orc
v3-fos-license
Bioactive compounds of Punica granatum L. wastes by high performance liquid chromatography analysis Abstract The massive pomaces of Punica granatum L. pose a challenging waste-disposal problem for the processing industries. The present study aimed to investigate the bioactive compounds of pomace extracts in order to introduce them to different industries, such as the pharmaceutical, food, medicinal and agricultural industries, for optimum use. Four different extracts were prepared and the phenolic compounds were quantified using HPLC-DAD. Different amounts of phenolic compounds were detected in the samples, including gallic acid, catechin, ellagic acid, rosmarinic acid, hesperidin, p-coumaric acid and chlorogenic acid. Gallic acid was the major compound in all studied pomace extracts, with the maximum amount belonging to the water extract (at 60 °C). The average amount of gallic acid detected in the water extract (at 60 °C) of Punica granatum L. was 11.25 mg g−1 dry weight, while it was 3.24, 3.02 and 1.09 mg g−1 dry weight for the extracts obtained with distilled water, methanol and 80% methanol, respectively. Graphical Abstract Introduction Pomegranate (Punica granatum L.), a member of the family Lythraceae, subfamily Punicoideae, has been used as an edible fruit since antiquity. It is a native plant that is widely distributed in the south of Iran. Pomegranate is a source of antioxidants because of the presence of phenolic and tannin compounds (Loren et al. 2005). It is used in medicinal, food and cosmetic formulations (Finkel and Holbrook 2000) and can be regarded as a good source of antioxidants (Singh et al. 2001). From ancient to recent times, different parts of the pomegranate have been used for different purposes, such as in diets (e.g. juices, jams, jellies, dressings, marinating, and wine), as religious symbolism (e.g. righteousness, fullness, fertility, abundance), or for their medicinal value. A high nutrient content, including oxalic acid, potassium, folate and vitamins E, C, B6 and A, is well documented in pomegranate peels (Al Rawahi et al. 2014). Generally, low aromatic intensity is the main characteristic of the pomegranate fruit (Wang et al. 2013). A hydroquinone pyridinium alkaloid from the leaves (Schmidt et al. 2005), punigratane, a pyrrolidine alkaloid with efflux inhibition activity, from the rind (Rafiq et al. 2016), and tricetin 4′-O-β-glucopyranoside, a flavone glucoside, together with four ellagitannins and flavones (tricetin, luteolin, ellagic acid, and granatin B), from the flowers (Wu and Tian, 2019) of pomegranate have been isolated. Moreover, antioxidant, antimicrobial, antidiabetic and antiparasitic activities as well as α-glucosidase and maltase inhibitory effects of pomegranate leaf, rind and flower extracts have been demonstrated (El Dine et al. 2014; Rahmani et al. 2017; El Deeb et al. 2021). Previous studies have examined different parts of the healthy fruit after harvest, before the fruit is mechanically pressed by the juicer. To our knowledge, no study has examined the residue of the plant that is abandoned and discarded in factories after the dewatering step. Because most parts of the fruit, including the exocarp, endocarp, pulp, stems and seeds, are not edible, pomegranate is not as popular as other family members, and these parts are discarded as wastes in the environment. On the other hand, the value of these wastes is not well known.
So, the aim of this work was to investigate the phenolic compounds of pomegranate pomace by HPLC-DAD, in order to introduce it to various industries for wider use and application. HPLC validation The R² values from the calibration curves of the standard phenolic compounds ranged from 0.985 to 0.999, which confirmed the linearity of the method. The RSD values for the accuracy studies were below 2.0%. The HPLC method was therefore precise for the quantitative analysis of phenolic compounds. Phenolic composition The analysis of variance showed significant differences in phenolic compounds among the different extracts of P. granatum (P < 0.01; Table S1). Our findings showed that gallic acid was the most abundant phenolic compound in all studied pomegranate pomace extracts (PPEs), in agreement with previous studies. According to previous reports, gallic acid is the main phenolic compound in the peel extract of pomegranate; the pomaces also included the peels. Gallic acid and ellagic acid may be the compounds responsible for the anti-inflammatory effect of P. granatum. Hydrolyzable tannins have a polyhydric alcohol at their core, the hydroxyl groups of which are partially, or fully, esterified with either gallic acid or ellagic acid. They may have long chains of gallic acid coming from the central glucose core. On hydrolysis with acid or enzymes, the hydrolyzable tannins break down into their constituent phenolic acids and carbohydrates. Similarly, Al Rawahi et al. (2014) reported the major phenolic compounds in the peel extract of P. granatum cultivated in Oman as gallic acid, ellagic acid, punicalin, and punicalagin. Phenolic acids such as ellagic acid, gallic acid, chlorogenic acid, caffeic acid, vanillic acid, ferulic acid and trans-2-hydroxycinnamic acid, as well as quercetin, have been identified in pomegranate (Bassiri-Jahromi and Doostkam, 2019). In our study, the greatest content of gallic acid was determined in the water extract at 60 °C in the water bath. Phenolic compounds have a considerable structural diversity, characterized by the hydroxyl groups on aromatic rings. According to the number of phenol rings and the structural elements that bind the rings to one another, such compounds are grouped and classified as simple phenols, phenolic acids, flavonoids, xanthones, stilbenes, and lignans. Phenolic compounds in P. granatum pomace in our study included two hydroxybenzoic acids, which also occur as constituents of hydrolyzable tannins (gallic acid and ellagic acid), three hydroxycinnamic acids (rosmarinic, p-coumaric and chlorogenic acids), one flavanone glycoside (hesperidin) and one flavan-3-ol (catechin). During the industrial extraction process, the tannins pass into the juice, and the high antioxidant capacity of pomegranate is attributed mainly to these compounds. In all four PPEs in the present study, gallic acid and ellagic acid were detected and identified. Caffeic acid was not found in our study. Caffeic acid is biosynthesized by hydroxylation of the coumaroyl ester of quinic acid (esterified through a side-chain alcohol); this hydroxylation produces the caffeic acid ester of shikimic acid, which converts to chlorogenic acid. We did not find caffeic acid, but its esters, rosmarinic acid and chlorogenic acid, were identified in all samples. This can also depend on the type of sample; the cultivar, genotype, extraction method and other factors have a strong influence on the phenolic content. Different pomegranate cultivars had different polyphenol compositions.
It is considerably associated with many factors such as cultivar type, growing region, maturity, cultivation, climate, edaphic condition, and storage conditions. Rosmarinic acid exhibits antioxidant and anti-inflammatory effects and has recently been shown to protect neurons in vitro against oxygen-glucose deprivation. Experimental See supplementary material. Conclusions Different amounts of phenolic compounds were detected in the samples, including gallic acid, catechin, ellagic acid, rosmarinic acid, hesperidin, p-coumaric acid and chlorogenic acid. Gallic acid was the major compound in all studied pomace extracts, with the maximum amount belonging to the water extract at 60 °C in the water bath. According to the findings of this study, the pomaces of P. granatum are natural sources of phenolic compounds. P. granatum pomace and its bioactive components, such as flavonoid and phenolic compounds, may have strong potential as a novel tool for preventing various human diseases and as a chemopreventive agent. Several beneficial effects are reported for these phenolic compounds, including antioxidant, anti-inflammatory, and antineoplastic properties. These compounds have been reported to have therapeutic activities in gastrointestinal, neuropsychological, metabolic, and cardiovascular disorders (Lin et al. 2018). It is estimated that total world production of pomegranate amounts to around 3 million tons annually, of which Iran produces approximately 28%; an annual pomegranate production of 10,866,300 tons has been recorded. After pressing of the fruits for juice or oil, the solid remains are the pomace, which includes the stems, seeds, pulp and skins of the fruit. During pomegranate juice processing, about 40 to 50 percent of the fruit is retained as pomace. The waste from these crops has been estimated at hundreds of thousands of tons (Animal Science Research Institute of Iran (ASRI), 2015). Since proper utilization of agricultural and food wastes such as the pomaces of P. granatum will reduce the costs and environmental hazards that result from their disposal and persistence in the environment, our studies are continuing to investigate and introduce the pomaces of P. granatum as valuable natural resources, and we plan to use these pomaces for other purposes.
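As an illustration of the external-standard quantification underlying the HPLC validation described above (a linear calibration curve whose R² confirms linearity, then reading unknowns off the fitted line), the sketch below works through the procedure with invented peak areas and concentrations; none of the numbers are data from this study.

```python
# Illustration of external-standard HPLC quantification: fit a linear
# calibration curve (area vs. concentration) and quantify an unknown.
# All concentrations and peak areas below are invented example values.

def linear_fit(x, y):
    """Least-squares slope, intercept and R^2 for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

conc = [5.0, 10.0, 25.0, 50.0, 100.0]    # standard concentrations (µg/mL), assumed
area = [12.1, 24.5, 60.3, 121.8, 244.0]  # detector peak areas, assumed
slope, intercept, r2 = linear_fit(conc, area)
unknown_area = 75.0                      # assumed peak area of a sample extract
unknown_conc = (unknown_area - intercept) / slope
print(f"R^2 = {r2:.4f} (linearity check), unknown ≈ {unknown_conc:.1f} µg/mL")
```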
2022-02-19T06:24:39.838Z
2022-02-17T00:00:00.000
{ "year": 2022, "sha1": "b8b4c5dd2fc6adb679f79a885c6a5b285c3e9805", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/journal_contribution/Bioactive_compounds_of_i_Punica_granatum_i_L_wastes_by_high_performance_liquid_chromatography_analysis/19189670/1/files/34095941.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "a5040908c967b5173fff579c6fd37208734b3cc5", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
199447906
pes2o/s2orc
v3-fos-license
Peripapillary Region Perfusion and Retinal Nerve Fiber Layer Thickness Abnormalities in Diabetic Retinopathy Assessed by OCT Angiography Purpose To quantify peripapillary region perfusion and retinal nerve fiber layer (RNFL) thickness abnormalities in different stages of diabetic retinopathy (DR) using optical coherence tomography angiography (OCTA). Methods Seventy-two eyes of 72 patients with diabetes were included as follows: 23 with no DR (No DR), 24 with mild-to-moderate nonproliferative DR (mild DR), 25 with severe nonproliferative to proliferative DR (severe DR), and 26 age-matched healthy controls. All eyes underwent a 4.5 × 4.5-mm rectangle scan centered on the optic nerve head. Vessel densities and RNFL thickness for the peripapillary area were calculated. Results A statistically significant decrease in vessel density was found in the peripapillary region with increased DR severity (all P < 0.001). There were significant correlations between DR severity and vessel density in the peripapillary region (P < 0.001), but not between DR severity and RNFL thickness (P > 0.05). There was a significantly positive correlation between vessel density and RNFL thickness of the peripapillary region in the mild DR group (r = 0.726, P < 0.001) but not in the no DR group (r = 0.008, P = 0.973) or the severe DR group (r = 0.281, P = 0.173). Conclusions Vessel density in the peripapillary region correlated significantly with DR severity, decreasing with DR aggravation. There was no obvious correlation observed between RNFL thickness and DR severity. Translational Relevance Vessel density in the peripapillary region, assessed by OCTA technology, can be potentially useful for analyzing and monitoring retinal nerve changes in DR patients. Introduction Diabetic retinopathy (DR) is a leading cause of acquired visual impairment and is increasingly becoming one of the world's most significant public health challenges. 1,2 Numerous substantial advances have been made in understanding the disease during the past few decades. 3,4 However, the pathogenesis of DR remains unclear. For many years, DR has been considered a type of microvascular problem. Recently, both neural and microvascular factors have been associated with DR. 5 Furthermore, retinal neurodegeneration has been found to have a significant role in the pathogenesis of DR, including apoptosis of retinal neuronal cells and peripapillary retinal nerve fiber layer (RNFL) thinning. 6 The RNFL is composed of retinal ganglion cell axons and makes up the innermost neural layer of the retina. 7 It has been proven that the nutritional demands of the RNFL are likely to be partially satisfied by radial peripapillary capillaries (RPCs). 8 Both histologic and clinical studies suggest that RPCs play an important role in the RNFL arcuate fiber area. 8,9 Many pathological changes, such as the Bjerrum scotoma, cotton wool spot, intraretinal hemorrhage, and ischemic optic neuropathy, have nerve fiber defects consistent with the distribution of RPCs. [10][11][12] Examining the changes of the RNFL and RPCs can provide an improved clinical understanding of neurodegeneration during the different stages of DR. However, there is little quantitative information regarding RPC microcirculation in diabetic patients. A recent noninvasive imaging technique, optical coherence tomography angiography (OCTA), shows repeatability and reproducibility in vessel density measurements of RPCs and thickness measurements of the RNFL. 
13,14 In the present study, OCTA was used to quantitatively analyze the changes in the vessel density of RPCs and RNFL thickness in the optic nerve head of DR patients in different stages, and their correlation with DR severity. Study and Patients This cross-sectional study was performed at the First Affiliated Hospital of Anhui Medical University and followed the tenets of the Declaration of Helsinki. This study was approved by the institutional review board at the First Affiliated Hospital of Anhui Medical University. Inclusion criteria were healthy eyes, eyes of patients with diabetes without DR, and eyes with DR, based on clinical assessment by retinal specialists. The exclusion criteria were eyes with opaque media that precluded fundus examination, glaucoma, a high refractive error (>6 diopters), uveitis, other retinal diseases, and ocular trauma. We also excluded eyes with evidence of optic disc neovascularization, optic disc edema, or those that had OCTA images with a scan quality index (SQI) of less than 5. Ocular Examination Each patient underwent a series of ocular examinations, including a biomicroscopy examination of the fundus, OCTA, color fundus photography, and intraocular pressure measurement using a noncontact tonometer. Each eye was graded using the Early Treatment Diabetic Retinopathy Study classification. 15 Based on the DR grade, which was determined by fundus photography and examinations, the less serious eye in one patient was chosen. When the two eyes of one patient had the same DR grade, the eye with a higher SQI was chosen. The eyes of the patients with diabetes were divided into the following three groups according to the DR grade: a diabetes mellitus without DR (no DR) group, a mild-to-moderate nonproliferative DR (mild DR) group, and a severe nonproliferative to proliferative DR (severe DR) group. Optical Coherence Tomography Angiography Imaging and Image Processing OCTA imaging of the optic disc was performed by using an AngioVue OCTA system (Optovue, Inc., Fremont, CA). A 4.5 × 4.5-mm rectangle scan centered on the optic nerve head was performed. The newly developed, built-in AngioAnalytics software (version 2017.1.0.151; Optovue, Inc.) was used to evaluate vessel density and RNFL thickness. The software defines the peripapillary region as a 1.0-mm wide round annulus extending from the optic disc boundary (Fig. 1A). The peripapillary vessels were analyzed in superficial retinal layers from the RPC segment that extends from the inner limiting membrane to the nerve fiber layer (Fig. 1B); peripapillary vessel density was defined as the percentage of the area occupied by the vessels in the peripapillary region. The software calculated the vessel density for the peripapillary area (Fig. 1C). Simultaneously, the average RNFL thickness for the peripapillary area was recorded (Fig. 1C). Color maps were also used to show the vessel density (Fig. 1D) and RNFL thickness (Fig. 1E) immediately. Error in automatic segmentation sometimes occurred; in these cases, we manually corrected the entire scan volume. Statistical Analysis SPSS software for Windows, version 21.0 (IBM Corp., Armonk, NY), was used for statistical analysis. Normality of data was assessed using the Shapiro-Wilk test. All data are shown as the mean ± standard deviation (SD), median and interquartile range (IQR, 25th-75th percentile), or percentages if appropriate. Differences in the data were assessed using the t-test or analysis of variance (ANOVA).
Correlations between the OCTA parameters and DR severity were examined by using the Kendall's tau correlation coefficient; DR severity was defined as a continuous or categorical variable. A Spearman rank correlation analysis was performed to determine the relationships between vessel density and RNFL thickness. A P value <0.05 was considered statistically significant. Patient Characteristics The study included 26 healthy, age-matched controls (26 eyes) and 72 patients with diabetes (72 eyes). The 72 eyes of patients with diabetes were divided into the following three groups based on the DR grade: 23 eyes had no DR, 24 eyes had mild DR, and 25 eyes had severe DR. All demographic data, general clinical characteristics, and SQI values of the images are shown in Table 1. Figure 2 shows a representative sample of vessel density in the peripapillary region with increasing DR severity. The average RPC vessel density for the peripapillary area is reported in Table 2 for each group. A statistically significant decrease in vessel density was found in the peripapillary region among the four groups (P < 0.001) (Table 2). Using the Kendall's tau correlation coefficient analysis, significant correlations between DR severity and vessel density in the peripapillary region were observed (P < 0.001) (Table 3). When post hoc multiple comparisons were performed, statistically significant decreases in vessel density were observed in the mild and severe DR groups when compared with the control group, and in the severe DR group when compared with the no DR group (Table 2, Fig. 3A). Furthermore, a statistically significant decrease in vessel density was observed in the severe DR group when compared with the mild DR group (Table 2, Fig. 3A). No significant difference in the vessel density was observed between the control group and the no DR group (Table 2, Fig. 3A). Figure 4 shows a representative sample of RNFL thickness in the peripapillary region with increasing DR severity. The average RNFL thicknesses in the peripapillary area are reported in Table 2. No significant difference in RNFL thickness was found in the peripapillary region among the four groups (P > 0.05) (Table 2). Using Kendall's tau correlation coefficient analysis, no significant correlation between DR severity and RNFL thickness was found in the peripapillary region (P > 0.05) (Table 3). Using the Student's t-test, a statistically significant decrease in the RNFL thickness was observed in the no DR group when compared with the control group (P < 0.05) (Fig. 3B). No significant difference in RNFL thickness was found in the peripapillary region between the no DR, mild DR, and severe DR groups (all P > 0.05) (Fig. 3B). Correlations Between Vessel Density and Retinal Nerve Fiber Layer Thickness With Increasing Diabetic Retinopathy Severity The results of the Spearman correlation analysis of vessel density and RNFL thickness of the entire peripapillary region among the different groups are shown in Figures 5A through 5C. Generally, there was a significant positive correlation between vessel density and RNFL thickness of the entire peripapillary region in the mild DR group (r = 0.726, P < 0.001) (Fig. 5B). However, no statistically significant associations were found between vessel density and RNFL thickness of the peripapillary region in the no DR group (r = 0.008, P = 0.973) or the severe DR group (r = 0.281, P = 0.173) (Figs. 5A, 5C).
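To make the reported analysis pipeline concrete, the following is a minimal, hypothetical sketch of how the two correlation tests described above (Kendall's tau between DR grade and each OCTA parameter, and Spearman's rho between vessel density and RNFL thickness) could be run in Python with SciPy. The numeric arrays are invented placeholders for illustration only and are not the study data, which were analyzed in SPSS.

# Hypothetical sketch of the correlation analyses described above.
# The numbers below are invented placeholders, NOT the study data.
from scipy import stats

# DR severity coded as an ordinal grade: 0 = control, 1 = no DR, 2 = mild DR, 3 = severe DR
dr_grade       = [0, 0, 1, 1, 2, 2, 3, 3]
vessel_density = [52.1, 51.4, 50.8, 50.2, 47.9, 46.5, 42.3, 40.7]   # peripapillary vessel density (%)
rnfl_thickness = [108, 105, 99, 101, 103, 98, 102, 100]             # peripapillary RNFL thickness (um)

# Kendall's tau between ordinal DR severity and each OCTA parameter
tau_vd, p_vd = stats.kendalltau(dr_grade, vessel_density)
tau_rnfl, p_rnfl = stats.kendalltau(dr_grade, rnfl_thickness)
print(f"DR grade vs vessel density:  tau={tau_vd:.3f}, p={p_vd:.4f}")
print(f"DR grade vs RNFL thickness: tau={tau_rnfl:.3f}, p={p_rnfl:.4f}")

# Spearman rank correlation between vessel density and RNFL thickness
# (in the study this was run within each DR group separately)
rho, p_rho = stats.spearmanr(vessel_density, rnfl_thickness)
print(f"Vessel density vs RNFL thickness: rho={rho:.3f}, p={p_rho:.4f}")

In the study the Spearman analysis was repeated within each group (control, no DR, mild DR, severe DR) rather than across the pooled sample; the sketch pools the values only to keep the example short.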
Discussion In this study, we evaluated the correlation between microvascular and RNFL thickness changes in the peripapillary region and disease severity in eyes with DR via OCTA. The RPCs are a unique capillary plexus within the inner aspect of the RNFL. 16,17 The high energy demands placed upon the nonmyelinated axons located in the RNFL make it highly vulnerable to injury from ischemic insults. 18,19 Previous studies have demonstrated an association between RPC changes and RNFL loss in glaucoma. [20][21][22] Given the involvement of RPCs in the structural and functional changes in conditions, such as glaucoma, cotton wool spots, Bjerrum scotoma, and ischemic optic neuropathy, 10-12 studies involving patients with diabetes to investigate how RPCs and the RNFL may be influenced in such circumstances may offer further insights. Quantitative evaluation of RPCs was limited by the difficulties of visualizing the vessels on conventional fluorescein or indocyanine green angiography as a result of choroidal circulation. 23,24 OCTA provides a noninvasive and fast approach to peripapillary region perfusion analysis 13,24 as it is a transition from structural to functional imaging. Many studies have shown that OCTA allows clear capillary definition around the optic papilla 9,20-22 and is highly repeatable in follow-up among operators. 13 Currently, there is very limited quantitative data available regarding peripapillary region microcirculation in diabetic patients. We found a significant decrease in vessel density of RPC in the peripapillary region in patients with DR, which was similar to the results of a recent publication by Vujosevic et al. 25 ; however, Vujosevic et al. 25 did not report the data of eyes with moderate or severe NPDR or PDR. In our study, we found a significant correlation between DR severity and vessel density in the peripapillary region, which differed from the findings of Vujosevic et al. 25 and Cao et al. 26 who observed a decrease in vessel density of RPC in the peripapillary region in patients with DM but without DR. These results may have differed, due to the difference in demographics or the use of different OCTA technologies. Previous studies have demonstrated RNFL thinning in patients in the early stages of diabetes 27,28 as we observed in the RNFL of diabetic patients without DR in our study. The lack of significant difference in RNFL thickness in the peripapillary region between the no DR, mild DR, and severe DR groups observed here could possibly be attributed to the structural changes in the retinal tissue in the peripapillary region caused by intracellular and extracellular edema, hemorrhage, exudation, or glial fibrillary degeneration around the optic papilla. 29,30 Additionally, most previous studies used OCT to analyze RNFL thickness changes at early stages of DR. 27,28 Notably, we did not find a correlation between RNFL thickness changes in the peripapillary region and disease severity in eyes with DR via OCTA. It might be a clinical challenge to evaluate RNFL impairment by assessing the thickness in patients with severe DR, further emphasized by our correlation analysis findings between the vessel density and the RNFL thickness in the context of increasing DR severity. A statistically significant decrease in the RNFL thickness was observed in the no DR group when compared with the control group, which was similar to previously published results. 
27,28 However, no significant difference in the vessel density was observed between the control group and the no DR group. Interestingly, there was a significant positive correlation between vessel density and RNFL thickness of the peripapillary region in the mild DR group. This suggests that decreased RPC perfusion impairs the nutritional supply to the RNFL, which might have a profound influence on the activity and metabolism of the nonmyelinated axons located in the RNFL. The decrease of vessel density in the peripapillary region can be attributed to microvascular impairment. During diabetes progression, structural alterations of capillaries occur, including vascular basement membrane thickening, endothelium dysfunction, and pericyte apoptosis, and these changes would cause a reduction in blood flow and capillary occlusion. 31,32 These structural alterations of capillaries also occur in small blood vessels in the peripapillary region. 33 When the RPC perfusion decreased further, structural changes in the RNFL, such as intracellular and extracellular edema and hemorrhage, caused RNFL thickness to increase when assessed by OCTA. This may explain why no statistically significant associations were found between vessel density and RNFL thickness of the peripapillary region in the severe DR group. A limitation of this study is that it is difficult to deduce the causal relationship between microcirculation changes in the peripapillary region and neurodegenerative changes from this cross-sectional study. In future studies, an improvement would be to make longitudinal observations to analyze the relationship between optic papilla blood circulation changes and RNFL thickness changes in DR patients throughout disease progression. Another limitation includes the absence of vessel skeletonization, which can remove the influence of vessel size on retinal perfusion measurements. 25 Moreover, due to severe macular edema in some patients with severe DR, eyes with diabetic macular edema (where there is swelling of the RNFL) are prone to segmentation errors. This may have led to inaccurate vessel density measurements, despite accounting for this by manually correcting the entire scan volume. In conclusion, vessel density of RPCs for the peripapillary region was closely related to DR severity and decreased with DR progression. Nevertheless, there were no obvious correlations observed between changes in RNFL thickness and DR severity, suggesting that careful monitoring of RPC vessel density in the peripapillary region might reveal neurodegeneration during clinical stages of DR. These findings may provide valuable insights, heighten our understanding, and offer a new direction for further investigation into microvasculature and neurodegeneration in DR.
2019-08-08T13:13:51.312Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "14c42697b678d1aa6a4ed24713dd8c0b5aa6ae89", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/tvst.8.4.14", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14c42697b678d1aa6a4ed24713dd8c0b5aa6ae89", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49234799
pes2o/s2orc
v3-fos-license
The Application Research of Intelligent Quality Control Based on Failure Mode and Effects Analysis Knowledge Management SPC provides statistical techniques and graphical displays (control charts) to enable the quality of conformance to be monitored and special causes of process variability to be eliminated (Montgomery D.C. 2005). Combined with computer technology and automatic control technology, SPC has realized on-line control of product quality. With an online SPC system, a large amount of manufacturing process quality data can be collected and stored in the SPC database. These data can be accessed via operations on the SPC database, and many kinds of quality reports can be generated for specific uses. However, this technology has not been well applied in all enterprises (Venkatesan G., 2003). In the practical manufacturing process, after the operators get the analyzed results from the SPC system, they need to make decisions to eliminate the fluctuation in the manufacturing process. But this depends largely on the personal knowledge and experience level of the on-site operator. Under these circumstances, the decision results are uncertain, and quality problems may not be solved in time. So, it is very important to find appropriate methods to support the operators' decision making, especially techniques that can help the operators make the right decision in time. Considering the existing limitations discussed above, this research is prompted by the need for a technique that enables the intelligent acquisition of Failure Mode and Effects Analysis (FMEA) knowledge for decision making in the quality control process. The study specifically looks at intelligent knowledge searching and reasoning in the FMEA repository, which can facilitate the operators' decision making. In this chapter, FMEA knowledge sharing in manufacturing process quality control will be discussed after a brief review of FMEA, and an application system named IQCS has been developed to manage the FMEA knowledge for continual improvement of manufacturing process quality. Introduction SPC provides statistical techniques and graphical displays (control charts) to enable the quality of conformance to be monitored and special causes of process variability to be eliminated (Montgomery D.C. 2005).
Combined with computer technology and automatic control technology, SPC has realized on-line control of product quality. With an online SPC system, a large amount of manufacturing process quality data can be collected and stored in the SPC database. These data can be accessed via operations on the SPC database, and many kinds of quality reports can be generated for specific uses. However, this technology has not been well applied in all enterprises (Venkatesan G., 2003). In the practical manufacturing process, after the operators get the analyzed results from the SPC system, they need to make decisions to eliminate the fluctuation in the manufacturing process. But this depends largely on the personal knowledge and experience level of the on-site operator. Under these circumstances, the decision results are uncertain, and quality problems may not be solved in time. So, it is very important to find appropriate methods to support the operators' decision making, especially techniques that can help the operators make the right decision in time. Considering the existing limitations discussed above, this research is prompted by the need for a technique that enables the intelligent acquisition of Failure Mode and Effects Analysis (FMEA) knowledge for decision making in the quality control process. The study specifically looks at intelligent knowledge searching and reasoning in the FMEA repository, which can facilitate the operators' decision making. In this chapter, FMEA knowledge sharing in manufacturing process quality control will be discussed after a brief review of FMEA, and an application system named IQCS has been developed to manage the FMEA knowledge for continual improvement of manufacturing process quality. The origin and development of FMEA FMEA is a technique that identifies the potential failure modes of a product or a process and the effects of those failures, and assesses the criticality of these effects on product functionality. According to BS 5760 Part 5, "FMEA is a method of reliability analysis intended to identify failures, which have consequences affecting the functioning of a system within the limits of a given application, thus enabling priorities for action to be set" (British Standards Institution, 1991). It provides basic information for reliability prediction and for product and process design (Sheng-Hsien Teng & Shin-Yann Ho, 1996). In the early 1950s, the Grumman Corporation in the U.S. was among the first to apply FMEA and obtained very good results (Hai ZHANG & Zhi-bin ZHOU, 2007). Since then, FMEA has found widespread application in military system design, such as aviation, astronautics, ships, and weapons. Under the leadership of the Ford Company, the Big Three auto makers started to introduce FMEA into product quality improvement (Daimler Chrysler Co., Ford Co. & General Motor Co., 2008). FMEA has since been widely applied in vehicle safety assurance. As the application value of FMEA became widely recognized, it has been applied in more and more other fields, such as machinery, medical equipment, nuclear power, and food safety assurance. After a long period of development and improvement, FMEA has become a necessary reliability analysis activity that must be completed in system development (Stamatis D.H. 2003).
Traditionally, FMEA is used in hard copy or spreadsheet format to capture the potential problems of a design or process. The knowledge captured is intended for reuse. However, as the knowledge in the FMEA grows, it gets harder and harder to reuse. The implementation of a highly manual FMEA system is a difficult task. FMEA is found to be not user friendly, hard to understand, and not very flexible. As a result, many companies use FMEA merely to satisfy the contractual requirements of their customers (Dale B.G., & Shaw P., 1996). Users often find FMEA a "tedious and time-consuming activity" (Price Chris, Pugh David R., Wilson Myra S., & Snooke Neal, 1995). This is especially true when FMEA is used in complex systems with multiple functions (Hai ZHANG, & Zhi-bin ZHOU, 2007). The significance of FMEA knowledge acquisition in process quality control With the increasing complexity of modern manufacturing systems, there are more and more uncertain factors in the manufacturing process, and the difficulty of process reliability analysis has increased greatly. Meanwhile, the handling and analysis of quality problems are restricted enormously by the personal knowledge and experience level of the on-site operators. FMEA knowledge resources, which have been accumulated in earlier periods, can not only facilitate comprehensive decision making but also improve the quality problem analysis process. So, utilizing FMEA knowledge resources is significant for the continual improvement of the manufacturing process. The structure and principle of IQCS As shown in Fig. 1, the structure model of the IQCS mainly contains three tiers: 1. Function tier: It performs real-time data acquisition and analysis in the manufacturing process. It can output the statistical analysis results in the form of quality reports, and it inputs the manufacturing process data into the SPC system database. 2. Data tier: It implements the FMEA process, which is conducted by experts coming from different areas, such as design, manufacture, and quality management. The knowledge and experience of the experts are extracted through "brainstorming" activities. Then, the FMEA results are transformed into FMEA knowledge according to a specific method and put into the FMEA repository. Fig. 1. The principle of IQCS based on the FMEA repository 3. Knowledge tier: It is a database designed to store the data collected by the SPC system, and the knowledge is stored in the FMEA repository, which is designed with a specific structure. In the practical manufacturing process, the function tier of the SPC system collects and analyzes the real-time process data; the statistical analysis results are then output according to the users' specific requests, and the manufacturing process data are put into the SPC system database. The function tier of FMEA is to complete the FMEA process. The interaction tier, which includes the SPC system database and the FMEA repository, is responsible for the dynamic interaction between FMEA and the SPC system. There are two tasks: the first is to extract information dynamically from the SPC system for the FMEA process, and the other is to visit the FMEA repository to provide decision support for quality control and system adjustment in the manufacturing process. So, it is very important to establish the interaction mechanism of the interaction tier. The operation of the IQCS system.
In the manufacturing process, the function of IQCS is realized by the operation of FMEA and the SPC system through the data mining/information extracting process and the problem handling and decision process. The data mining/information extracting process collects manufacturing process data dynamically from the SPC system, and these data are provided to the experts to conduct the FMEA process. The results of FMEA are transformed and put into the FMEA repository, providing real-time decision support for operators to analyze quality problems and adjust the manufacturing system in time. In this process, the most important keys are data mining/information extracting, FMEA knowledge transformation, and the interaction between SPC and FMEA. Data processing and FMEA knowledge extracting. To make full use of the large amount of quality data and messages from the manufacturing process stored in the SPC system database, the main task of data mining/FMEA information extracting is to develop a specialized algorithm for knowledge discovery and data mining based on Knowledge Discovery in Databases. FMEA knowledge transformation based on ontology Because of the diversity and complexity of FMEA knowledge, the representation methods now often used (object-oriented knowledge representation, process models, predicate logic, frame-based description, and production rules) all have disadvantages to some extent. These methods cannot express the meaning of FMEA knowledge precisely. Ontology has an accurate form of expression and explicit semantics that can define the relationships between concepts, between concepts and objects, and between objects. This expressive form reduces misunderstandings about the relationships between concepts and their logic, and makes the sharing and reuse of knowledge possible. It is the important theoretical basis of an FMEA knowledge system based on ontology. The expressive method based on ontology can depict the basic knowledge system of a given domain through a formal description of concepts, terms, and the relationships between them. The method can not only depict object-hierarchical models, such as organizations, resources, and products, but also denote abstract matters, such as beliefs, targets, plans, and activities. Using ontology to represent knowledge makes the FMEA knowledge framework legible. FMEA knowledge can be organized systematically and structurally through ontology-based knowledge modeling. The well-organized model can promote the analysis and resolution of quality problems. Meanwhile, it is useful for sharing and reusing the knowledge within a corporation and even between corporations in the same domain. The interaction between the SPC system and the FMEA repository In order to combine FMEA with SPC dynamically, the internal relations between the SPC system and the FMEA repository must be established in combination with the specific manufacturing process, and knowledge discovery in FMEA is driven by the results of the SPC system. The coordination mechanism of the database and repository is introduced in the transformation of tacit knowledge, which mainly comes from the experience of experts (You-zhi XU, Dao-Ping WANG, & Bing-ru Yang, 2008). Based on learning from the above references, the IQCS system operation process was developed. First, the structured data of the product's key quality characteristic values are established according to the manufacturing process; second, the intelligent searching and reasoning process in the FMEA repository is started through the association of the product and its corresponding characteristic values; finally, the FMEA knowledge corresponding to the key quality characteristic value is obtained.
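To make the interaction just described more tangible, here is a minimal sketch in Python of how an out-of-control point detected by a simple SPC check could trigger a lookup of the associated FMEA knowledge. It is an illustration only, not the IQCS implementation: the control limits, field names, characteristic name, and repository entries are all hypothetical, and a plain dictionary stands in for the ontology-based FMEA repository.

# Minimal sketch of the SPC -> FMEA interaction described above.
# All field names, control limits, and repository entries are hypothetical.
from statistics import mean, stdev

def out_of_control(samples, mu, sigma):
    """Simple Shewhart-style check: flag values beyond the 3-sigma control limits."""
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma
    return [x for x in samples if x > ucl or x < lcl]

# A toy FMEA repository keyed by the key quality characteristic.
# In IQCS this role is played by the ontology-based FMEA repository.
fmea_repository = {
    "shaft_diameter": [
        {"failure_mode": "oversize diameter",
         "potential_cause": "tool wear on finishing pass",
         "recommended_action": "inspect and replace cutting tool; re-verify offsets"},
        {"failure_mode": "undersize diameter",
         "potential_cause": "thermal drift of spindle",
         "recommended_action": "allow warm-up cycle; recalibrate machine"},
    ],
}

def lookup_fmea(characteristic):
    """Return the FMEA knowledge associated with a key quality characteristic."""
    return fmea_repository.get(characteristic, [])

# Example run with invented measurements of a key quality characteristic (mm).
baseline = [20.01, 19.99, 20.00, 20.02, 19.98, 20.01]   # in-control history
mu, sigma = mean(baseline), stdev(baseline)
new_samples = [20.00, 20.01, 20.09]                      # latest subgroup from the SPC database

alarms = out_of_control(new_samples, mu, sigma)
if alarms:
    print(f"Irregular data detected: {alarms}")
    for entry in lookup_fmea("shaft_diameter"):
        print(f"- {entry['failure_mode']}: {entry['potential_cause']} -> {entry['recommended_action']}")

In the actual system the query would run against the ontology-based repository and be driven directly from the SPC database, but the flow is the same: detect the irregular value, then retrieve the matching failure modes, causes, and recommended actions for the operator.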
An illustrative example of IQCS Combined with a specific manufacturing quality control process, the quality data extraction and abnormal quality problem analysis process in IQCS are as follows: Fig. 2. A specific quality control process based on the FMEA repository and SPC system Step 1. The operator extracts the quality data according to the key quality characteristics. The regular data are put into the SPC database; if irregular data appear, then switch to Step 2. Step 2. The system reminds the on-site operator with a pop-up message and starts searching the FMEA repository at the same time. Step 3. After the corresponding FMEA knowledge is found in the FMEA repository, the results are sent back to the operator through the system interface to support the operator's decision making. The specific flow chart of this process is shown in Fig. 2. Conclusion In order to solve the problems of analyzing quality problems in the manufacturing process and to eliminate the uncertainty in decision making about manufacturing system adjustment, this paper presents an approach that can effectively transform the tacit knowledge in the FMEA process through an FMEA repository based on ontology. With FMEA combined with the SPC system, intelligent searching and reasoning can be realized through the cooperation of the FMEA repository and the SPC database. The FMEA repository based on ontology can realize FMEA knowledge sharing and reuse through intelligent reasoning technology. Based on the theoretical study, an application system, IQCS, has been developed for a specific manufacturing process. Its application shows that it can not only improve the efficiency of quality control but also prevent potential quality problems. Thus, the ability of the manufacturing process to cope with quality fluctuations will be enhanced. This research provides an effective way to support decision making for process quality improvement and manufacturing system adjustment. Based on the current study, further research will focus on developing enabling techniques that can facilitate the dynamic coordination of the FMEA repository and the SPC system in the manufacturing process.
2018-06-16T04:40:31.320Z
2011-04-26T00:00:00.000
{ "year": 2011, "sha1": "7d220041fd313b7c3ff36b2c5c2ac3aacc6e358c", "oa_license": "CCBYNCSA", "oa_url": "https://www.intechopen.com/citation-pdf-url/14851", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "58150038f9c2438a5c89f72f066f2b685e5e22bc", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Engineering" ] }
168273855
pes2o/s2orc
v3-fos-license
LESSONS FOR NEW ZEALAND LIFELINES ORGANISATIONS FROM THE 17 JANUARY 1995 GREAT HANSHIN EARTHQUAKE This report outlines the observations and findings with regard to lifelines and other infrastructural items from each of the various New Zealand post-earthquake visits to Kobe subsequent to the NZNSEE reconnaissance team visit. The preliminary assessments on lifelines aspects made in the NZNSEE reconnaissance team report are developed further. Lessons and recommendations for New Zealand are presented. In addition to lifelines aspects, observations are also made on the political decision-making process, subsequent economic trends and temporary housing and emergency management issues. EDITORIAL Towards the end of last year, one of my colleagues who was on leave in the United States sent back a cutting from the New York Times (8 August 1995) containing an article by Sandra Blakeslee entitled "Hopes for predicting earthquakes once so bright, are growing dim". The article was based on the views of a number of U.S. seismologists and geophysicists and describes how the science of earthquake prediction has fallen on hard times with some experts viewing earthquakes as a classic example of a chaotic system. Many U.S. seismologists are now reported to think that earthquakes are inherently unpredictable. They say that the search for ways to warn people days, hours, or minutes before an earthquake appears to be futile. Currently, a third of the US$100 million now spent on earthquake research and reducing hazards in the U.S. is spent on ways to construct safer buildings, bridges and highways. The rest is spent on basic research on understanding earthquakes and looking for ways to predict them. If earthquakes cannot be predicted, it is not surprising that some U.S. scientists think that much more of the money should be spent on reducing the hazards. Whatever the outcome, the article reports that earthquake prediction has undergone a reversal of fortune from the belief of many scientists during the 1960s through to the mid-1980s that it was possible. Two scientific models drove this optimism. One, called dilatancy theory, is similar to what happens when people at a beach step on wet sand and the sand dries out around their feet. It was thought that the same phenomena would occur in earthquake faults before their failure, as stressed rocks deformed in a characteristic way and released water that could be detected. A second related idea, called the seismic gap hypothesis, says that earthquakes tend to repeat along known fault zones. After an earthquake, stress is released; over time, perhaps several hundred years, strain reaccumulates and the fault is destined to break again in a more or less characteristic pattern. The likelihood of earthquakes reoccurring along several segments of the San Andreas fault has been predicted using this latter model. Instruments were set up along one segment of the fault near Parkfield, about halfway between San Francisco and Los Angeles, where the seismic gap hypothesis predicted another earthquake should occur. The instruments are designed to find precursors, like subtle motions in the earth's crust, so that people can be warned shortly before an earthquake strikes. Unfortunately, the predictions have not been going as planned, for the Parkfield earthquake is overdue, having been predicted to occur before 1992! Also, the most recent damaging earthquakes in North America and Japan - Loma Prieta, Northridge and Kobe - struck without any precursory signals.
More importantly, there seems to have been a shift in thinking about the dynamics of earthquakes. The idea that big stresses build up along fault zones and then have to be released in a characteristic manner is no longer considered to make as much sense as it once did. Since actual measurements deep in the ground indicate that stresses within faults are actually very weak, the mystery is why earthquakes occur at all with such small stresses. Apparently, clues to this mystery are now being found in the new science of chaos and complexity, with earthquakes being a classic example of a chaotic system. In this view, the earth's crust is prone to constant shifting, particularly along fault lines. Tiny earthquakes are occurring all the time. In major seismic regions of the world, thousands of earthquakes may be detected each year, though most are not felt. However, for reasons not understood, some of these small earthquakes do not stop. Local rock conditions or other geologic factors allow a magnitude one earthquake to expand into a magnitude two earthquake involving a larger region. Less commonly, a magnitude two earthquake develops into or sets off a magnitude three earthquake, and so on. According to this view, a big earthquake can be thought of as a small one that has run away. The problem in earthquake prediction lies in being able to predict in sufficient detail just which of the small earthquakes will become large. However, not all scientists share this pessimism and a number of geophysicists have recently proposed an earthquake model that might help to determine which earthquakes could become large. This involves measuring the slow, steady slip that occurs along faults before they undergo the high speed slipping that gives rise to an earthquake. One of the problems is how to detect slow slippage (so slow that it doesn't generate seismic waves) over sufficiently large areas. According to the article, the recent large earthquakes in California and Japan have had the effect of bringing the debate about earthquake prediction to public notice rather than letting it remain in more academic circles. Now some people are calling for the Government to stop spending money on earthquake research and put it all into reducing earthquake hazards. As a seismologist at the United States Geological Survey says, that would be foolish as faults could interact in complex ways and not understanding them could be more costly in the long run. Nevertheless, there are suggestions that more could be spent on engineering research as there is serious under-investment in our knowledge of how buildings work in earthquakes, how the ground moves and how buildings react. The recent large earthquakes have shown that tall structures in our cities are likely to suffer severe damage. After the Northridge earthquake for instance, it was found that about 100 steel-framed buildings were damaged, and they had been considered to be one of the better types of earthquake resistant structure. As the Californian Seismic Safety Commission says in its report on the Northridge earthquake: "Despite our codes and world-renowned expertise, too many of our buildings and other structures remain vulnerable to earthquake damage. There are significant weaknesses in the way we exercise land use planning laws and design and construct buildings and lifelines".
One solution offered is for a moratorium on building any new structures over six stories high in the Los Angeles area until engineers know how to build tall structures that can withstand moderate and severe shaking! If nothing else, it looks as though the debate about earthquake prediction and how best to spend funds to improve seismic safety will go on for some time yet, not only in the United States but also in New Zealand, Japan and other countries in the highly seismic regions of the world. 1 Background At 5.46 am on 17 January 1995 the Hyogo-Ken Nanbu (Great Hanshin) earthquake struck the Kansai region of Japan. More than 5,500 people were killed and 26,800 seriously injured. Approximately 300,000 people were made homeless, with 234,000 in Kobe City itself. Kobe is Japan's sixth largest city, with a population of 1.5 million people. The Hanshin district is the second most important industrial area in Japan after the Tokyo -Yokohama area. This earthquake is clearly of interest to New Zealand and is of particular relevance to Wellington due to geographical and geological similarities. Although of lesser magnitude than that portrayed for the Wellington Fault Event earthquake, the concentration of damage to a narrow but long strip is considered to represent a comparable effect. There are also parallels with other areas in New Zealand, as an earthquake of this scale was not anticipated in this part of Japan, despite the national context of high seismicity. In addition to the severe damage suffered by many buildings, this earthquake caused major damage and disruption to lifelines on a much wider scale than other recent international events. Accordingly, this earthquake presented a unique opportunity for lifelines managers, engineers and researchers from New Zealand to learn key lessons experienced by their Japanese counterparts. Information has been keenly sought in relation to the likely extent of damage that the various lifelines in New Zealand could anticipate, the effectiveness of recent mitigation measures and issues associated with the response and recovery phases. Scope of This Report The commitment of New Zealand to learning and applying the lessons from the Great Hanshin earthquake has been clearly demonstrated by the people and groups who have visited Kobe in the months following the event. These have included: • The immediate post-earthquake reconnaissance mission by a 13-person party organised by the New Zealand National Society for Earthquake Engineering and headed by Professor R. Park, with David Spurr being responsible for compiling the material on lifelines [Park et al., 1995). A subsequent follow-up visit was made in July by Ian Billings, a member of that team, looking at the recovery of the transportation networks [Billings, 1995). • The study of the disaster response and recovery startup undertaken from 12 -24 March 1995 by Neil Britton and Rachel Scott of the Wellington City Council Emergency Management Office [Britton and Scott, 1995). Thirty-two presentations were made on the full range of issues from detail on physical mitigation to mathematical techniques for analysing lifeline system operations and interaction. Proceedings from the workshop are available; a summary of the papers has been given by Hopkins [1995]. Given the proximity to Kobe, post-workshop tours of the affected city enabled the opportunity to learn more of how the key lifeline organisations coped with the earthquake damage and managed their response and recovery phases. 
The workshop was hosted by the Public Works Research Institute of Japan. New Zealand Lifelines Study Tour This week-long tour to Kobe was undertaken from 27 August to 1 September 1995. Arrangements for this visit were made through the Mayor's Office of Kobe City. The following people from New Zealand participated in this tour: • Peter Leslie, Divisional Manager Utility Services, Wellington Regional Council • Councillor Ernie Gates, Wellington Regional Council • Graeme Hughson Acknowledgements There has been a considerable financial commitment toward the information contained in this report from a number of organisations. The major contributors include: Earthquake Commission Wellington Regional Council Wellington City Council New Zealand Gas Association Wellington Earthquake Lifelines Group Christchurch Engineering Lifelines Project Centre for Advanced Engineering Hutt City Council The considerable effort made by the contributors to this and previous reports on the Kobe earthquake, much of it invariably outside normal working hours, is also worthy of note. This commitment of time, money and effort by organisations and individuals alike is gratefully acknowledged, with the collective value being reflected in the following pages. The generosity in terms of time and sharing of information shown by the Japanese in hosting the New Zealand groups and individuals must also be acknowledged. Particular mention should be made of the Mayor's Office of the City of Kobe and the Public Works Research Institute of Japan. The willing assistance provided by Mr Kiyoyuki Kanemitsu, Director of the International Department of the Mayor's Office, proved invaluable. FIGURE 2.2 The approximate extent of MM9 and MM10 intensities [Alexander Howden Group Ltd, 1995]. The fault moved 1.2 m horizontally. While the fault runs through beneath Kobe City, no surface fault ruptures were found in the Kobe area due to the depth of relatively soft alluvial materials present at the foot of the Rokko mountains. FIGURE 2.1 The Hanshin district showing the earthquake epicentre. Liquefiable sands and sandy silts are present under all the reclamations and former beach areas. The total thickness of reclamation fill is up to 15 m [Takeda and Ueno, 1995]. The strongest recorded acceleration was 0.82g measured at the Kobe marine weather station. The period of strong shaking was approximately 20 seconds. Summary of the Effects on Individual Utility Services The overall response of key utility services in terms of the rate of service restoration is summarised in Figure 2.4. The number of days taken to achieve 100% service restoration takes account of the number of destroyed buildings. Water Supply Even with the high levels of preparedness and mitigation undertaken by Kobe City Waterworks, the level of damage was considerable, with an estimated NZ$500 million damage being sustained. This was the first time in Japan that there had been a complete water outage. It took 11 days to restore water to 50% of customers and 72 days to achieve 100% restoration. Wastewater Services Widespread damage to Kobe's sewerage and stormwater drainage infrastructural assets was experienced, with an estimated damage cost of NZ$670 million. The full extent of this damage will take more than a year to determine. The planned two year works programme for full recovery is reliant upon very high levels of funding support from the national government.
Transportation Networks The loss of elevated sections of the Hanshin Expressways and the railway structures that were constructed prior to the early 1970's severely restricted transportation capability. The cost to repair the Hanshin Expressway and to upgrade it to current earthquake standards is estimated at NZ$7.6 billion. While rail services were restored within 160 days, it will take nearly two years to renew damaged parts of the elevated expressway network, causing continuing disruption to local travel patterns and the economy. Port Facilities The loss of virtually all of the port facilities has had a considerable adverse impact on the regional economy. Port reconstruction costs are estimated at NZ$10 billion and all repairs are programmed to be completed 2 years after the earthquake. Most of the working berths at the port were seriously affected by the earthquake and were forced to close until temporary or permanent repairs could be carried out. Large areas of reclamation suffered settlement, and rotation of the wharf face caisson structures caused severe damage to all of the container cranes in the port. The port has moved with great rapidity to reconstruct its facilities. While some cargoes continue to be diverted through other ports, the trade through the Port of Kobe had recovered to 63% of its pre-earthquake volumes within 7 months of the earthquake. Gas Supply The Osaka Gas Company supplies a total of 5.6 million customers in the greater Osaka region. After the earthquake, 860,000 customers in Kobe and neighbouring cities were without gas. It took the company almost three months to repair the damaged network and restore supply to those whose premises remained intact. The considerable amount of repair work required made up the largest part of the company's stated losses due to the earthquake of NZ$3.2 billion. Electrical Networks The power supply network in the Kobe area suffered significant damage, with 25% of the 11.7 million customers in the Osaka region losing power. Approximately half of the transformer substations within the highest intensity zone were damaged. These had all been constructed prior to 1965. While service was fully restored within a week of the earthquake, the estimated cost of rebuilding all necessary damaged facilities is NZ$3.8 billion. Mitigation and preparedness measures previously implemented were considered to have been particularly effective. In addition to a system design featuring a high degree of redundancy and flexibility, participation in a nationwide mutual aid scheme brought significant assistance with personnel and materials. Telecommunications Facilities In addition to the loss of 285,000 lines, network capacity limitations were experienced in the days following the earthquake. The vulnerability of overhead lines was again highlighted. While underground cables sustained much less damage than overhead lines, the importance of having flexible connections at building interfaces was shown. This earthquake has also emphasised the considerable amount of time required to check the condition of underground ducts after a major earthquake. Mitigation work carried out following previous major earthquakes has focused on underground facilities, and the Kobe earthquake has demonstrated the value of this work. Network Description Since its modern water supply system was introduced in 1900, the City of Kobe has relied upon the Yodogawa river system for about three quarters of the approximately 830,000 cubic metres of water consumed each day.
The water supplied from the Yodogawa river is the responsibility of the Hanshin Water Supply Authority (a Public Corporation set up by four cities including Kobe to be the bulk supplier). The water is supplied through two distribution tunnels. Kobe obtains the remaining quarter from its own water sources and a small amount from a water supply project operated by the Hyogo Prefecture. In principle, the water distribution system is a gravity flow system. It has a stratified distribution system with four pressure zones and reservoirs at 119 locations. The principal waterworks facilities are shown in Table 3.1. The network is shown in Figure 3.1 [Kobe City Waterworks Bureau, 1994]. Overview The total damage to the water supply system is estimated at NZ$500 million. The main damage was to the distribution mains (between 50 mm diameter and 900 mm diameter) and the service mains (smaller than 75 mm diameter). As at 30 April 1995 over 14,000 leaks had been repaired under public roads and 58,000 leaks on private residential land. The numerous breaks led to the complete loss of water in many areas. The resulting no-water or low-pressure situation made finding leaks difficult. There was relatively little damage to the main purification plants and reservoirs. This was due to their location on good ground conditions (i.e. away from the liquefaction and high intensity ground shaking areas) and robust design. The other significant item of damage was the collapse of the sixth floor of the City of Kobe's No. 2 Office Building. This was the head office of the Waterworks Bureau and its loss had considerable implications for the restoration phase. Fortunately the Okuhirano Purification Plant, which is the main operational control centre for the water supply system, suffered minimal damage. A full description of the damage supplied by the City of Kobe water engineers [Kobe City Waterworks Bureau, 1995; Matsushito, 1995] is set out in the following sections. • Dams There are small cracks in two dam bodies. • Aqueduct Some collapses and cracks were seen in the Sengari Aqueduct Tunnel. • Purification Plant The Uegahara Purification Plant is the most severely damaged of the seven purification plants in Kobe City. This damage occurred to the slow sand filter, rapid sedimentation basin, wash water tank and wastewater treatment facility. At the Motoyama Purification Plant there were some cracks in the wash water tanks and leaks in various plant pipelines. • Water Transmission Tunnels and Pipes No failures were evident in the two water tunnels which convey water to Kobe City. However, there were some leaks in the pipelines to the Jumonji and Konan service reservoirs. Some collapses of concrete structures were seen in the Motoyama and Kumochi water pipeline tunnels. • Service Reservoirs At the Egeyama Service Reservoir cracks between the old structure and the new structure resulted in all the stored water being lost. To date no failures have been evident at the remaining 118 service reservoir locations. • Other The buckling of the steel tower at the Okuhirano Control Centre, which is used for telemetry and telecontrol functions, did not affect operations. Pipelines Pipelines suffered the most severe damage from this earthquake. The number of distribution pipeline failures reached over 1600 as at the end of April. As Table 3.2 shows, the largest cause of failure is joint separation (64%). The rest are pipe breaks (21%) and pipe fitting failures (16%).
Besides these pipe failures, Kobe engineers comment that there are many leaks which are still to be found by leakage survey teams. The water pipelines to the artificial islands of Port Island and Rokko Island were severed. The damage to the bridges was a major barrier against the quick recovery of water service to these areas. Wells The effect of the Kobe earthquake on wells drilled into an aquifer was able to be studied from experience at the nearby Municipality of Akachi. Akachi's water supply is from an underground aquifer. 60 wells have been drilled 180 metres deep. There are 4 associated purification plants. Although Akachi is just to the north west of the epicentre and closer to it than the majority of Kobe City, it is understood that Akachi is not founded on the same poorly compacted soils nor directly above the fault which moved. None of the wells or purification plants had any problem. Kobe City had one private well under the Municipal building and that continued to supply satisfactorily after the earthquake even though it was right in the zone of the greatest damage. In Akachi, there was originally some concern in that the level of the groundwater rose by up to 5 metres but apparently this soon settled back to normal. There was also a deep spring which increased its flow by about three times but in August this was back to about 1.5 - 2 times the flow before the earthquake. The volume of water increased considerably for 10 hours after the earthquake. In the Christchurch Engineering Lifelines project, a worldwide literature search failed to reveal any reports on the way in which water wells have performed under earthquake loadings, so it is very reassuring to know of the Akachi experience. Disruption to the supply of water from underground aquifers is most likely to occur as a result of differential movement between the wellheads and other structures rather than from damage to the wells themselves. Service Connections By the end of April, the number of service connection repairs had reached 12,827 in roads and 58,408 in private property, making a total of 71,235. Most were due to pipe breaks and joint separation following the collapse of houses and road deformation. This large number of failures was the main reason for the rapid water loss from the distribution pipelines. Overall The degree of preparedness and extent of mitigation measures was impressive. This is due to Japan as a country being very conscious of the need to be "disaster prepared". This is in part due to the relatively frequent occurrence of typhoons. Each year a day is set aside for national disaster preparedness and all organisations visited had annual disaster preparedness exercises. Relatively frequent typhoons also provide an opportunity to test disaster plans, albeit on a much smaller scale than the Kobe earthquake. In addition, high quality "earthquake proof" materials are used throughout the waterworks system. These include special earthquake proof pipe joints which typically allow about 60 mm elongation and 3° rotation. These measures reduced the damage and accordingly assisted the restoration. Overall, Kobe's mitigation and preparedness measures are rated highly effective although there was still massive disruption to the water supply. This has to be expected with even the best plans and mitigation measures. The success is really how quickly the water supply can be restored.
Kobe City Earthquake Policy Kobe City had a comprehensive Earthquake Policy in place at the time of the earthquake. This was outlined in the "little red book" which is given to all staff members. Unfortunately it is not available in English. The policy applying to water supply is set out in Figure 3.3. As can be seen it is split into pre-earthquake measures and post-earthquake measures. The pre-earthquake measures are: (i) Emergency Shut-off Valve System The system is shown diagrammatically in Figure 3.4. 33 of the reservoir sites have effectively been designated emergency drinking water sites. At these sites 2 reservoirs are provided. Depending on the level of the earthquake and/or the level of outflow (see Figure 3.4), the valve on one reservoir is closed. The aim is that 3 litres/person/day for 7 days is stored for all the people who will be supplied with emergency drinking water from that particular reservoir. The layout plan for the Emergency Shut-off Valve System is based on people not having to walk further than 2 kilometres for emergency water. The plan is similar to the system developed by the Wellington Earthquake Lifelines Group. However, the Wellington plan only goes as far as identifying locations where water is likely to be available relatively soon after an earthquake if shut-off valves are installed. The construction of Kobe's Emergency Shut-off Valve System commenced in 1985, and 21 valves had been installed at the time of the January 17 earthquake. All but two operated successfully. (ii) Replacement of Aged Pipes Recognising the importance of pipeline material type as part of its earthquake preparedness, Kobe has had a programme for the last 30 years of replacing all pipes with ductile iron. All new areas are also required to be laid with ductile iron. It was interesting that ductile iron pipes are even inserted inside existing cast iron pipes. At the time of the earthquake 86% of the pipe network was ductile iron, 3% steel and 11% was in what are considered to be unsatisfactory materials such as cast iron and PVC. (iii) Communication System Kobe City's communication system for operating the water supply network is a dedicated radio system which is independent of the telephone system. Although more expensive than utilising the telephone system, it was unaffected by the earthquake. This was invaluable during the response period, particularly on the first morning when there were problems with the telephone system. (iv) Mutual Aid Agreements Kobe City had agreements with seven other cities to assist in an emergency situation such as after an earthquake. These apply to all functions including water supply. They were most helpful in the Kobe earthquake situation. The post-earthquake measures shown in Figure 3.3 for Emergency Water Supply Provision and temporary repairs are described in Section 3.4. Overall Overall, Kobe's response and recovery must be judged as a success. Initially all water was cut and it took 11 days to restore 50% of consumers and 72 days to restore 100% (refer to Figure 2.4). The restoration process is set out in Figure 3.5. The large amount of assistance from outside Kobe greatly contributed to the rapid response and recovery. Emergency Water Supply Kobe City officials described the emergency water supply service as follows: "Because this was the first time water had been cut off in an entire city, supplying emergency water was an extremely time-consuming activity.
Although we were confident we could supply enough water for immediate needs, we were unable to make full use of our fleet of water trucks because of traffic congestion and other problems. This hindered the emergency supply project. In the evening of January 17 (the day of the earthquake), we started emergency water deliveries, mainly to 170 primary schools serving as evacuation centers. At the peak of this project, we were using 432 trucks on loan from 92 other cities. Water supply boats from the Marine Self-defense Force and the Maritime Safety Agency also helped with the project". It is interesting that water was provided both from the emergency reservoirs and, after a few days, by ship into the Port of Kobe. Fire Fighting The rapid loss of water through breaks in the mains meant that firemen faced a futile task. Attempts were made to pump water from the sea for fire fighting, but a lack of traffic control leading to vehicles running over hoses meant that this was not successful. Mutual Aid At the peak there were 734 workers from 43 cities repairing pipes under the streets and another 272 workers from 53 cities working on pipelines on private residential property. This support was arranged through the Japanese Waterworks Association and was invaluable in repairing the 71,000 leaks by the end of April. Information gained from Kobe officials sets out the various questions received from residents in the period after the earthquake as well as the steps taken to inform the public of progress with restoration. This will be most helpful in further developing response plans in New Zealand. Plans for the Future By March, Kobe City Waterworks Bureau had prepared "Guidelines for Earthquake Resistant Waterworks in Kobe City" as well as its associated "Working Plan". These will be translated into English. The essential features of the plan described below are shown in Figure 3.6. • For the first 3 days after an earthquake, provide 3 litres/person/day • 10 days after the earthquake, 20 litres/person/day • 21 days after the earthquake, 100 litres/person/day • Construct an additional large transmission main through urban districts with associated facilities so that people can be supplied with water from this main immediately after the earthquake. Interdependence Issues A number of interdependence issues were highlighted during this earthquake for water supply. • Roading Disruption to roads caused difficulties for repair teams travelling about the city, for materials being transported to the city and for emergency water tankers supplying water. • Port The port was used to supply some of the emergency water as well as some restoration materials. • Telecommunications As noted previously, the communication system for operating Kobe's water supply system is largely independent of the telephone network. However, with the Waterworks Bureau offices being destroyed in the earthquake, there were considerable constraints, with only one telephone line and one fax line being allocated to Waterworks for the first 2 weeks after the earthquake. • Electricity Electricity did not appear to be a problem for Waterworks as it was restored very quickly. In addition, most waterworks facilities requiring power are interconnected on a two independent lines basis. Lessons for New Zealand • Response Planning The importance of high quality response planning was again highlighted. Major New Zealand cities, and in particular Wellington, could not rely on such rapid and extensive assistance as Kobe had.
This makes it even more important that response planning and mitigation measures are of a high standard and that annual exercises and audits are held. • Personnel Response Time Approximately 50% of the waterworks staff managed to get to work on the first day of the earthquake. This needs to be taken into account in response planning. • Detailed Mitigation Measures The following detailed measures have been identified taking into account lessons learnt from both Kobe and Northridge (1994) and applying them to New Zealand situations. Treatment Plants & Pumping Stations -Bolt Down Equipment etc. - this includes all pipes and machinery as well as internal fittings and furniture. -Provide Flexible Joints Where Appropriate - while it is important to bolt down equipment, pipes, machinery etc., it is also important to provide for earthquake induced movement through flexible joints etc. -Provide Standby Power - have all essential features backed up by standby power which is tested regularly by those who will be operating it in an earthquake situation. Reservoirs -Upgrade older existing reservoirs to be seismically resistant as necessary, providing flexible joints where appropriate, e.g. at inlet and outlet pipes. -Investigate the feasibility of installing two tanks or dividing a single tank to provide an emergency water supply which can be cut off and retained while at the same time also providing water for fire fighting purposes. -Install cut off valves which are activated by flows typical of a large downstream break. Pipe Reticulations -Bolt or tie down all above ground reticulations. -Provide for flexibility and movement where appropriate. In Kobe, flexible joints for connecting the water supply from a water main to a building have been developed. -Have programmes to replace earthquake deficient materials such as cast iron pipes etc. • Preparedness -Have appropriate spares strategically located. -Have plan records in strategic places. -Have suitable communication systems in place. This may include arrangements to ensure priority access to the cellular network. Communication systems should also be compatible with those of mutual aid organisations. -Develop mutual aid agreements with companies and organisations that are likely to be able to help. -Develop media plans and test them. -Recognise that utility staff will be fully occupied with inspection and repair. This means there will need to be plans for others to undertake distribution of potable water to those needing it. In Kobe, the Waterworks Bureau did both, but in Wellington present plans are that Civil Defence would look after emergency water distribution during the response and recovery period. The interface between water supply managers and emergency managers requires further consideration. • People Aspects -Place emphasis on preparing for and looking after personnel in the earthquake situation. This includes providing adequate food, potable water, shelter, and hygiene facilities for those remaining at work for extended periods. It is also important to provide means by which workers who are likely to be away from their families for long periods can communicate regularly with them. In the Kobe situation the provision of food and basic hygiene facilities was marginal in the first few days after the earthquake for people working 16 hours per day and only going home once every 3 days. -"People want to be part of the solution not part of the problem".
In accordance with this objective, the response plans need to be prepared with an emphasis on staff knowing they have the authority to get on and repair key facilities rather than having to get management approval at each stage. Management also needs to be careful not to interfere with staff who will be best at undertaking repairs.
- Educate the public at large that, for at least the first three days and maybe for up to a week after a major earthquake, the water supply will be at best unreliable and they should be planning to look after themselves.

Network Description
Sewerage and stormwater drainage services are managed by the Sewage Works Bureau of the City of Kobe. Kobe is separated into two general areas by the Rokko Mountain range. The area lying on the coastal side is the densely populated, heavily urbanised 'old' city which was hardest hit by the earthquake. The inland and northern sides of the range were developed to accommodate post-war growth. The city is divided into eight treatment areas, two of which are connected to regional sewerage schemes administered by the South Hyogo Prefecture (Regional Council). Statistics of the drainage infrastructure are as follows: The City of Kobe maintains the sewerage system to the inlet side of the inspection chamber located at each property boundary.

Damage Assessment
Sewerage and stormwater drainage assets were heavily impacted by the earthquake. The total estimated cost of the damage sustained is NZ$670M, representing 8% of the replacement value of the systems. As would be expected, the severity of damage was closely related to the high risk geological zones; namely the liquefaction prone reclaimed islands and port area, and the narrow coastal strip of alluvial soils running along the base of the Rokko mountains. Damage was also higher along the paths of old streams, possibly due to the higher water table.

Sewers and Stormwater Drains
Knowledge of the full extent of damage is incomplete and it is anticipated that many problems will come to light in time. The reclaimed islands generally settled uniformly and there was much less damage than could have been expected. There were no known instances of pipes floating. The typical nature of damage was:
- Pipe faults: joint opening, circumferential cracks and multiple fractures.
- Tunnel: minor longitudinal and circumferential cracks in bored tunnels on reclaimed island.
- Stormwater outlets: dislocation of pipes due to lateral spreading of stream embankments and sea walls (particularly in the port area).
- Stormwater culvert: tilting and fracture of side wall slabs.
- Collapse of stone masonry stream bank walls.
The impact of the damage was a 20% increase in dry weather sewage flows. Concrete and plastic pipes generally survived well, and the majority of damage occurred with earthenware pipes. Although a detailed analysis of damage is still lacking, some useful macro statistics are available. Based on the results of investigations completed up to July 1995, damage ratios were higher for small diameter pipes (4%) than main drains (2.4%), probably because they are laid at less depth and involve a much higher percentage of earthenware pipes. The incidence of urgent repairs city wide was 2.6/km for sewers and 0.7/km for stormwater, with the ratio being as high as 17/km for sewers in the most severely damaged district.

Pump Stations
Twenty of the 24 stations were damaged, six being out of commission immediately after the earthquake.
Total damage is estimated at NZ$25M to $30M, most of which was to replace or repair machinery. Sheared driveshafts and water damaged electric motors were common failures. The number of external pipe connections damaged was not available.

Treatment Plants
Three of the 7 treatment plants suffered partial or full loss of function: Higashinada (100% loss of function), Chubu (50%) and Seibu (20%). These stations, which provide 63% of Kobe's treatment capacity, are all located on reclaimed land, as was the heavily damaged (63% loss of function) Tobu sludge incineration plant on Rokko Island. There was up to 1 metre of ground settlement at these plants. Three of the undamaged plants were located outside the main damage area and the fourth is on the man-made Port Island where there was 500-600mm of uniform ground settlement.

Effectiveness of Mitigation & Preparedness Measures
Kobe was unprepared for an earthquake of this intensity, with planning based around scenarios for typhoons and smaller quakes. The mutual support agreements with other major cities proved invaluable and the recovery of wastewater services was not restricted by a lack of manpower, materials or plant. Support was received from 30 cities and it is intended to expand the number and scope of these agreements. The value of dynamically consolidating reclaimed land beneath structures was proven by the lack of damage to the treatment plant on Port Island. Although staff mobilisation plans were in place, congestion of the traffic network and the residential damage suffered by many staff meant nearly half were unavailable on the first day. There was no evidence of upgrading work to mitigate the vulnerability of drainage assets to earthquake damage. The City efficiently got on with the recovery despite the lack of pre-planning, and the effectiveness of their efforts can be judged by the availability of pumping stations and a reasonably functional network when water services were restored (other than treatment plants).

Response and Recovery Aspects
The Bureau set up an emergency organisation on the day of the quake to tackle the task of damage assessment and restoration. Resources were initially applied as follows: The Sewage Works Bureau has a permanent staff of 120. Assistance was received from local and central government, and a peak total of 4,000 staff was achieved (2,000 design/office and 2,000 field staff). The damage was studied in three stages:
• Emergency study: On the day of the quake an inspection of treatment plants and pumping stations was undertaken by night shift staff and those who managed to show up early on the day of the quake, to assess the availability of function and the security of chemicals, digester gas and city gas. Staff then started restoration of function as best they could. The emergency study for drains focused on sewer mains.
• Primary study: Conducted for about one week after the earthquake to understand the broad scope of the damage. Several groups of engineers checked concrete structures, machinery and plant. For drains the primary inspections were designed to determine the range of the secondary study, and centred on a visual inspection from manholes.
• Secondary study: This phase, which is still underway, was to assess the overall picture of damage to all assets and to initiate design for rehabilitation work. The scope of the secondary study for drains was a primary area of 4,120 hectares covering the man made islands and the heavily damaged coastal strip.
The first inspection phase was again visual from manholes, followed by CCTV inspection of drains exceeding a set damage rating criterion. Sixty kilometres of CCTV inspection was completed in the first six months, utilising 37 cameras which achieved an average of 200 m/day. Aerial photos of surface damage, which were available the day after the quake, and road distortions were useful indicators of underground damage. The Bureau did not have a GIS capability. Investigations were initially hampered because the few passable roads were flooded with traffic that effectively paralysed the traffic network. The recovery was effective despite the lack of pre-planning, and staff do not seem to be contemplating any significant effort towards preparing new emergency procedures. Priority was given to restoring the main components of the sewerage system, and fortunately there was low rainfall after the earthquake. Sewerage recovery was aided by the lack of water supply, and no major sewage overflows occurred. All treatment plants and pumping stations were operational by 1 May and were available as sewage flows resumed as the water supply was restored. Full scale rehabilitation is ongoing; pump stations and treatment plants will be completed by March 1996 with the exception of the Seibu (March 1997) and Higashinada (March 1998) plants. In the first six months of the recovery, 530 pipe repairs and 1,400 manhole repairs were undertaken. Damaged sewers on bridges were temporarily restored within 10 days. However the full extent of damage is not yet understood, and there is concern at the potential for cavities to form where pipes have separated. An intensive programme to inspect lateral junctions and connection points was undertaken, and 4,300 (3.5%) faults were found and repaired out of 120,000 inspections. The great majority of junctions remain unrepaired, and a 2 year plan is being prepared to fix these now.

Interdependence Issues
The delay in the resumption of water services assisted efforts to repair sewers. Close liaison was required with the Water Bureau as much damage to sewerage services became apparent as water services were restored. The Public Works Bureau called regular meetings of key utility operators to give briefings and coordinate emergency repair plans. Agreements were reached that road reconstruction should be delayed where possible until underground services have been reconstructed and that formal approvals for trenching could be waived for emergency work. Coordination of works programmes was particularly required where:
- Roading: repair of sewers located on failed structures (bridges and motorways).
- Port: repair of stormwater drains damaged by lateral spreading and movement of sea walls in the port area needs to be coordinated with port reconstruction.
The initial response was heavily restricted by traffic congestion, and the loss of the main highway links is still hindering the delivery of materials and movement around the city. The cellular phone network was not reliably available on the first day due to overloading by heavy private use, and field communications were limited to hand-held radio telephones. Sewer pipes over bridges appeared to fare well. These pipes are mainly steel, with polyethylene being used more extensively now.
- Provide robust flexible joints at the interface of pipes and structures/machinery.
- Where a structure is to be built on poor ground, special attention is to be paid to soil consolidation and the foundations.
- Securely fix pipes, mechanical and electrical equipment, furniture, computers, etc.

General
- Wellington and New Zealand generally must be concerned about the effects of liquefaction on sewerage and stormwater drainage systems. The generally uniform settlement in the man-made islands at Kobe is most unlikely in New Zealand soils and older reclamations. In addition most New Zealand sites, and Wellington in particular, have older systems with a higher proportion of brittle pipes.
- The main stormwater drains in the central business district and older sections of Wellington are particularly vulnerable. Kobe did not receive heavy rain during the initial recovery phase; we are unlikely to get such favourable weather and the impact of failure will be high.

Preparedness
The scale of the damage and the lessons learnt in Kobe reinforce the need for well prepared and tested response plans. Specific preparedness points that can be learnt from Kobe include:
• Resource Planning: The immense resources of manpower, materials and plant available to Kobe from 38 major neighbouring cities allowed the recovery of drainage services to proceed without restriction. Major New Zealand cities do not have this luxury and need to put in place mutual support agreements and contingency plans for resources.
• Traffic control planning: The securing of priority traffic routes for emergency and essential service crews is vital, particularly in the initial response period.
• Temporary Services: There is value in contingency planning for servicing temporary housing and the provision of portable toilets.
• Communications: Ensure availability of suitable systems.
• Damage Assessment: Kobe's experience highlights the need for reliable preplanned impact assessment programmes, including survey methodology, criteria for damage estimation and training. The value of aerial photographs was highlighted in Kobe and it is desirable to have prior arrangements in place to have a disaster area flown.
• Public information: Preplan public information strategies and material for different phases of response, restoration of services and repair of private drains.

Mitigation - Sewers and Stormwater Drains
• Replacement/strengthening programmes are needed to ensure critical drains retain their function after an earthquake event ('critical' being defined as those drains for which, because of their size, location or function, failure will have an unacceptably high financial, service or environmental impact).
• Flexible design details should be used for joints between pipes and structures (pump stations, manholes, chambers, etc.).
• Use appropriate materials, particularly in liquefaction prone areas (e.g. HDPE, PVC).
• Design local sewers, laterals, manholes and inspection chambers to facilitate repair.
• Special attention should be given to the design of attachment details of above ground drains fixed to structures.

Mitigation - Pumping Stations/Treatment Plants
• Consider options to provide redundancy in the rising main network to allow diversion of sewage in the event of pump station or rising main failure. Kobe will consider interconnecting treatment plants to allow diversion of flows.
• Design facilities to minimise the risk of damage to electrical and mechanical equipment in dry well pumping stations.
• Provide standby power for key pump stations and essential treatment plant components.
Sufficient resources were available to do all work required, and the equivalent of 10 years' normal work has been done in six months.
The majority of repairs were driven by service complaints (14,500 up to the end of May) received as the water supply was restored. 200 contractors were available to fix these problems. Central Government subsidies have been available for demolition (100%) and restoration of services to pre-earthquake condition (80% for two years), and there is currently a high level of activity to take full advantage of the subsidy. Earthenware pipes are no longer being used, with PVC being the preferred material for local sewers. 3,000 portable toilets were gathered nationally by the Solid Waste Bureau and located strategically throughout Kobe, particularly at schools used as refugee centres. Chemicals were poured into school swimming pools to provide a basic sewage treatment facility. At the Higashinada treatment plant a section of the adjacent canal was sealed off and chemicals poured in to dose the sewage. Temporary screens and a borrowed belt dewatering press completed the process. The Bureau did not assist with the repair of private drains and there is no precise picture of the state of these drains. Information supplied by the Plumbers Association indicated that a peak of 4,800 repairs were completed in February. City staff were dispatched to liaise with the Association and help with public information. Not surprisingly, contractors' prices went up and there were a lot of complaints from property owners.

Airport
The two airports in the area are:
(a) Itami Airport (previously Osaka International Airport), located just to the east of the heavily damaged area, and now a domestic airport.
(b) Kansai International Airport, a new facility located 27 km south of Kobe on the opposite side of Osaka Bay. It is built on a reclaimed island with road and rail linkages and a sea ferry linkage. It handles air cargo and passengers. It has 33 boarding gates, a total terminal length of 1,672 metres and covers 511 ha. Built over a three year period, it overlays soft weak clays compacted by a sand pile surcharge system. Subsidence prior to opening was 10 metres, and a further 1.5 metres is expected over the next 30 years. Hydraulic jacks are attached to 900 columns to counteract differential settlement of the terminal building. The access link is 3.75 km long with a 2.7 km double deck trussed section bridge with road and rail on each deck. Six lanes, a number of rail tracks and electricity, water and gas services also cross this bridge.
There appears to have been little or no damage to airport facilities, with flights being disrupted only on the day of the earthquake while road and rail routes were closed for precautionary inspections.

Rail
There are a number of east-west routes passing through Kobe (refer Figure 5.1: Damage to primary transportation routes in the Kobe area and port layout [Billings, 1995]). In addition there are a number of more local and north-south feeder lines and freight linkages to land based Port areas (other than the Islands). Generally the rail systems are people movers, while the road system carries both people and freight. All passenger services are electrified. Some of the JR Tokaido line is located on embankments. There are also two 'new transportation systems', the Port Island Liner and the Rokko Island Liner, serving these islands. They operate automatically and are in fact 'guided busways'. They were built by the City of Kobe. The two Island Liners are entirely located on elevated structures.
Road A number of east-west arterial routes pass through Kobe (refer Figure 5.1), with connections to the Port for road freight. Freight transport is an important function of the road network, with some 22 % of total road freight in Japan passing through Kobe. Hanshin Expressway Public Corporation Osaka-Kobe Route (Route 3) Principal link to Osaka, major through route. W angan Route (Route 5) Coastal link to Osaka and Kansai International Airport. Japan Highway Public Corporation Meishin Expressway Chugoku Expressway Bypasses Osaka to northeast. Northern bypass route. There are essentially four types of road in Japan: (a) Expressways (toll roads) These are operated by four private corporations in Japan, the two operating in the Kobe-Osaka area being the Hanshin Expressway Public Corporation (HEPC) and the Japan Highways Public Corporation (JHPC). In the Kobe-Osaka area these are essentially elevated structures. Tolls are relatively expensive ($NZ18 for a trip from Osaka to Kobe), but because of congestion on other roads toll roads are extensively used. The toll revenues are set to include a payback for the cost of construction. Most expressways in the Osaka-Kobe area are operated by HEPC which has a sophisticated electronic surveillance and traffic conditions monitoring system. It is able to provide up to date information to users on levels of congestion and travel times via overhead electronic signs, telephone enquiry, console stations and by radio. Around 900,000 vehicles used the Hanshin Expressways prior to the earthquake. The 'historical cost' value of the HEPC network is $NZ21 billion. (b) National Highways Operated by the Ministry of Construction for the Government, Highways 2, 43 and 175 pass through Kobe. (c) Prefecture Highways Operated by the Hyogo Prefecture in Kobe, similar concept to 'regional roads', but operating extensively throughout Japan. There are no Prefecture Roads in Kobe, the City being one of twelve designated in Japan to have full control. (d) Local Roads and Highways These roads are operated by local authorities such as the City of Kobe. Damage Assessment The locations of the main areas of damage to the primary transportation routes are indicated in Figure 5 .1. (b) JR Trunk Line Heavy damage to railbed (particularly embankment distortion and settlement), viaducts, bridges, stations, electrification system and workshops. A critical section of2.3 km elevated viaduct between Sumiyoshi and Nada Station suffered severe column damage and collapse of several sections. Repair costs for Shinkansen and Trunk Lines total $NZ2 billion. (c) Hankyu Railway Damage at several locations, severe column shear failures and collapses of several viaduct sections, $NZ710 million repair costs. (d) Hanshin Railway Damages at several locations, $NZ920 million repair costs. (e) New Transportation Systems ( 'Liners') Port Island Liner structures were seriously damaged, including failure of reinforced concrete piers, downwards displacement of girders and unseating of a continuous beam from its bearings. At the bridge to Rokko Island, a pier moved laterally causing collapse of the main bridge span. Roads (a) Hanshin Expressway Public Corporation There was damage to structures on all 13 routes operated by the HEPC, covering 200 km and located within 60 km east of the epicentre. The large majority of damage occurred on Kobe Route 3 and Wangan Route 5. • Collapse of two simple girders at Takase-cho, Koshien, Nishinomiya due to damage to reinforced concrete piers. 
• Collapse of two simple girders at Honmachi, Nishinomiya due to large horizontal movement.
• Collapse of four spans of the rigid-frame beam bridge at Hatoba-machi, Chuo-ku, Kobe because of damage to the single reinforced concrete columns.
• Partial falling of the two-span continuous box girder, curved bridge at Minatogawa Ramp, Nagata-ku, Kobe because of damage to the reinforced concrete piers.
• Two simple girders were close to falling at Niwa-machi, Nishinomiya because of the settling of damaged steel piers.
• Damage levels were significantly lower than on Route 3 above, confirming the more resilient design characteristics of 1980s and 90s codes.
• Traffic volumes using the HEPC network were estimated to have fallen from 920,000 to 500,000 per day in March 1995, reducing toll revenue by $3.2 million a day (a rough arithmetic check of these figures is sketched a little further below).
• Other routes suffered damage to viaducts and bridges, but not as severe as the Expressway routes.
Highway 43 was severely disrupted by the collapse of Kobe Route 3, as it runs directly underneath it for much of its length. This situation was typical, with various elevated transport linkages being damaged and affecting others. For example: the collapse of the Mondo viaduct deck onto a Hankyu Railway Line; transverse movement of double deck girders on Highway 2, Hamate Bypass; collapse of steel piers and superstructure at Iwaya Junction overpass.
(d) Minor Damages
There was very little damage to surface level roads, even in liquefaction areas such as Port Island where settlements of up to 0.5-0.6 metres occurred. With minor exceptions, these areas appeared to settle uniformly with little cracking or distortion of kerb lines, and little disruption due to underground services. Exceptions occurred adjoining structures with piled foundations. Around bridge/viaduct piers, original ground levels were retained, resulting in 'bumps'. Discontinuities also occurred at entranceways to buildings. In other areas evidence of cracking of tiled surfaces was still visible and some cobblestone paths were disturbed. Concrete footpaths in some locations had been forced against kerbs, rotating them and riding up onto the kerb top. Again, this was the exception rather than the rule. There was some damage to street furniture such as traffic light standards and signs.
(f) Traffic Volumes
The effect of the disruption to traffic on the 'east-west corridor' through Kobe is highlighted by the changes in traffic volumes in the Higashinada and Nada Wards area (i.e. the vicinity of Rokko and Port Islands). Retaining walls constructed from reinforced earth used in approaches to several new bridges were observed to perform well.

Effectiveness Of Mitigation And Preparedness Measures
In general, the authorities were not prepared for the scale of the earthquake and the damage it caused. There had been no examination of 'weaker' structures built to codes earlier than 1980, when more demanding seismic standards were required. It was felt that the structures were 'impregnable' to natural disasters. We were not advised of any such study or of any physical mitigation measures similar to those being carried out currently in Wellington and Christchurch. This is perhaps surprising given the experiences in California and the design deficiencies such as lack of bridge column ductility and stirrup detailing.
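Referring back to the HEPC traffic and revenue figures quoted above, the following is a minimal back-of-envelope check (Python) of what the quoted daily revenue loss implies per lost trip. The per-vehicle figure is derived here for illustration only and is not stated in the source.

```python
# Back-of-envelope check of the HEPC traffic and toll revenue figures quoted above.
# Inputs are the numbers stated in the text; the implied average toll is a derived estimate.

pre_quake_vehicles = 920_000    # vehicles/day before the earthquake
post_quake_vehicles = 500_000   # vehicles/day in March 1995
revenue_loss_per_day = 3.2e6    # NZ$ per day, as quoted

lost_trips = pre_quake_vehicles - post_quake_vehicles      # 420,000 trips/day
implied_avg_toll = revenue_loss_per_day / lost_trips       # about NZ$7.6 per lost trip

print(f"Lost trips per day: {lost_trips:,}")
print(f"Implied average toll per lost trip: NZ${implied_avg_toll:.2f}")
# Roughly NZ$7.60, well below the NZ$18 full Osaka-Kobe toll, i.e. many of the
# lost trips would have been shorter, cheaper journeys.
```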
Since the earthquake however, the Government, through the Ministry of Construction, has established design specifications which the authorities are required to adhere to, both in restoring and rebuilding damaged structures and for retrofitting. These apply both to road and rail. The criteria are:
• Structures must not collapse under earthquake loadings similar to those of the Great Hanshin Earthquake, retaining continuity of design.
• Facilities must be able to continue operation without disruption.
The extent of the liquefaction and lateral spreading problem was also unexpected. The extreme levels of liquefaction induced damage to port seawalls and facilities designed and constructed in recent years is evidence of this. HEPC advised that written disaster manuals were available and that annual training took place. However, the primary focus was typhoons, and the manuals were of little use in the event that occurred. What was written about earthquakes was inadequate because it did not envisage such an intense event. The surveillance cameras of HEPC were rendered useless as the cables broke in the earthquake. HEPC had a seismometer alarm which registered the shaking and sent a message to the traffic control centre, which is manned 24 hours a day. The operator immediately activated electronic road closed signs across the Expressway network. This was effective in warning motorists at the time, but more needed to be done in communicating information in the period following the earthquake.

Response And Recovery Aspects
Traffic control initially was virtually impossible as vehicles that were unable to use the elevated Expressways used at grade roads instead. There was confusion, with Police, whose first duty was the rescue of people, being unable to impose effective traffic control, which hampered rescue, reconnaissance and emergency restoration work. Staff of the Municipal Government made their way to City Hall during the first few hours by foot, cycle, motorcycle or car. Of much value was video footage taken by these people on their way in. Logistical problems, such as the need to transport demolition and rebuilding materials on roads already congested, markedly slowed the recovery process. From the roading congestion perspective, it was probably just as well that Port operations were dramatically curtailed because of damage. The roading system would have been unable to cope. Because the damage to rail structures was less severe, priority was given to restoring rail routes for the movement of people. Rail services were restored as follows: The restoration of these services will have eased the congestion somewhat as commuters switch from cars back to trains. Very high volumes of Port traffic were observed to be coming back onto the roads as the Port recovers (now at 60% of pre-earthquake activity). At times almost one vehicle in two seemed to be a heavy freight/container truck. It is clear that the collapse of essential overhead structures has had a dramatic effect, and unless adequate capacity and/or reasonably close alternatives exist then major problems will occur. This must be looked at in any New Zealand city with these sorts of structures. Initial restoration focused on preventing secondary disaster (e.g. by aftershock), bandaging damaged piers and propping structures, and providing a 24 hour monitoring system. This was primarily to protect traffic using the national highways under the Expressway.
HEPC were responsible for the design and installation of the temporary propping, with the costs effectively being met by the Government. Expressways were progressively reopened with traffic initially being restricted to emergency vehicles. Within two days 76 km of the 200 km network was able to be used by the public. This increased to 149 km (75%) by 12 February (26 days). The Wangan Route 5 Expressway will not be fully operational until the end of October, while repairs to the Kobe Route 3 will not be completed until the end of 1996. From 25 February a 'Lifeline route' for transporting essential supplies for people was established on the Kita Kobe Route, and a 'Revival route' for reconstruction supplies on parts of the Kobe Route 3 and Wangan Route 5 was set in place. Co-ordination between the Local and Prefectural Government, Ministry of Construction and Police authorities has been required to ensure ·that traffic capacity, safety and environmental considerations are taken into account. HEPC, as a commercial organisation, is now concerned about the loss of trust in its network and is anxious to properly safeguard all routes against future earthquakes. As well as repairing the damaged elements within a two year timeframe, HEPC have committed themselves to upgrade their structures to current standards within a three year period. Typical measures adopted for the strengthening of reinforced concrete bridge piers to current earthquake standards include: • Removal of damaged concrete and reinstatement of original longitudinal and transverse reinforcement. • 30mm longitudinal reinforcing bars grouted into the existing foundation outside the existing column, and full strength welded to bars above for the height of the pier. • Transverse reinforcement consisting typically of D16 bars at 150 mm centres added to confine new and existing concrete. HEPC is looking at upgrading emergency facilities and evacuation guidance facilities. HEPC has also identified the need to enhance its disaster recovery manual and to make improvements to communication mechanisms with users following such events. The costs of demolition (now complete) were fully met by the Government through the Ministry of Construction. The Government also funds restoration costs to the value of the standard to which the structures were initially built (estimated to be $NZ1 billion). It is understood however that the estimated NZ $6.4 billion cost of seismic upgrading, of both damaged and undamaged expressways, to current standards is to be met by HEPC. It is clear that the scale of resources available in Japan to deal with the aftermath is enormous. Huge floating cranes are being employed to restore not only Port facilities but also sections of elevated roadway adjoining the coast. New Zealand's access to such resources will not be so easy. Interdependence Issues The severe congestion caused as a result of the collapse of many structures hampered the recovery of other essential services and the provision of emergency relief. It is also affecting the recovery of the local economy. The first week after the earthquake was the most difficult. Gas supplies were cut, and food and water delivery and rescue activity was severely restricted. Sea transport was important in bringing in emergency supplies and providing temporary accommodation for workers. The current installation of a major underground services conduit (or 'lifespot') on Highway 43 is in tum affecting the congestion levels as work appears to be occupying one to two traffic lanes. 
This concrete conduit is designed to minimise the disruptive impacts of future earthquakes on underground services and in turn the roading system. Overhead power reticulation generally behaved well (about 10% of total pole damage was due to breakage of the poles). Disruption to traffic tended to be in areas where fires and collapsed buildings occurred, bringing down wires and blocking road access. Electricity was difficult to restore because of traffic congestion. The dominance of overhead reticulation, although much repair work was necessary, meant that service was restored quickly. It took four months to check all underground cables, suggesting that while overhead wiring will be a short term disruption to roads, it takes a lot longer to restore underground services. It is believed that there may be significant amounts of minor underground service damage yet to be uncovered, possibly resulting in subsidence of road surfaces at some future time. In the case of large stormwater pipes, joints pulled apart and material fell (and may continue to fall) into the pipe, leaving a cavern above the joint. There are bound to be many pipes (especially smaller sizes not yet inspected by video) with damage that will require future excavation and repair. 25 Asbestos cement water pipes had all been replaced with ductile iron before the earthquake, largely because of the traffic disruption in repairing them. 'Strong' services pipes/ducts also help minimise damage to roadways. Some drainage manholes formed using precast sections moved laterally and skewed (refer Wastewater). There was significant circumferential cracking of pipes. These damages will take time to repair and be potentially disruptive to traffic. Drainage pipes fared better in newer areas, and even in liquefaction areas such as Port Island appeared not to disrupt the road system. Lessons For New Zealand • Recognise that a devastating earthquake could occur virtually anywhere in the country and have major impacts on transport and structures in particular. • Recognise the possible short term and long term recovery aspects, impacts on people, the economy and the transport infrastructure itself of damage arising from such an earthquake. Look particularly at bridges and elevated structures, both road and rail. Identify the key Lifelines routes -for recovery and the economy. • Recognise that elevated structures (eg bridges) built before the introduction of seismic design codes of the 1980s are likely to have significant damage levels in such an earthquake. • Have pre-arrangements in place for heavy construction equipment and resources to be brought in from other cities/countries and ensure that this equipment can reach the site. • Establish design standards for earthquake performance of bridges and elevated structures ( eg ductility, bearing restraints) that will ensure vital structures survive a major earthquake, can be used by emergency traffic (including heavy vehicles) within several hours, and can be restored for normal use without severe disruption within days/weeks. This will require further work in terms of risk assessment and justification and may require Government support in terms of legislation/directive. • Access controls must be placed as soon as possible after the earthquake event in order to allow access by emergency vehicles, reconnaissance teams and restoration resources. • Rapid and effective communications are needed to alert the public of access restrictions on the roading network. 
• The use of road status indicators for emergency communication to motorists on motorways and major routes is worthwhile.
• Long term controls on access routes may be needed - detours and other arrangements need to be thought about before the event and transport priorities established.
• Reconstruction efforts require restoration of a high standard of transport linkages. To minimise downstream economic losses, transport linkages need to be protected and routes for materials supply need to be identified in advance.

Network Description
At the time of the earthquake the Port of Kobe had berthage for 250 large vessels, with major trading routes handling a range of cargoes throughout the world. These facilities included 24 container berths at Rokko Island, Port Island and the Maya public piers, all major reclamations connected to the mainland at Kobe with substantial bridging. Together these three "island" reclamations alone (all developed since 1967) provided 375 hectares of port facilities under the control of the Port and Harbour Bureau of the City of Kobe. There is considerable berthage for conventional cargo vessels, ferries, passenger liners and shipyard repair along a substantial length of the Kobe waterfront area. This is in addition to the newer port reclamations. Rokko and Port Islands include within their areas commercial, educational, housing and recreational areas with public transport (light rail) available to the city. The layout of the port area and the main piers is shown in Figure 5.1 in the previous section. Foreign cargo handled through the port area totalled 169 million tonnes, which is equivalent to approximately 28 ports of the size of Wellington in tonnage terms. The port handled 30% of the container cargoes in Japan (as quoted by Tokio Marine and Fire Insurance Co Ltd) with up to 2.5 million standard 6 metre containers (or TEU) handled per year. The container cargo represented 76% of the foreign trade through Kobe. Many of the shipping lines had container facilities dedicated to their own services, although there are public container terminals also available at the Maya pier. At the time of the earthquake there were 55 container cranes within the port. The Port and Harbour Bureau has responsibility for the construction and maintenance of access bridging and roading connecting the port reclamations and the mainland. Development plans were well advanced for the construction of Stage 2 of Port Island and much of the reclamation work associated with this project has been completed. In the future the Port and Harbour Bureau is to establish a local airport on an island reclamation beyond Port Island, again to serve the Kobe area. All freight to and from the main container berths is by truck, with no rail delivery of cargo within the main areas of the port. Organisation of the construction and repair of breakwaters lies with the 3rd District Port Construction Bureau of the Ministry of Transportation as part of Japan's sea defences against typhoon damage.

Damage Assessment Report
There was considerable settlement and/or liquefaction of the reclaimed areas of the Port. This settlement was accentuated alongside any well-engineered piled structures, with drops of up to 2 metres to the settled reclamation against them. The new reclamations were faced with either gravity seawalls or caissons which had all been rotated outwards by the lateral earthquake force exerted by the reclamation.
Lateral loadings of 15% of gravity load had been allowed for in the design of these structures (as reported by port engineers). This lateral load allowance is now being increased to 25% as a result of analysis of this earthquake (a simplified illustration of this change is sketched further below). The spreading of the wharves resulted in subsequent spreading of the legs of all 55 container cranes at the port and the total collapse of one. All container berths were rendered inoperable as a result. Machinery within the cranes was largely undamaged by the earthquake, apart from that within the crane which collapsed. Many elevated linkspans and gangways to roll-on roll-off berths were also dislodged, with spans pulled off the support seating at one end by the lateral movement. Some undamaged piled wharf structures were however immediately useful and were available to assist in the relief effort. Pictures were shown of a naval water tanker berthed against a wharf supplying water to road tankers for distribution to the public immediately after the earthquake. There are reports of damage to cargoes in containers affected by muddy water from the liquefaction of the soil and from containers sinking into depressions behind the wharf frontage. Some cargoes were also damaged inside collapsed warehouses, particularly at the Shinko piers. Power to the Port areas was generally not available for up to two days in most areas until the city supply was re-established. Water supply however was out for longer due to damage in the main feeds across bridging to the main islands. Refrigerated containers were therefore at risk, as was cargo in cold stores which required water to operate. However, the very cold weather conditions at the time meant only small rises in temperature, insufficient to adversely affect the cargoes stored. Approach spans on the Kobe Bridge connecting the mainland with Port Island were damaged, but with restricted access, limited traffic could use one lane each way within two days of the quake. The Rokko Bridge to Rokko Island was undamaged, although it was eight days before heavy traffic could gain access without weight restriction. However, access to the Port area continues to be restricted by the central city congestion while the overhead expressways are under repair or being strengthened along the entire length of the city. Shipbuilding yards owned by Kawasaki and Mitsubishi were both damaged during the earthquake, including some minor damage to ships at the yards at the time. While both companies operate other facilities, commitments to existing shipbuilding orders at these other yards will mean delivery delays to future orders. Seawalls which provide protection from typhoons also settled as a result of the earthquake, by as much as 2 metres. This was cause for some concern as their effectiveness in protecting against storm generated waves was reduced. Rapid measures to increase the height of these have been undertaken and this work had neared completion in time for the typhoon season.

Effectiveness of Mitigation and Preparedness Measures
The Port and Harbour Bureau engineers did have some limited emergency preparedness procedures in place prior to the earthquake. However, this scale of earthquake had not been predicted for the Kobe area and the plans developed were insufficient to cope with the size of the disaster and the widespread extent of damage. The Port was well protected for typhoon damage with extensive breakwaters built out into the harbour.
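Referring back to the quay wall damage summary above: raising the pseudo-static lateral allowance from 15% to 25% of gravity load increases the design inertia force by two thirds. A minimal sketch of what that allowance means for a single caisson follows; the caisson weight used is an assumed, purely illustrative value, and real quay wall design would also account for dynamic earth and water pressures (e.g. Mononobe-Okabe methods).

```python
# Pseudo-static lateral design force on a gravity caisson: F = k_h * W.
# The 20,000 kN weight is an assumed, illustrative value, not a Kobe figure.

caisson_weight_kn = 20_000.0   # weight of one caisson per design unit (assumed)

for k_h in (0.15, 0.25):       # old and revised seismic coefficients from the text
    lateral_force_kn = k_h * caisson_weight_kn
    print(f"k_h = {k_h:.2f}: design lateral force = {lateral_force_kn:,.0f} kN")

# Raising the coefficient from 0.15 to 0.25 increases the design lateral force
# by two thirds, which is why the replacement caissons are being designed to
# resist larger sliding and overturning demands.
```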
The Port was able to quickly assess the extent of damage and communicate this promptly to its customers to enable them to organise diversion of shipping away from the affected wharf areas. A regular communication sheet in languages other than Japanese was widely and regularly distributed to publicise the state of reconstruction of the Port area.

Response and Recovery Aspects
The extent of recovery from the earthquake in eight months has been most impressive. Initially the emphasis was placed on making temporary repairs to enable cargo to flow across the wharves, without repositioning or carrying out any long term strengthening of the key structural elements. Arrangements for permanent repairs were also made, and Figure 6.1 illustrates the two different permanent repair methods proposed for the repair of rotated caissons within the container handling wharves. The following methods have been adopted for the repair of the face of the berths:
• Temporary repairs have been carried out at some berths by leaving rotated caissons where they finished up after the earthquake and reconstructing the pavement behind.
• Figure 6.1 shows the long term repair methods where sufficient room is available within the harbour area to extend the wharf face beyond the line of the previous berth face. This is being achieved by one of two different construction methods:
1. The placing of a new line of caissons outside of the existing damaged face. The new caissons will be designed to resist higher lateral loadings from the reclamation (Figure 6.1a).
2. The driving of piles beyond the rotated caisson to create a new wharf structure in front (Figure 6.1b).
• The final alternative, for areas where the wharf face cannot be moved outwards, is to lift the old caisson clear, excavate and rebuild the foundation and reseat it near to its previous position in a manner which is more resistant to the lateral earthquake loads.
Heavy lifting plant brought into Kobe from elsewhere in Japan is making a fast recovery feasible. The use of heavy lift floating cranes of sufficient reach to lift entire container cranes has meant that the cranes do not need to be completely dismantled. There are still several cranes sitting with their superstructure on the deck awaiting the reconstruction of their supporting legs. Of the 55 cranes damaged, 26 cranes were operational as at 26 August 1995, with 10 container berths open for use. As of that day there were a total of 86 of the original 239 berths available for use and this was meeting up to 70% of the number of liner trades which had been operating through the port prior to the earthquake. Shippers have redirected some vessels to other ports such as Osaka and Yokohama, and some of the Kobe port workers have moved to these ports to cope with the extra traffic through them. Price reductions through the Port of Kobe have been offered to users of damaged facilities to compensate for disruption. These reductions will remain in place until March 1996 or until a facility is fully restored. Cost estimates provided in fact sheets issued by the Port of Kobe show that all existing port facilities will be completely restored within two years of the earthquake, at a cost estimated at NZ$10 billion. An amount of NZ$16 billion is also set down to be spent on additional port development over a ten year period.
(Costs for reconstruction of private port facilities are additional to these figures, although an allowance of NZ$0.33 billion has been included in these figures for the reconstruction of damaged seawalls).

Interdependence Issues
The inability of the port area to operate without power and water has been emphasised by this earthquake. Power was restored relatively quickly to enable refrigerated cargoes to be saved, although in the heat of summer more losses would have resulted. The Port is still constrained by the delays to traffic within the city area. This is caused by the extensive reconstruction of the roading network. While container trucks are getting priority access through the city area, the congestion does result in long delays for the truck operators. Twenty-four hour, seven day a week operation of the Port has been brought in to attempt to alleviate this problem and to ease pressure on the available berthage. The reinstatement of the damaged link between Kobe Route 3 and the Port Island wharves was essential. This need was such that the Port of Kobe required the construction of a temporary bridge while the damaged link remains under repair. This multi-span bridge was completed in August, and has been designed and constructed at some cost to a high standard. The reclamations underway for port development have meant that demolition waste from demolished city buildings can be used as fill material. Part of the reclamation on Port Island is also being utilised for emergency housing. Thus, as well as forming a vital route in for relief and construction supplies, the Port is meeting a social need in the reconstruction of the remainder of the adjacent city area.

Lessons for New Zealand
• Well engineered port structures can be made to withstand earthquake loadings of the magnitude experienced in Kobe. The extra costs in designing to these loadings are likely to be small in comparison to the reinstatement cost of any failure and the inconvenience of being without the port infrastructure during the repair period.
• Many port areas are on reclaimed land. Settlement and/or liquefaction of these areas is likely during an earthquake and allowance should be made for this in services where they transfer from reclamation to a rigid piled structure. Retaining structures do need to be designed to withstand the earthquake loading from the land behind with a considerable safety margin if damage is to be avoided.
• Use of materials which can flex with earthquake movement is important for services within reclaimed areas.
• Port areas are very reliant on good transport routes to the areas they serve. If these are disrupted, the resulting congestion is likely to divert trade to alternative ports unless those affected can be compensated for the loss.
• The port area has a vital role in assisting with the recovery through the delivery of heavy equipment, emergency water supplies and materials to aid with the initial recovery and the reconstruction.
• The reconstruction is made much easier and quicker by the availability of heavy lifting and reconstruction plant close at hand. This plant is required for the large civil engineering projects undertaken regularly in Japan. Many large floating cranes are now in the Kobe area to assist with this reconstruction.

Network Description
Gas supply is 94% LNG, which is imported under long-term contracts from Brunei, Australia, Indonesia and Malaysia.
The remaining 6% is Petroleum Gas (from Naptha) which is basically maintained as an emergency backup to LNG. The average residential user uses 19 GJ/year and gas is the dominant energy source for cooking, heating and hot water in the home. Natural gas from the LNG terminals is transported through high pressure welded steel transmission pipelines which operate at pressures of up to 40 bar. These transmission lines take natural gas to 19 supply stations which are distributed throughout the supply area. At the supply stations, the pressure is decreased and fed into a medium pressure pipeline network. The medium pressure networks are divided into two pressure ranges known as medium pressure A (3 to 10 bar) and medium pressure B (1 to 3 bar operating pressure). Both medium pressure A and B pipeline networks are used in inter-city transportation and in gas supply to large hospitals, hotels, factories and other institutions. A total of 37 large diameter spherical gas holders are located at the various supply stations and these provide a buffer for the daily fluctuation in gas demand. The gas is then reduced to low pressure (1.0 -2.5 kPa) for distribution to households and small to medium industrial and commercial customers. The medium pressure mains are either welded steel (87%) or ductile iron pipes (13%), and are consequently reasonably resistant to damage by earthquake. The low pressure distribution mains comprise steel pipes (61 %), ductile iron pipes (30%), grey cast iron (6%) and polyethylene PE (3 % ) . Only a small amount of the low pressure steel system is welded, with most of the low pressure steel pipes being either screwed or mechanically jointed. Together with the older cast iron pipes, these proved to be the most vulnerable part of the system during the earthquake. Distribution mains with pipe diameters of 100mm or greater are buried at a depth of 1.2 metres. Those mains less than 100mm in diameter which are commonly referred to as branch pipes are buried at a depth of 0.6 to 0.8m. Because of the possibility of third party damage, current regulations only allow the use of PE pipe for low pressure systems. The entire gas distribution area of Osaka Gas is divided into several blocks which can be isolated independently from each 29 other in the event of an emergency. The whole service area is divided into 8 "super blocks" and these are further subdivided into 55 "middle blocks". Only gas supplies to the 8 super blocks can be remotely controlled. Since the distribution system was relatively recently converted from manufactured gas to natural gas many of the sector valves from the conversion still remain in place and were able to be used for isolation. However because Osaka Gas supplies 5.6 million customers, isolation of each of the middle blocks usually resulted in suspension of supply to greater than 100,000 customers. Damage Assessment Report Damage to the network and production facilities can be summarised as follows: Production Facilities Major production facilities at LNG terminals at Senboku (further east round Osaka Bay from Kobe) and Himeji (west of Kobe) were not affected by the earthquake, and although they were not in operation at the time, automatic terminal shutdown would have occurred if an earthquake of seismic intensity greater than four on the JMA scale (MM VI) had been detected. At Senboku terminal, which is on reclaimed land, site liquefaction occurred over 9,000m 2 (1 % of site area) where the ground had not been compacted. 
However damage was limited to inclined instrumentation boxes and water leakages through flanges of a seawater line. Even though a maximum ground acceleration of 0.833g was recorded at Fukiai supply station, no damage was sustained by the two gas holders at this site. A total of 12 gas holders were located near the epicentre and none experienced damage. None of the 3,212 governor stations installed in the Osaka Gas service area were damaged. High Pressure Pipelines The high pressure pipeline network (total length 490km) sustained no damage even in areas along Osaka Bay where sand boil, fissure and ground subsidence occurred due to liquefaction. Medium Pressure Pipelines The medium pressure A & B pipelines sustained failure at a total of 106 locations (see Table 7 .1) and most of the leakage was due to loosened dresser couplings caused by the resultant ground movement. A total of 14 leaks were found on welded steel pipes due to weld cracking because of a low grade welding process. Most of the leakage at dresser couplings occurred at areas of ground liquefaction and where seismic intensity was greater than JMA 7 (MM XI), and in particular. where dresser couplings were used in valve manholes. Dresser couplings used on direct buried steel pipes had in many cases been covered with a sleeve which had then been welded to the pipeline, and in such cases leakage had not·been a problem. Low Pressure Pipelines The majority of the leakage from the distribution network occurred on the low pressure pipelines, as indicated in Table 7 .2. It can be seen from Table 7 .3 that the majority of failures occurred at screw threads on customer service pipes or at threads on main pipes less than 100 mm diameter. This is not unexpected as threading is the preferred method of jointing on these smaller pipe sizes, and the joints then become the weakest part of the network. Osaka Gas had developed a compression type joint with an end loading arrangement to anchor and join smaller diameter steel pipes. These joints prevent slip out, are highly earthquake resistant and are known as the SGM joint. These SGM joints were used for repair and restoration of the damaged sections of the low pressure steel pipelines. There were no reports of any failures from a total length of polyethylene pipe of 1169 kms. The disaster mainly affected residential customers together with some commercial users. No large industrial customers were severely affected and, generally speaking, users of any size were not in a position to use gas immediately after the earthquake, even if it were available. Effectiveness of Mitigation and Preparedness Measures The overall performance of the distribution network and production facilities during the earthquake was enhanced by the fact that Osaka Gas had recognised that earthquakes were a distinct possibility, and had implemented various measures to reduce the possibility of damage. These included the construction of earthquake resistant LNG storage and distribution facilities and the introduction of micro computer controlled intelligent gas meters. Production facilities and gas holders are designed to conform with relevant legal requirements, and are constructed after confirming their earthquake resistant performance by means of dynamic structural analysis and other advanced techniques. 
Osaka Gas had also been strengthening parts of the pipeline network to make them more resilient by using earthquake resistant joints for joining pipe (particularly in areas of public assembly and locations of known subsidence such as reclaimed land). Gas meters for domestic customers have been replaced with intelligent meters that shut off the gas flow when a seismic sensor detects an earthquake, and currently 70% of domestic customers have intelligent meters installed.

Response and Recovery
The initial response following the earthquake is summarised as follows: In other areas, gas supply to a further 23,000 customers was also suspended, giving a total of 855,000 customers suspended. As the above summary shows, some 6 hours elapsed before Osaka Gas acted and began shutting off supply. While the company was severely criticised in some quarters for taking this long to act, they had immense difficulty in establishing the extent of the damage, in mobilising staff and in actually reaching and operating manual shut off valves. This is despite having:
• a sophisticated SCADA and monitoring system
• a well designed system in terms of supply discontinuation
• a well prepared response plan
• an independent, earthquake proof radio communication system
• annual emergency drills
Some of the initiatives that Osaka Gas took in dealing with the residents of areas where gas supply was suspended included:
• they set up telephone hotlines and manned them around the clock so that customers were able to relatively easily obtain information on the restoration and reconnection situation.
• they provided, free of charge, around 170,000 portable LPG cookers to those customers in the worst hit areas.
• they set up temporary shower facilities at shelters and temporary baths at their own bases for use by local residents. These facilities were used for in excess of 90,000 man hours.
Key aspects in managing the recovery phase were:
• provide clear, frequent and ongoing communication as to what is happening and what progress is being made.
• ensure essential services have supply by whatever means necessary.
The first week after the quake was occupied with damage assessment and resumption of gas supply to vital locations via the medium pressure lines. During these critical days rescue works had greatest priority and gas supply (or alternative supplies) was restored to hospitals, nursing homes, schools, kindergartens and a crematorium. Also some important customers, normally supplied from the low pressure system, got a temporary supply from the medium pressure system via a regulator. The restoration of the low pressure network commenced on 23 January, with the logistics of the operation being a massive management project in itself. Osaka Gas established up to 12 temporary bases at accessible locations ringing the damaged area. Most of the bulk material was shipped by boat or helicopter to the bases where it was stored. The approach taken with restoration was to begin work in the less affected areas on the perimeter and gradually work inwards to the most seriously affected areas. A set procedure was followed in all cases, with safety being of utmost priority, as follows:
1. Close customers' gas meter valves
2. Isolate the sector
3. Inspect and repair gas pipes buried under the road
4. Inspect and repair gas pipes and equipment installed in houses
5. Resume gas supply
First, Osaka Gas personnel visited every customer in a particular area and closed the customer's gas meter cock.
Then the mains connection was cut off to form a block (called a restoration sector) of a size from three to four thousand customers. Upon establishing the restoration sector, the mains and branches of each sector were inspected and, in the case of leakage, repaired. If water and/or sand were found to have entered the pipe it was cleaned and discharged. Water ·ingress was a major problem due to a combination of the very low pressures in the branch and service lines and the common occurrence of water pipe leaks. After repairing the mains, personnel contacted every customer in the sector and inspected the gas facilities inside the premise (i.e. piping, appliances, air intake and ventilation system etc). Once this occurred for every house in the sector, supply of gas was resumed. This process was repeated for every restoration sector. In total, restoration involved 9700 people and 4800 vehicles. Of these, 3700 people and 2000 vehicles were contributed by the various other gas utility companies under the auspices of the Japan Gas Association. The restoration works were carried out by dividing the five middle blocks, where the gas supply was suspended, into over 220 restoration sectors. Restoration of a sector was normally completed in four days but in the worst hit areas much time and labour was taken up in removing large quantities of water and sand/mud which had entered the system. In these areas it often took well over a week to restore one sector. Restoration efforts were also impeded by wrecked houses which blocked roads. Despite the problems, restoration works were essentially completed on April 11 (i.e. a period of 85 days, refer Figure 2.4) with supply having been restored to over 700,000 customers in the intervening period. The total cost of the earthquake to Osaka Gas was estimated to be NZ$3.2 billion. .5 Interdependence Issues The greatest problem was the communication within the first hours after the earthquake. The cellphone network was overloaded within one hour, as soon as people headed to work. The company's own radio communication system was not affected by the earthquake and was invaluable for information transfer between the offices and field crews. Many of Osaka Gas' employees were living in other cities within the greater Osaka region and depending on trains to commute to work. With most of the trains being out of operation, commuting from home to work became a major problem. The restoration teams had to start work very early in the morning in order to avoid traffic congestion. The availability of electricity was not critical for the gas company, as repair crews were mobile and had their own energy supplies. Some criticism was expressed for a lack of communication between Osaka Gas and Kansai Electricity during the first hours and days after the quake, when some gas leaks were ignited by restored electricity, Accommodation as close as possible to the damaged area had to be found for the 3,700 gas workers coming from other parts of Japan. This was an additional demand on an already overstretched supply of temporary housing in the area. Some ships were used as hostels, and other workers were accommodated in hotels that had suffered only minor damage. Because the damaged piping was large in diameter and made from cast iron or steel, much of the material was bulky and heavy, and had to be transported by sea. Many of the harbour facilities however were damaged and could not be used. Small goods such as food boxes were often unloaded by hand. 
Another problem was too little space for storage of equipment, tools, vehicles and removed rubble. Those lifeline facilities which depended on gas, such as hospital, schools, nursing homes and a crematorium were reconnected within a week. These institutions were either supplied from the medium pressure system which had only little damage or got a temporary supply with LPG, CNG or LNG. Other interdependence issues included: • Broken water mains allowed water to enter damaged gas lines. • Road and footpath excavation for gas repairs hindered surface restoration. .6 Lessons for New Zealand As a result of this visit it is very clear that the larger the earthquake and the larger the population· centre affected, the longer will be the period of disruption. Although New Zealand does not have the same degree of population density, there are several lessons that can be learnt from the Great Hanshin earthquake. The following recommendations are suggested to minimise damage and disruption both to gas networks and to customer relations in the event of a similar earthquake occurring in New Zealand. • Rehabilitate Old Networks Continue with and, if possible, accelerate the rehabilitation program to replace in particular screwed steel, old cast iron and asbestos mains with polyethylene. • Investigate Vulnerability Examine the network in detail for areas of vulnerability, particularly in areas prone to earthquake and/or close to known fault zones. Buildings and bridges designed and constructed according to modern seismic standards in general survived the earthquake well. However, older buildings are prone to collapse and damage to meters and pipework will occur. Consideration should be given to identifying the risk associated with these buildings and how the gas supply can be isolated with minimum disruption to other customers when access to the site is restricted. All pipework on older bridges should be subject to risk assessment and appropriate action taken. • Review Emergency Plans Emergency plans should be reviewed regularly and tested by way of regular emergency exercises. Communication with other service utilities is important. Several fires were caused because the local electricity company did not liaise with Osaka Gas before restoring power to an area. Communication within the gas industry is very important, because with a reducing workforce of skilled people, dependency on each other is likely to be significant. • Investigate Intelligent Meters These meters contain a seismic sensor that shuts off gas flow if a tremor occurs that is in excess of a pre-set limit. They cost about NZ$300 and have batteries that last 10 years. The use of these meters or other automatic shut off devices should be investigated. They could, for example, be made available as an optional extra for concerned customers. • Sectorise Networks The importance of sectorising the network in order to be able to shut off damaged portions and maintain supply to undamaged areas cannot be over-emphasised. All utilities should examine the flexibility that is available in their systems. • Communication In the event of a disaster like this good communication is vital. Good relations with the media, particularly TV, must be developed. The establishment of a media plan for the industry could be considered. A plan to provide alternative supplies (ie CNG, LPG) to key customers could also be developed. • Expect the Unexpected Prior to January, 1995 Kobe was regarded NOT to be prone to earthquake; an opinion which was obviously ill-founded. 
While factors such as lower housing density, better building standards and a higher proportion of polyethylene mains will mitigate earthquake damage in NZ, everything possible should be done to be prepared and regard all areas as potential earthquake sites. Network Description Kansai Electric Power Company, with 11. 7 million customers and a system peak load of 28,000 MW, is the second largest utility of the nine generating, transmission and distribution utilities in Japan. The company transmits electricity at a number of voltage levels from 500 kV, through 275 kV, 187 kV, 154 kV to 77 kV. Kansai Electric has a total of 1271 sub stations of which 67 were within the earthquake affected areas. Local distribution in the Kobe area is principally by low voltage (100 volt household supply) and high voltage overhead lines. These are supported on concrete poles, with only limited underground cabling in the central business district. All Japanese utilities have adopted the same earthquake design standard since 1980. This requires the qualification of all equipment to survive an earthquake with a 0.3g ground acceleration. All equipment is shown to be adequate by a sine beat shaking table test. By comparison, Trans Power uses a 0.4g ground acceleration spectra modified to O. 7 g near faults. Equipment can be proven adequate by either test or analysis. If analysis is used, conservative safety factors are imposed to cover uncertainty in modelling. Damage Assessment Report No functional damage was caused to the 500 kV facilities which represent the backbone of the Kansai power system. All of the 500 kV network is located outside the peak ground shaking area. There was however substantial damage to the 275 kV and lower voltage facilities [Morii, 1995]. Generation Facilities Ten of the 63 available thermal power plant units were damaged. This damage typically involved distortion of the boiler tubes. Sub stations Of the 67 sub stations in the earthquake area, 48 sub stations sustained some damage and 8 were blacked out completely. Three of the four 275 kV sub stations in the Kobe area were blacked out. 14 out of the 29 (48%) transformer sub stations within the highest intensity zone were damaged. These 14 were all constructed prior to 1965, while the seven sub stations built after 1975 were undamaged. Sub station damage featured the following: • Lightning arrestors and disconnectors either destroyed or badly misaligned (Figure 8.1). • Centre-clamped bushings on 8 older oil circuit breakers slid across the baseplate resulting in oil leakages. • Anchor bolt failures to 17 transformers resulting in transformer movement and consequent destruction of attached equipment Transmission Lines Twenty five transmission lines sustained damage, including the following: • Jumper insulators on strain towers failed. Most jumper insulators are long rods rigidly bolted to the tower in contrast to Trans Power practice of using disc insulator jumpers. • Tower steel member distortion on twenty towers resulted in excessive crossarm deflections and electrical failures. This appears to have principally occurred due to movement of tower foundations in areas of very high ground displacement, rather than direct failure of steel members. • Damage to underground transmission lines was found along 95 lines. The greatest damage was concentrated in areas subject to strong earthquake movement along the waterfront. Distribution Lines A total of 649 circuits were damaged due to failure of distribution lines. 
Many distribution poles carried either polemounted transformers or SFL gas switches, and are consequently top heavy. Approximately 8000 distribution poles fell over or were broken when buildings collapsed. Underground ducts through which distribution cables run suffered various degrees of damage, including the collapse of the side walls. Effectiveness of Mitigation and Preparedness Measures Following the 1978 Off-Miyagi earthquake, there has been a considerable mitigation effort put in to reduce the effects of earthquake motion, including stringent design standards for new equipment. However, at a system level the design of the network has also embodied the following key features: • Redundancy and flexibility -a strongly interconnected grid. • Diversity of equipment types. A key preparedness measure was participation in a nationwide mutual aid scheme to provide assistance to/ receive from other electric power companies. The importance of these measures are elaborated upon in the following section. Response and Recovery Aspects Loss of power to 2.6 million households occurred at the time of the earthquake. This figure represents approximately 25 % of the total number of customers. Switching operations were immediately performed to re-route electricity via unaffected sections of the distribution network (i.e. 77 kV). Within two hours, the number of households without power was reduced to 1 million. 35 Within 8 hours, 50% of lost power was restored, and within three days 90% of power was restored. Full restoration was achieved in seven days (refer Figure 2.4). In comparison, in Los Angeles 90% power restoration was completed within 20 hours of the Northridge earthquake. Six 275 kV sub stations and two 154 kV sub stations were completely blacked out after the earthquake. Despite equipment damage and substation blackout, the system was quickly relivened by bypassing damaged equipment. This was enabled by the strong and well gridded network in the area, and the multiple interconnections between the various voltage levels permitting complete damaged sections of sub stations to be duplicated by a lower voltage equivalent. The system load was reduced to 60% of the monthly average demand by the estimated total loss of 50,000 of the 1 million customers in the area. The demand was subsequently increased to 80% of the monthly demand as people swapped away from gas cooking to electricity. The demand could be met due to system redundancy, despite damage to equipment. Personnel for recovery works Immediately after the earthquake, the company modified its entire organisational structure in order to assign as many staff as possible to the task of restoring power supplies. This task was made more complicated because a number of service centres were located in the areas worst affected by the earthquake, and sustained damage. Only 45 % and 55 % of staff were able to get to the Osaka and Kobe offices respectively on the day of the earthquake. The company allocated more workers on a step-by-step basis, as the assessment of damage proceeded and the restoration work entered new stages. Figure 8.3 illustrates the source of workers during the emergency power distribution stage. Other electric power companies in Japan provided an additional 319 personnel. .3 Deployment of Staff During the Emergency Power Distribution Stage It is understood that the "staff from affiliates" shown in Figure 8.3 refers to contract staff who carry out everyday tasks for Kansai Electric Power Company. 
This available regional resource base, particularly in proportion to Kansai Electric Power staff is highly impressive, and was a key factor in the rapid recovery of service. Procurement of materials and equipment Kansai purchased the majority of electrical equipment from one supplier in the local area. As this supplier was out of action due to earthquake damage, some replacement equipment was unavailable. The company made use of other materials that had been stored for emergency use. To restore damaged facilities quickly and safely, the company provided the affected service centres not only with workers but also with as many special purpose vehicles as possible. Since the earthquake caused extensive damage to support structures, the company provided a large number of special vehicles for aerial work (cranes), and excavation and erection of poles. In addition to the additional personnel noted above, the other Japanese power companies provided diverse types of material assistance including 52 high-voltage power generation vehicles and 77 working vehicles and vehicles equipped with satellite communications facilities. The loan of the power generation vehicles was particularly important, as Kansai Electric Power only had eight such vehicles. The power generation vehicles were used to provide electricity to vital service facilities such as police and fire stations, hospitals and community shelters. Other facilities to receive power from these vehicles included hotels, camping grounds and the company's own facilities. Food and temporary accommodation After the earthquake the company undertook to transport food and drink and other necessities for workers to the affected areas from two different sources -from the head office and from the Himeji branch office. On the day of the quake, however, traffic in Kobe and surrounding areas was heavily congested -much worse than expected -and this caused delays in delivery of these provisions. In some places on that day, staff had to make do with dry biscuits and other emergency food kept in stock. From the next day on, however, conditions improved with earlier departures of delivery vehicles and the use of sea transportation where possible. Since most hostels, hotels, and other private accommodation were closed in the affected areas, the company used meeting rooms and other available space in its service centres as temporary overnight accommodation. In addition, for extra accommodation capacity, the company also made use of passenger ferries, long-distance sightseeing buses with toilets, and tents. Principles and Strategies Adopted for Emergency Power Distribution The following key principles and strategies were adopted in establishing emergency power supplies: • In allocation of emergency power supplies, the highest priority must be given to vital services (hospitals, shelters, municipal offices, etc.), and to providing temporary power supplies for refugees. • Emergency power distribution measures requiring the minimum amount of labour to implement were used, as well as the maximum number of available workers; also as many vehicles as possible were used for maximum mobility. • Impose strict safety standards in relation to equipment, personnel, and electricity, in order to avoid secondary hazards. The nature and scope of temporary repairs implemented in accordance with these strategies is diagrammatically represented in Figure 8.4. Emphasis was placed on repair methods requiring the least amount of labour. 
The result was the supply of power to all customers able to receive electricity by 3pm on 23 January. Communications One of the key factors contributing to the rapid restoration of power supply was that the private microwave communications system operated by Kansai Electric Power was not damaged. This microwave system links the central load dispatching centre with all the main power stations and sub stations. Kansai Electric Power also have an independent telephone system using microwaves and fibre-optic cables. As a consequence, the failure of the public telephone system did not impact on the restoration activities. Interference did become a problem in the days following the earthquake, and so the company set up temporary radio base stations and assigned a different frequency band to each restoration team. Communications via portable cellular phones was also effective. For communication between a base station in each block and a service centre, due to traffic congestion, the company made frequent use of motorcycle courier services for faster communication. Trans Power has a similar communication system, and it is designed to a higher security level than the power system to ensure availability after an earthquake. Transporlation Means On roads from branch offices in the peripheral area to service centres in the affected area, the flow of traffic on the day of the earthquake was around 10% of the normal speed (ie. about as slow as walking). The company had five emergency vehicles in the service area of the Kobe branch office. Combining them with those provided by other branch offices, the company managed to put to use 30 emergency vehicles after the earthquake. They were useful even before traffic was controlled, but they became much more effective after traffic controls were imposed, particularly since these vehicles were permitted to ignore general restrictions. For example, these vehicles were allowed to travel three times as fast as ordinary vehicles. Temporary Repairs to an HV network The emergency vehicles were used every day for about a month after the earthquake. They proved especially useful in leading other vehicles employed in restoration work. Subsequent Restoration Work Following the supply of power to all customers one week after the earthquake via temporary measures, emphasis shifted to pennanent repairs. There was a degree of urgency about this work, as the peak demand for electricity occurs in summer due to air conditioning and cooling requirements in 30° C plus conditions. Work such as repairing segmented routes and replacing damaged underground cables was instigated, along with permanent repairs to equipment and the replacement of damaged poles before the onset of the typhoon season. The process of checking underground cables for damage took four months. There was also considerable ongoing demand for disconnections and re-connections in relation to damaged and new buildings, along with tli.e wiring of temporary housing camps. Financial Implications It is estimated that full restoration will cost NZ$ 8 billion. Approximately 10% of this relates to office building , ;,pairs and 30% to repair and replacement of equipmen1 with the remainder being required for investn1ent in new fa , ,:ies due to damage from the earthquake. A NZ$125 million net loss of revenue occurred m January. February and March 1995. A decrease of 1.6% of total electricity sales for the April 1995 -March 1996 year is predicted. This is due in the main to the number of collapsed and inoperative buildings. 
Interdependence Issues The response by Kansai Electric Power Company was severely hampered by the congestion in the transportation systen The extensive use of heavily marked emergency vehicles was of some benefit in this regard. The private microwave system operated by Kansai Electric Power removed what would otherwise have been a critical interdependency. Some standby generators in emergency institutions were 'found to depend on reticulated water for cooling, and could not function in the absence of this, While the process of re-livening power was believed by some to be responsible for igniting buildups of gas, Kansai Electric Power maintained that affected lines were fully checked out prior to re-livening. Lessons For New Zealand • The design loads for disconnectors in New Zealand have been increased. Further consideration will be given to the relative merits of seismic testing or more detailed analysis of disconnectors. • Both at Los Angeles and more so at Kobe, the underlying soils affected the sub station performance, A better understanding of the factors that affect sub station equipment interaction with soils is now possible and will be developed in the near future, • The Japanese require all new equipment for sub stations to be tested to prove adequate seismic performance, and do not rely on seismic analysis. Trans Power currently permits either analysis or testing to seismically qualify equipment. For major replacement contracts where a significant amount of equipment is to be purchased, testing should be considered. Some advanced work has already been initiated with the University of Auckland, in conjunction with Los Angeles Water and Power, to refine analysis techniques so they may be used with more certainty. Once completed this analysis will be required for smaller orders where testing can be uneconomic. • A policy to ensure that identical items of critical equipment not be installed in one sub station should be considered. If different brands are installed, then it is unlikely they will all have a similar design defect. There are potential maintenance problems that must be assessed. • Site specific response plans for particular sites (e.g. sub stations) which identify how equipment can be bypassed and where appropriate spares are located are being developed. In both Los Angeles and Kobe, the systems were restored quickly due to the strongly interconnected grid and the redundancy within the system. Damaged equipment at Kobe was simply bypassed by using alternative voltage levels. . This level of interconnection and redundancy is not present in the New Zealand transmission system and similar response times cannot be expected merely because strong equipment is purchased. In the absence of system redundancy, quick restoration will be assisted by the development of specific disaster plans for each sub station site which identify how equipment can be bypassed and where appropriate spares or equivalents are located. Involvement of the connected power companies in compiling these plans will enable the prioritisation of crucial supply circuits. • Careful co-ordination of the re-livening process with other utilities, particularly gas, as well as the public must be thought through as a part of the response planning process. Network Description Telephone services in Japan are provided by a single company, Nippon Telephone and Telegraph Corp. (NTT). NTT owns and operates 85 percent of Japan's cable infrastructure as well as nearly 80 percent of the country's radio services market. 
The carrier's dominance of Japan's domestic telecommunications network is such that most alternative service providers license and resell network capacity from NTT. The telephone local access network in the Kobe area consists of a combination of overhead lines and ducted underground cables. Approximately 20% of cables for local calls and 80% of the toll lines are underground. There are 1.5 million lines in the affected area. Four companies provide cellular services in the Kansai area. Damage Assessment Report On the morning of the earthquake, telecommunication services were disrupted to 285,000 of the 1.5 million lines in the region. Of the 30,000 data circuits and general leased lines 3,170 failed. Major switching facilities, which carry communication between Prefectures were not damaged. Major transmission lines were affected in a four block area only. These were switched over to spare transmission lines which substantially reduced the impact on ordinary subscribers. Communication within the Prefecture were severely impacted by the failure for a number of hours of 11 switch units at 7 sites. These failures caused problems for up to 285,000 subscribers. The cause of these failures was damage to batteries and emergency power generating facilities. Some exchanges failed immediately the earthquake struck and others failed later when battery reserves were exhausted. The power failures also caused links passing through these sites to fail. While NTT reportc;d damage to three steel towers which were built on the roofs of buildings, the damage was not sufficient to interrupt the radio links using the towers. Three of NTT's network buildings were damaged. Approximately 3,500 public telephones in the disaster area were disabled, with 1,800 being restored by the 1st of February [Takei and Maki, 1995] . Telephone service congestion occurred on a huge scale. It was estimated to be the largest case of traffic congestion in Japan since entering the information age. The number of calls trying to reach the Kobe area from throughout the country reached almost 50 times normal peak on the day of the earthquake. The next day they still reached 20 times the normal peak. Congestion was compounded by many handsets having been knocked off hook and the concentration of calls to specific emergency organisations immediately after the earthquake. This resulted in many complaints. Normally such congestion would converge quickly but in this case new disaster information was added as time passed and the reported scale of damage expanded day after day. Thus it wasn't until January 22 that the congestion status between Kobe and the rest of the country was finally resolved. Some specific circuits for emergency calls (119 and 110) stopped operating immediately after the earthquake. Problems increased as time went by as calls to directory assistance, faults and service assignment centres could not be responded to immediately. Local access links which connect NTT with their customers premises are predominantly provided by aerial cable. These were cut by either building collapse, damaged poles or fire at the subscribers premises. As a result 193,000 subscribers were without service. Approximately 200,000 lines were recovered within two days, although it took about 2 weeks to fully restore the service. The earthquake caused about NZ$460 million in damage to NTT' s physical plant and cable infrastructure. This figure does not include damage to NTT's Cellular network. 
Cellular network damage affected different service providers in different ways. One service provider had two switches, neither of which were damaged by the earthquake. However, 27 of the company's 153 base stations sustained damage that caused site failure [Sextro, 1995]. This damage fell into three categories. At two sites, buildings around the base stations collapsed damaging antennas and other equipment. Twenty base stations ceased operation because their transmission lines between the station and the switching centre were cut. An additional five base stations stopped functioning due to loss of the mains power supply. Sixteen base stations were restored on 17 January and the remainder within 6 days. Call volumes increased 30 fold which caused some overloading of the cellular switching equipment. A second provider had 37 out of 80 cell sites out of service primarily through loss of mains power. Recovery was rapid and all were restored within 2 days. Failure of a base station did not always prevent communication from the surrounding area as sometimes coverage could be obtained from more distant cell sites. Overload of cellular switches was a major problem. Underground cables generally suffered minor damage, whereas overhead lines sustained considerable damage. This damage typically resulted from either fire or the collapse of buildings, especially houses. There is an open cut telephone tunnel with a length of about 2 km in Kobe, and this has an average cover depth of 3m. A maximum movement of 180mm was found at the expansion joint at the section between the telephone exchange centre and the tunnel. A box culvert near the badly damaged Daikai underground railway station had movement at an expansion joint of 40mm horizontally and 130mm vertically. Liquefied sand and water flowed into the culvert at this point. Locations where liquefaction of the ground occurred was found to correspond with one of the hardest hit sections of the open cut tunnel. The breaking off of metallic conduits, the failure of polyvinyl conduits, and the peeling of cement mortar off the walls near the connections between the conduit and the manhole, was quite widespread. Most of the conduits that had been laid throughout Kobe City were old and provided for no expansion or contraction. The damage by the earthquake was concentrated in those old type conduits. Approximately 14,000 manholes had been installed, chiefly in the hardest hit areas of Kobe City, Ashiya City, and Nishinomiya City, and about 15% of these suffered some damage from the earthquake. The most common type of 39 damage was cracks in the neck of the manholes. NTT uses an underground conduit enclosing communications cables to make connections to subscriber's buildings. In Port Island and Kobe -Sannorniya Area, settlement in the range of 300-500mm in the ground near an approach to some buildings occurred. This movement led to the separation of some of the metallic conduit joints, the breakdown of polyvinyl conduits and the cutting off of some of the communications cables in the approach to the buildings. The NTT communications cable conduits across waterways are usually fixed to the underside of bridge girders. Joints that allow sufficient expansion or contraction in the event of an earthquake have been recently developed, and it is understood that these new joints suffered no damage from this earthquake. On the other hand, severe damage was done to the joints constructed to earlier specifications that permitted less expansion or contraction. 
Some of the conduits were damaged by the settlement of an embankment behind a bridge abutment during the earthquake. Underground cables did not sustain damage that affected the telephone service. There was however widespread relative movement of cables in manholes. Cables fall into two classes: optical fibre cable and the metallic cable made up of copper wire. No significant difference in either of the classes that would have affected the telephone service was found. Table 9 .1 summarises the history of seismic improvement measures for underground telecommunications facilities in Japan. These measures were initiated in view of the damage caused by some of the strong earthquakes that had occurred in the past. Soon after the Miyagi-Ken Oki Earthquake occurred in 1978, a thorough investigation into the development of the duct sleeve and the seismic conduit joint began. The former is set at the place where conduits and manholes are joined together and permits contraction or expansion of the joint, while the latter is inserted in a conduit run to provide for the expansion or contraction of the conduits and serves to prevent it from becoming separated. In 1982, the application of both of these devices to underground facilities began. Effectiveness of Mitigation and Preparedness Measures The underground facilities to which these devices had been applied suffered little damage from this earthquake. It has been found that the damage to underground facilities varies in size according to the following three conditions: (i) the structure the conduits are made up of; (ii) whether the ground becomes liquefied or not; (iii) whether or not the facility is located in an earthquakestricken area with a seismic intensity of 7 or more on the JMA scale (MMX-XII). There is a significant difference in the amount of damage to underground facilities between the liquefied area with a seismic intensity of JMA 7 and the unliquefied area with a seismic intensity of JMA 6 (MM VIII-IX). The breaking off of conduits joints resulting from ground deformation in a liquefied area is more frequent than in an unliquefied area. NTT has used stringent building construction control based on the Japan Building Standards Act. While many buildings and highways were reported damaged, NTT buildings received less damage. This was considered a good result and justification of their mitigation measures. • Apply steel conduits, stopper joints and gravel drain Nihonkai-chubu Earthquake method in Iiquefiable ground Response and Recovery Aspects NTT set up the headquarters for the disaster at 8.30am on the morning of the earthquake, and began to check and restore. Switching units without backup power were revived by the end of the 18th of January (the second day) because vehicles with power generators were found. The 285,000 interrupted lines were decreased to 85,000 lines by January 19th. Checks for the tunnel containing communication cables in Kobe City was finished by January 18th. Restoration works were carried out by 1,000 workers of Kobe branch and 3,000 workers from other branches. 11 vehicles with power generators and 6 vehicles with satellite communications were utilised. On January 31st the restoration was finally over except for 38,000 lines which were impossible to restore due to building damage or collapse. Figure 2.4 shows the relationship between the recovery of telecommunications with those of the other key utility services. 
To help handle the load, NTT added 5,000 lines serving the call district of Hyogo prefecture, which includes Kobe and several adjacent towns. The carrier also used selective call blocking to control call volume. Congestion due to the high number of inward calls restricted the availability of outward lines in the days following the earthquake, particularly for the cellular network. In the days following the quake, NTT added more reserve circuits and installed telephone sets in the high-damage areas. They also installed approximately 2,700 devices, including 350 fax machines, at 760 locations around the city for free use by emergency officials and the general public. It also deployed temporary and portable communications systems throughout the region to support the rescue and emergency efforts, including vehicle-based satellite terminals and cellular phone sets. The carrier adopted a triage approach in its repair operation, giving first attention to the relay stations and trunk circuits with the least amount of damage. On January 22, five days after the quake, NTT removed the selective call blocking from the network. During the time that the NZNSEE reconnaissance team was in the area (i.e. the second week after the earthquake), the cellular phone system appeared to be functioning efficiently, despite cellular call volumes up to four times normal in the damage area (20-25% higher over the whole Kansai area). However the demand for rental cellular phones considerably outstripped the supply, making it very difficult to find a phone available for hire. NTT has acknowledged that reliance on its sheer strength of resources was not enough to address the extent of problems that arose after the Kobe earthquake. The carrier is now involved in several initiatives aimed at improving the resilience of its network. · NTT's network planners have re-evaluated some of the carrier's deployment practices, and changes in thinking will be reflected in the way the carrier rebuilds its network in Kobe. NTT has said that it will place a high priority on installing underground cabling and switching facilities in the rebuilding process. The reason for this is that above ground facilities incurred severe secondary damage from fires, floods, and debris from collapsed structures following the quake. Underground deployment is significantly more expensive than above ground facilities, but NTT believe the improved reliability and network integrity justify the expense. In rebuilding the Kobe infrastructure, NTT will enclose underground equipment in specially hardened structures that are able to withstand greater levels of shock than conventional structures can sustain. NTT says it will develop a model transmission facility entirely underground in one of the restoration areas to demonstrate its new approach. In the Kobe and Kansai districts that suffered the greatest damage, NTT intends to provide all commercial users of network services with local-loop fibre installed underground. One of the cellular network providers managed to increase the number of channels at three cell sites by 50% within a week and during February did the same to another 13 base stations. This action required the prompt co-operation of Japan's PTT Ministry. Interdependence Issues The greatest dependency of the telecommunications networks is on power supply, and so their performance was greatly assisted by the rapid recovery of the power supply. Conversely, many of the other utilities were heavily dependent upon the main telecommunications networks. 
It is therefore somewhat ironic that one of the reasons given for the good performance of the power company is an independent communications system. Four vehicles with satellite units were dispatched by NTT immediately after the earthquake but they could not reach their destination until the following day due to road congestion. Several dedicated emergency communication circuits did not fare well. This included the failure of two dedicated NTT circuits used for automatically transmitting seismograph readings. As a result, there was a delay of over half an hour before the seismograph recording from Kobe was transmitted (manually, by wireless) to the Osaka District Meteorological Observatory. Until then, the peak shaking intensity was thought to be only JMA 5 (MM VII), which is below the threshold for severe damage to buildings. The Hyogo Prefectural Government's NZ$125 million emergency satellite communications network also failed, cutting communications with central government and local municipal offices, as well as with the local emergency services. The systems stand-by generator was damaged during the earthquake, and although the back-up batteries worked, they lasted only two and a half hours. JR West's communication system was another important emergency circuit that failed. Train crews normally communicate by radio with the stations, which are linked to head office by copper wire and fibre optic cables. All cables were severed by the earthquake and as a result, head office was unable to communicate with either the stations or any of the nine trains operating in the area at the time of the earthquake. Station staff took their own initiative and manually shifted signals to red immediately after the earthquake. Head office had attempted unsuccessfully to order the trains to halt. Lessons for New Zealand • Network management controls were important in maximising the network availability in the disaster area. • Availability of portable engine alternator sets is important in a major earthquake. • Overhead lines and cables can be a weak link. • Emergency plans must allow for the management of large numbers of staff and contractors from outside the damage area. Office was also present, along with other City officials. On behalf of the group, Wellington Regional Councillor Ernie Gates expressed New Zealand's sadness and condolences to the people of Kobe. Councillor Tanaka acknowledged with heartfelt appreciation the support offered by New Zealand in the form of blankets, clothes and other kindnesses. Councillor Tanaka reviewed the events following the earthquake from the council's perspective, and made available information relating to the response of the City, Prefecture and National Government that is reported on subsequently in this section. In his closing remarks, Councillor Tanaka stated the determination of the City of Kobe to build the city not only strong against natural disasters but also an affluent and comfortable city suitable for the 21st century. He expressed the wish that the co-operation between New Zealand and Kobe will further develop in terms of the study of protection against natural disasters. Japan The Kobe City Assembly is the political body which makes decisions on significant policies affecting the city. This assembly has a total of 72 elected councillors. The municipality headed by the elected mayor administers the Kobe City Council. The prefectural system operates at a regional level, with there being 47 prefectures in Japan. 
Kobe falls within the South Hyogo Prefecture, along with more than 100 cities and towns. The prefectures appear to act largely in a regulatory and monitoring role, and the prefectures contain sub-offices of government departments. Kobe is however one of twelve Designated Cities (based on size), which means that it has full control over regional functions in the city, and so is largely autonomous with respect to the prefectural government. The South Hyogo Prefecture has a greater role in the areas outside Kobe City. The National Government is the equivalent of our Parliament, although it features an Upper House (called the Diet). Approximately 70% of the tax revenue from Kobe citizens goes to the national treasury. Kobe City Assembly By 5 pm on the day of the earthquake, 25 of the 72 councillors were able to come to an emergency meeting at .the City Hall. The Mayor and Deputy Mayor reported on the immediate effects of the earthquake to these councillors. It should be noted that at this stage (ie. 12 hours after the event), it was thought that less than 1000 people had lost their lives. For the week after the earthquake, councillors and city officials were principally occupied with organising the delivery of water and food to the homeless. On 23 January the City Council established an Emergency Management Committee to address issues arising from the earthquake. Emergency Management Headquarters were established by city officials, and the individual councillors were encouraged to send suggestions regarding disaster management measures to this single office. The City Assembly gave priority to securing financial resources for restoration, with particular emphasis being placed on seeking assistance from the National Government. The Vice Chairman and other members of the Emergency Management Committee went to Tokyo nine times in the period February to April to meet the Prime Minister, other ministers concerned and members of the Upper House. The results of these appeals include national assistance schemes for restoring the Port and related industries, the underground railway and the private railway, and the allocation of state funds for the disposal of disaster waste. The following statement was included in a resolution of the City Assembly passed on 15 February 1995: "The people of Kobe recovered from war damage and the Hanshin Flood Disaster in the past. This is yet another occasion for the effort, perseverance and courage of the people of Kobe to prevail. This is the time of trial when an international disaster-proof city should rise out of the ashes under the banner of the people's city. The Kobe City Assembly pledges itself to make every effort to restore our city back to its full vigour, to the city of every citizen's dream". A Supplementary Budget was passed at the end of March, which modified the original 1995/96 budget. As a part of the measures contained in this supplementary budget, the remuneration allowance for City Councillors was cut by 10%. Another political consequence of the earthquake is that the election for city councillors, which was to be held in March, was deferred for three months. Housing is the most immediate challenge for the City, and the biggest. The City is planning for 82,000 new houses to be built in the next three years, both by the public and private sectors. South Hyogo Prefecture The South Hyogo Prefecture established its own Earthquake Emergency Management Headquarters on 19 January. 
We did not meet with prefectural officials, and so were not able to ascertain other specific actions undertaken and political inputs/ decisions made by the prefectural government with regard to the post-earthquake period. Grants of approximately NZ$1,500 were made by the Prefecture to those whose houses were partly or completely destroyed. National Government The National Emergency Management Headquarters was established in Kobe on 22 January. At a national level, the key decisions made included: • Allocation of NZ$1.5 billion out of the 1994/ 95 Budget Reserve for Disaster Restoration was made on 24 January. • Decisions on the full coverage of the demolition cost of destroyed houses and the provision of 60,000 temporary houses for victims by the end of March were made on January 28 and 29 respectively (ie. 11 and 12 days after the event). • A Government ordinance on the application of the "Temporary management of leased lands/ housing in a disaster-stricken city" law was enacted on 6 February. It is understood that this ordinance gave the City of Kobe the authority to place a hold on reconstruction over severely damaged parts of the city pending a full planning review. This review was a zero-based planning process which took into account the possible widening of streets to reduce fire hazard and introduction of parks for improved amenity. In order to provide enough housing, taller buildings than previously permitted were required. • An advisory body to the Prime Minister, "The Hanshin Awaji Restoration Council", was created on 15 February, with a one year term of operation. • Legislation for local tax cuts for victims of the earthquake was passed on 17 February (ie. one month after the earthquake). • On 28 February, NZ$2.2 billion for building temporary accommodation and NZ$0.5 billion for the disposal of disaster waste was made available. A law for special measures concerning employment creation in public work for disaster victims in the earthquake stricken area was passed, and land tax was halved for the Kobe area. • On 3 March, food expense allowances at emergency shelters was raised from NZ$13 to NZ$18. • On 24 March, a decision was made by the Ministry of Construction to aid repairs of private housing from public funding (no details available). It was clear that both the Kobe City Assembly and National Governments faced difficult decisions in the days and months following the earthquake. Given the unexpected scale of the disaster, the Kobe City Assembly appears to have moved rapidly to set up appropriate structures. While the National Govermnent was subject to some criticism for what was perceived as being a slow response, it appears that key mechanisms were made in a relatively timely fashion, given the nature of the central government processes. It is interesting to consider the reaction and response of local and central government in New Zealand if such an event were to strike "next week". The painful process that the Japanese authorities have been through provides a clear view of the fundamental planning steps that should be undertaken in New Zealand before a major earthquake strikes. Impact Assessment Assessing the imact of the disaster was the major problem encountered by all levels of government. A reasonable indication of the number of deaths took until the end of the second day to emerge. Major trouble spots were not easily identified. This contributed to delays in response actions, and a lag in requests for assistance being made. 
The Management of Foreign Aid City officials highlighted the practical difficulties associated with managing aid received from outside the region. Politicians and officials alike received criticism for rejecting some offers of foreign aid. Clearly there is a need for New Zealand to be prepared at the Central Government level so that the benefits of foreign aid is maximised. As an example, overseas search and rescue teams created a number of difficulties for local officials. They did not speak Japanese, and required food, water and accommodation at a time when these were at a premium for all and there were sub-zero temperatures. They were also not familiar with the types of domestic construction (given that houses were where most people that could potentially be saved were trapped in). The real need was for Japanese helpers. For physical assistance, the needs changed daily and at a rate faster than aid from overseas could respond to. On the first day, blankets were needed; on the second, water; and on the third, food. However, blankets arrived on the third day, water on the fourth and food on the fifth. This problem was added to by the difficulties due to transportation congestion in firstly unloading the material and getting it to a central warehouse when received, and secondly distributing it to the community centres. Planning For Reconstruction Planning for reconstruction covers a wide area of issues. Key areas where difficulties can arise that have been highlighted by the Kobe earthquake include: • Where to dispose of demolition material and associated debris -the need in New Zealand to pre-plan and have the appropriate approvals under the Resource Management Act • Planning controls for reconstruction in areas of widespread destruction -the imposition of a reconstruction moratorium in such areas is a logical reaction, but has created considerable anguish and dislocation • The approvals process for major repairs and reconstruction These are areas that can and should be addressed in New Zealand. Kobe City produced an outline plan for the reconstruction process in March, and compiled a more detailed version in June. This plan is understood to include target time frames, including inputs from each lifeline organisation, and budget issues. Social Issues The most prominent social issues directly resulting from the earthquake appeared to arise from the enforced relocation of people from damaged houses to temporary accommodation in other areas. This is commented on more in the Housing section. Alcoholism is also an area for concern. People who have lost their home and job are turning to alcohol in increasing numbers. Lessons For New Zealand • Fundamental steps in the recovery process can and should be planned now, including: -considering the process of applying the Resource Management Act in a post-earthquake context. -defining design standards and establishing streamlined approval processes for the reconstruction work. • The role of Government in the response and recovery phases requires urgent clarification. The interface between Government and private sector needs to be defined (e.g. what the Government will do and what it will not do); input from the private sector in terms of physical reconstruction should be maximised and planned for. • The importance of people being self-reliant for at least three days (in both home and work situations) needs to be continually reinforced. 
Employers (particularly lifelines or related organisations with a significant post-earthquake role) need to consider how they will meet this requirement. • The value of the Local Authority Protection Programme (LAPP) scheme and other similar schemes and Earthquake Commission cover. In the absence of household earthquake cover, a number of owners of destroyed houses with existing mortgages are in considerable financial difficulty. General To date only limited information has been made available regarding the economic implications of the Great Hanshin earthquake. While estimates of direct costs have been obtained from most of the lifelines organisations, figures which illustrate the effect on other organisations, companies and the community are not readily available, and are likely to remain in the private domain. Broader economic effects are dependent upon the time to achieve full recovery, with periods of years rather than months being involved. This section of the report is therefore limited in its scope, although it is hoped that additional information in this area will become available with time. In this section, estimates for the direct costs to lifelines organisations current at the end of August 1995 are collectively summarised, leading economic indicators which compare the years prior to the earthquake to the six months following are presented and general observations are made regarding funding mechanisms implemented following the earthquake. A case study of the effect of the earthquake on a major hotel is also included. Summary of Costs to Lifelines The estimates of rep.air costs to the various lifelines that were presented in the previous individual sections are summarised in Table 11.1. These figures should be regarded as approximations only, given the number of organisations involved in some of the utility classifications. For example, the value given for roads is understood to include only those roads operated by the Hanshin Public Expressway Corporation. In some cases it is also This table indicates that a total repair cost to lifelines organisations of the order of NZ$26 billion has resulted from this earthquake. In addition, the electricity and gas networks sustained NZ$125 million and NZ$280 million loss of revenue. Where identifiable, the direct damage expressed as an average percentage of the estimated replacement cost of the lifelines networks lies in the range 6 % to 9. 5 % . The damage ratio for the port is however likely to be considerably higher, and their damage cost estimate of NZ$10 billion clearly has a significant influence on the total. Recent work by Hopkins for the Wellington After the Quake conference (Hopkins, 1995a) indicated potential losses for lifelines in the Wellington metropolitan area in a Wellington Fault event to be approximately $1 billion, which reflects an overall damage ratio across all lifelines organisations of 1 O. 2 % . An explanation for this figure being higher than those sustained in Kobe is that a higher proportion of the total value of key lifelines assets in Wellington lie within the predicted peak MM IX and MMX isoseismals. Damage ratios from Kobe confirm post-earthquake observations that a significant proportion of lifelines assets can be expected to be undamaged. However this level of damage corresponds to a high level of disruption, and this is the aspect which determines the extent of economic loss sustained by the community at large. 
Key Economic Indicators Useful economic performance indicators have been obtained for Kobe City. These cover the both the pre-earthquake period and the post-earthquake months through until early and mid-1995. They are presented in Table 11 It is understood that most of these economic indicators continued to imp~ove ~n Jul~, August and September, particularly as the Port raised its available capacity. Key observations from these figures include: • Monthly sales volumes in department stores fell to almost zero in February, rising again to almost half of the base figure (non-holiday and non-Christmas seasons) by the third, fourth and fifth month after the earthquake ( Figure 11.1). • The number receiving the unemployment benefit doubled withi~ two months of the earthquake, and stayed at this level m the subsequent two months (Figure 11.2). • Exports fell to one-third of pre-earthquake levels before picking up slig~tly in May (Figure 11.3) as the port returned to ~alf its usual level of function. It is interesting to note that imports fell by a similar degree, and remained below exports. It can reasonably be conjectured that this situation would not exist for New Zealand where there is likely to be a greater dependency upon imports for many aspects of reconstruction. • The industrial productivity index rose 10% above preearthquake levels in March, presumably as a function of reconstruction activity. • In the five months following the earthquake, the Consumer Price Index has not risen above pre-earthquake levels. • The number of bankruptcies fell below the pre-earthquake base level. These trends have significant implications when viewed from a New Zealand context, noting some of the more favourable economic influences applying to Kobe and Japan. They provide a clear indication of the overall impact of an earthquake beyond the physical damage to people, buildings, bridges and services. Funding Mechanisms for Utility Organisations Typical arrangements involved the National Government meeting demolition costs and also paying up to 80% of repair/reinstatement costs. A defined period applies for this arrangement, depending on the utility organisation concerned (e.g. 1 year for Hanshin Public Expressway Corporation, 2 years for Kobe City Sewage Bureau and 3 years for Kobe City Waterworks). Strengthening or upgrading works that are now considered necessary after the earthquake are to be met by individual utilities. In a number of cases the separation or distinction between repair and strengthening work is difficult to define. Assistance for Private Organisations The Government agreed at an early stage to meet virtually all of the demolition costs incurred by residential, commercial and industrial owners of small or medium size buildings. This Monthly Trade Volumes For Kobe (Imports and Exports) proved necessary due to the low level of insurance cover in Japan, and provided a significant 'kick-start' to the recovery process. The issue of monitoring these costs is however ansmg as demolition proceeds and the total cost mounts, particularly with regard to commercial buildings and facilities. It is understood that payment for demolition for large buildings was based on unit rates, and some disputes are arising. Employment Insurance Scheme The immediate adverse effect of the earthquake in terms of unemployment and pressures on businesses led to the Government instituting an employment insurance scheme which will continue until the anniversary of the earthquake. 
While details of the scheme have not been obtained, the objective is to help private companies affected by the earthquake to keep staff that they would otherwise be forced to make redundant during the recovery phase. It is aimed at businesses that are likely to be able to continue, rather than propping up those that are likely to fail. A general observation was made that up to 20% of businesses (of all sizes) could fail as a direct consequence of the earthquake. There is speculation as to what will happen to unemployment figures following the expiry of this scheme in January 1996. The general view is that if the recovery of the economy as a whole continues at the current rate, the people that this scheme is designed to protect will be kept on in their jobs.

Miscellaneous Observations

A number of key industries were badly affected by the earthquake, including the Kobe Steel company which was forced to close for 4 months. Approximately 80% of small shoe-making enterprises in Kobe are also believed to have been affected. These were concentrated in Nagata Ward, which was badly hit by fire. A number of these were effectively cottage industries located in older houses, and the combination of earthquake damage and the presence of solvents is understood to have greatly exacerbated the effects of the fire in this area. One of a number of Japanese banks that collapsed in the third quarter of 1995 placed some of the blame for its situation on the Hanshin earthquake. By September 1995, tourism was down by 50% on the figures for previous years (on a monthly basis). This is reflected in average hotel occupancy rates of 30% rather than the customary 60% at that time of year.

Case Study: A Major Hotel

Information relating to the effect of the earthquake on a 770 room hotel was obtained, and is presented as an illustrative case study. The multi-storey building is situated on one of the man-made islands. It was designed in 1981, and sustained negligible structural damage. There were no injuries sustained in the hotel. The surrounding ground settled up to 600 mm around the piled structure. It was interesting to note that the outside walls of the building were detailed for some settlement by being tiled for approximately one metre below ground level. There was a reasonable level of damage to fixtures and finishes, particularly to the upper level guest rooms. Most rooms needed some repair, which proved to be a messy exercise taking four months to complete. Contents damage included the loss of china in the hotel restaurants, with more than 200 television sets requiring replacement. The immediate effect of the earthquake was a loss of power. Standby generators provided restricted power until mains power was restored at 10 am. Water and gas supplies were also cut, and a period of two weeks was taken to fix and restore these services. The hotel had big water tanks on the roof and at basement level for immediate drinking purposes until water trucks served the area. Bucketed water from swimming pools and fountains was used for toilet water in the days following the earthquake. The hotel lifts were back in action (i.e. inspected and reset) on the same day. Repair costs were of the order of NZ$8 million, which reflected a damage ratio of approximately 2%. While this is not a high level of damage, it is still a significant cost for a facility that effectively did not sustain any structural damage. The hotel was not insured.
The hotel was affected by the closure of the Port Island bridge to other than bicycles for 2 months. In the days and weeks after the earthquake, the hotel accommodated broadcasting company staff and gas company staff doing repairs to their networks. There was a peak occupancy level during this time of 300 (including staff who stayed 2-3 nights at the hotel, as they could not get easy access during this time). The hotel re-opened on a limited basis within one month. By August, room occupancy was at 40% of the usual level, with bookings at 60% to 70% of the usual level for September. Significant factors in the time for the recovery of this hotel were firstly the fall-off in conferences and tourists in Kobe in the months following the earthquake, and secondly the difficulty in access to the hotel compared to others in the centre of the city itself.

The Scope of Temporary Accommodation Required

The earthquake and subsequent fires led to the destruction of 60,000 to 70,000 houses in Kobe City. An estimated 40,000 to 50,000 houses were half destroyed. There was a maximum number of homeless of approximately 235,000 in Kobe City and 300,000 for the affected region as a whole. The city is planning to build 82,000 new houses (both public and private) in three years, noting that the private houses will be privately funded. A total of 235,443 people required temporary shelter immediately after the earthquake. It is understood that many of these people have subsequently moved in with relations in preference to temporary accommodation. Issues such as the enforced relocation of affected people into camps many kilometres from their homes have arisen. Approximately 40,000 temporary housing units have been constructed on 278 sites around Kobe. These sites are typically on reclaimed industrial land, with some parks also being used. These units were effectively provided by the City with funding from National Government. The average number of people per temporary unit is not known. It is understood that a further 3-4,000 people would like to be in temporary housing, but are not prepared to live in the locations offered for reasons of business, schooling or being part of migrant communities (e.g. Vietnamese). These people are either staying in tents; staying with relations; or prematurely back in damaged houses. By September 1995 it was estimated that approximately 1,000 people were still living in tents. In terms of overall progress, by the end of March, 30% of demolition and clean-up of houses was completed, whereas by the end of August, 80% of demolition and clean-up of houses had been completed.

The Nature of Temporary Accommodation Provided

A brief visit was made to two temporary housing camps on Port Island. One camp had 460 units, the other 800, each unit of approximate size 5.5 m x 3.0 m. They were of insulated panel construction, apparently of good standard, although it was not possible to view inside. Washing machines and air conditioning units were on the outside (refer Figures 12.1 and 12.2). The camps were provided with fully reticulated services, including new buried water and sewer mains. This level of deliberate preparation (i.e. not rushed immediately after the earthquake) was particularly impressive. The wide cross-section of society occupying these camps was illustrated by the range of vehicles parked outside - there were some high quality cars in amongst more basic models.
The City of Kobe is looking at creating more temporary housing units, particularly in central areas where the people would like such accommodation to be. However there is no more unused land or park space available in the inner city area.

Issues Associated with Establishing and Operating Temporary Accommodation

The City seriously considered a cash payout to homeless people in lieu of arranging temporary accommodation. This was because (i) it suited some people better, and (ii) it would be cheaper for the city in the long run (i.e. enable a maximum total payout ceiling to be established). However it was not legally possible to do this. The City also considered increasing the quality of temporary houses, but this would heighten the problems of people not wanting to leave, particularly as it is rent free. People have been given up to one year to occupy the temporary units, but fears are growing about the likely difficulty in getting some people to leave in a year's time. For the planning and costing of temporary housing, all associated costs need to be considered. These include:
- site establishment
- unit purchase, procurement and installation
- maintenance
- dis-establishment
There are concerns regarding the health of camp occupants, particularly through summer, as there are many elderly in the camps. The City is looking to establish a medical clinic at each camp in order to enable easy check-ups, and to keep a critical eye on general health issues. The City is encouraging the establishment of "camp councils" to report needs, etc. to council. It is interesting to note the difficulties that would occur in a similar situation in New Zealand, where people would not fit in well with a camp situation on even a short-term basis, due to aspects of societal behaviour being different from the Japanese.

Reconstruction Issues

The Kobe Housing Phoenix Centre opened in a new building in August to act as an information centre for those wanting to reconstruct houses from scratch. Some people however cannot reconstruct until planning issues in badly damaged and burned areas are resolved. Such issues include land surrender for wider roads, and the creation of park spaces. This centre has offices for representation from major Japanese and overseas housing construction companies to discuss design and construction issues. Information on zoning status throughout the city is also displayed. A model with recommended construction details, e.g. timber lining with ply lining and tie-down straps to foundations, is also on display.

Lessons for New Zealand

• The temporary and permanent re-housing of residents following a major earthquake requires considerable planning. All major metropolitan centres should establish plans for the procurement and establishment of temporary housing units, with particular consideration being given to the following points:
- types and source of housing units
- location, configuration and utility services requirements for camps
- length of occupancy to plan these camps for
• It is considered that this planning should typically be undertaken on a regional basis, with primary input from emergency managers and planners. Input from lifelines managers should also be sought, e.g. proximity of housing camps to principal water supply mains; the infrastructural demands of establishing these camps should be factored into utility services response plans. • Temporary housing camps should be planned to be as self-sufficient as possible, including the provision of medical services, etc.
• Re-location of people into camps some distance away from their homes is to a large extent unavoidable and can be expected to cause concern and resentment. • New Zealand housing stock is however expected to perform much better in terms of major damage than in Kobe. • The provision of temporary accommodation for out-of-town workers assisting with post-earthquake reconstruction also needs to be considered.

13.1 Taking Stock of the Disaster

Kobe's emergency managers were confronted with a number of problems they had not anticipated, and which hindered effective, timely impact assessment from taking place. This was partly because a large-scale earthquake disaster had not been contemplated by the authorities. Although recent efforts had been made to prepare Kobe for typhoons and flood impacts, planning did not prepare either citizens or government for a major earthquake. On top of this, Kobe's disaster plan was predicated on only part of the city being impacted, with the concomitant assumption that resources could be shifted internally. The numerous roadblocks of debris that obstructed emergency vehicles; the failure of the remote surveillance camera designed to provide areal views of the city; and overloaded telephone systems were apparently not completely factored into the city's contingencies. Once fire, police and medical officers were able to extricate themselves from their houses and get to the nearest station they could reach, they faced the fact that many of the buildings had also suffered damage. For example, six of the main police stations in Kobe had major structural damage. Three of the 11 fire stations in Kobe were damaged. Almost half the 1329 firefighters could not get to their stations within the first two hours, although 90% had reported in within five hours. Unlike some parts of eastern Japan where large earthquakes are expected and vital public buildings are reinforced, many of Kobe's critical facilities, including emergency services, were not. Buildings that housed the Kobe Municipal Government and the Hyogo Prefecture Government were similarly affected. Located in the centre of Kobe, both sustained structural damage as well as substantial office disruption (files turned over, furniture toppled, and so on). Electricity supply and back-up generators failed. About 40% of Kobe Municipal Government's 20,000 employees were directly affected by the earthquake, and only 20% of the workforce was available for the first several hours. Under the City's disaster plan, all employees are obliged to report to their workplace and activate disaster planning procedures. The fact that they were unable to do so seriously impeded the city's initial actions. Moreover, since little consideration had been given to prioritising activity, staff reported to normal places of work and awaited further instructions. Under the Disaster Countermeasures Basic Act 1961, city governments are required to establish a municipal headquarters for disaster countermeasures to execute emergency operations. The Mayor of Kobe, who also holds the position of Director, Kobe City Disaster Relief Headquarters, arrived at City Hall at 6.30am to find a skeleton staff attempting to initiate local coordination. The Vice-Governor of Hyogo Prefecture arrived at 6.45am. With the Mayor's presence, the City Government was able to set up its emergency operations by 7.00am. One of the first instructions the Mayor issued was to have an aide video what he could of the earthquake's damage.
An early decision to deploy helicopters for the initial impact assessment was confounded because the city's helipads were located on one of the artificial islands. On the way to the helipads, it was subsequently discovered that passage to the island was damaged, necessitating a two hour walk before the helicopters were accessed. The 1961 Act also obligates prefectural governments to establish a disaster headquarters to collate disaster information from impacted municipalities in its jurisdiction and forward an assessment to Central Government, via the Disaster Prevention Bureau in the National Land Agency (DPB-NLA). This was not an easy task. The emergency radio telephone network of Hyogo Prefecture did not function because the controller computer was damaged, and normal telephone lines were congested. So, messengers were sent on foot to police headquarters and the fire command to obtain information. However, these attempts at information-gathering were stymied because officers became immediately engaged in rescue activities and could not report back. As a result, neither of the two services had an overall picture. The first damage estimate the Hyogo Prefectural Police Headquarters issued, at 9.20am, almost 3 hours after the earthquake struck, was '8 dead, more than 189 buried alive, 33 missing and 203 houses destroyed'. At the national level, emergency response mechanisms start when an earthquake exceeding intensity V on the JMA (Japan Meteorological Agency) scale (i.e. MM VII) is reported. At 6.07am, the JMA informed DPB-NLA that a strong seismic event of intensity V had been observed at Kyoto. This was upgraded to intensity VI (9-10 MMI) at 6.21am, with the impact site re-located to Kobe. DPB-NLA called the National Police Agency (NPA) and the National Fire Defense Agency (NFA) for reports. However, little was passed on because local personnel had insufficient information themselves. At 7.00am the public broadcasting corporation, NHK, began showing footage of the affected areas. At 7.30am the DPB-NLA decided to convene a National Emergency Headquarters meeting. The Bureau continued to monitor the situation via television reports, and started to realise that the magnitude of the event was more serious than first thought. By 8.30am NPA alerted prefectural police headquarters of non-affected areas to be on stand-by; by 9.00am NFA did the same for fire commands. By 9.05am the DPB-NLA was able to make telephone contact with a senior official of the Hyogo Prefecture, and urged them to send an immediate request to the National Self-Defense Forces (SDF, the Japanese armed forces). This was transmitted by the Governor of Hyogo at 10.00am. All three levels of government have acknowledged that impact assessment was the major problem encountered. Not knowing what had happened, or where the major trouble spots were, became significant factors that resulted in consequential delays in response actions.

Initial Co-ordination at Kobe

It was not until about 10.00am, more than four hours after the main impact, that officials in Kobe had a reasonable appreciation about what had happened. Given the scale of the disaster, this is not surprising. Together with the fact that coordination efforts and response priorities had to be established, this meant that early efforts were largely unco-ordinated. Police officers (agents of the prefectural government) were side-tracked into assisting initial rescue needs rather than undertaking impact assessment and report-back.
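The national activation rule described above (a reported JMA intensity of V or greater starts the emergency response mechanisms) amounts to a simple threshold check. The sketch below is illustrative only: the JMA-to-Modified-Mercalli equivalences are the approximate ones quoted in this report, not an official conversion table, and the function and variable names are invented.

```python
# Illustrative sketch of the activation rule described in the text: national
# emergency response mechanisms start once a JMA intensity of V or greater is
# reported. The JMA-to-MM equivalences below are the approximate ones quoted in
# this report (JMA V ~ MM VII, JMA VI ~ MM IX-X), not an official conversion.

JMA_TO_MM = {5: "MM VII", 6: "MM IX-X"}   # approximate, per the report text
ACTIVATION_THRESHOLD = 5                  # JMA intensity V

def national_response_triggered(reported_jma_intensity: int) -> bool:
    return reported_jma_intensity >= ACTIVATION_THRESHOLD

# 6.07am report: intensity V at Kyoto; upgraded to VI at 6.21am and relocated to Kobe
for report in (5, 6):
    print(report, JMA_TO_MM.get(report, "?"), national_response_triggered(report))
```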
The Director of the USA-based Disaster Research Centre (University of Delaware), who was attending a conference in Osaka at the time of the earthquake and was able to get to Kobe soon after, observed that during the early hours of the earthquake there was no apparent coordination at the 'street level' of public safety and response personnel. Moreover, there was no apparent attempt to control access to or movement within high hazard or severely damaged areas. She also reported that there was no evidence of an official system in place for inspecting damaged buildings or regulating entry into dangerous structures. Mindful of the fire conflagrations following the Great Kanto earthquake of 1923, fire officers (a municipal government instrumentality that incorporates ambulance services) faced a similar dilemma as the police: should they give priority to firefighting or to search and rescue? Immediately after the main jolt, 30 fires broke out in the city. The need for search action was not great, because those who had escaped knew immediately where others were trapped. However, basic tools for rescue activity were in heavy demand. Many fire-fighters who set out to extinguish fires found themselves in the midst of such rescue activities. The failure of the water supply, and favourable wind conditions which prevented a rapid spread of fires, helped ease the fire-fighters' predicament. It is worth noting that only 4.4% of the reported deaths were fire victims.

The Formal Organised Response

As soon as reliable information was available, the organised response was able to commence. It has been estimated that at the height of the remedy phase, over 60,000 full-time professional responders and 30,000 volunteers were in the field. This number included 25,700 personnel from the Self Defense Force (SDF). As part of its disaster planning, Kobe had entered into mutual aid agreements with nearby cities. These were mainly for specific resources, such as fire-fighters. As it turned out, however, these agreements could not be fulfilled because the resources specified in these contracts were desperately required where they were. Nevertheless, the call for assistance did not go unheeded. One thousand fire-fighters were brought in from outside the city on the first day; 2,700 on Day 2; 3,400 on Day 3; and over 3,500 on Day 4. External assistance teams soon found a shortage of local resources. After shelter, the greatest demands were, in order, blankets, water, and food. When the blankets were distributed, and water/food provided, officials started calling upon the rest of the nation to provide clothing, toiletries, powdered milk, and the like. While the Prefectural Government keeps a supply of blankets, food and water was difficult to procure immediately. The City Government had also set up agreements with local commercial enterprises to provide basic items such as food, clothing, blankets, and the like. But many of these premises were severely affected by the earthquake and stock could not be accessed. One of the biggest problems for the authorities turned out to be transportation. Roads and rail were severely disrupted. Congestion within the city was acute and severely hampered emergency response operations. (A recent survey conducted in March by Kobe City officials into aspects of traffic behaviour during the earthquake indicated that people used private vehicles to evacuate as well as to check on friends and families, the latter because the telephone system was inoperable.)
The city tried to secure one major road for emergency access, but almost every route into the city had problems. Since road congestion had not been considered, it took almost a day to get signs printed that designated the priority route for emergency services. Under the Disaster Relief Act 1948, evacuation camps must be established within seven days following a major disaster. Temporary housing (generally prefabricated designs) must be established within 20 days. These requirements were impossible to achieve, given the extent of the damage (unlike typhoon or flood damage, which the Act's sponsors had in mind). The city's disaster pre-planning included the identification of 364 pre-designated evacuation shelters, based on location, availability of open space and safety factors. Access routes, primarily for the evacuation of people, had also been predetermined. As it turned out though, far more than this number of spontaneous shelters were created: at the height of demand, over 1100 shelters were established throughout the city. Even this number of shelters, however, did not prevent overcrowding, and sanitation became a problem in many. Many others camped in public parks or in make-shift shelters. A large number were taken in by relatives and friends, while others who had the means moved away from the impact area. Those requiring emergency shelter reached a peak of 235,443 on the evening of the earthquake. Problems with the amount and type of goods being donated were also reported. Newspaper articles reported a mismatch of supplies to the needs of the evacuees, citing a possible time-lag between asking for specific goods and the arrival of the donations. Large numbers of personnel, mainly volunteers, had to be deployed to deal with the vast amount of donated supplies. At times, the supply far exceeded the capability of personnel to distribute it. This cornucopia effect is a well-known and researched disaster relief problem. The experience in Kobe, however, is one of the few occasions that it has been documented in Japan. Nevertheless, some concern was reported in various shelters at the lack of clothing and shoes, but generally relief supplies appeared to reach those in need. Criticism was also levelled at local government for failing initially to provide emergency aid to the elderly, people with disabilities, and others who fall through the net of conventional disaster relief measures, which tended to assume a uniform population, and hence similarity in victim needs. The elderly living in shelters were particularly affected by illness due to living conditions and stress, with many unable to survive the conditions created by the disaster. Two weeks after the earthquake, reports of influenza and pneumonia, particularly among the elderly, were becoming common in some shelters. Fortuitously, while the earthquake occurred in mid-winter, when temperatures are very cold and the days are short, Japan's winter does not ordinarily coincide with high rainfall. Although the city government provided information in a number of languages about available services and assistance, information for foreigners living in the Kansai area was found to be insufficient. Representatives from nine ethnic media agencies formed a relief group for foreigners immediately after the earthquake. They provided information about receiving aid and services and provided moral support to minority groups in the devastated area. This information was distributed on air, via newsprint, and through the temporary shelters.
The Hyogo Prefectural Police opened a 24-hour counselling service which could be accessed either by phone or in person at police headquarters in Kobe. A special radio station (796 FM) was also established in Kobe to broadcast earthquake information to the public. This station was still operating two months after the event. Newspapers provided regular damage reports and provided a means of passing on any information useful for people affected by the earthquake. This included information on counselling, medical aid, accommodation, easy loans, business support, job information, and relief supplies.

Lessons for New Zealand Emergency Management

The Great Hanshin earthquake provides a number of significant and valuable response insights for the emergency management community, both for Wellington as well as for New Zealand as a whole. Perhaps the key lesson for New Zealand is that disaster response is only as good as the effort and insights that go into pre-impact planning and preparedness. Other key implications are highlighted below. • Hazard analysis and risk assessment is an essential component of emergency response planning. The Kobe earthquake occurred in an area assumed to be free from major damaging tremors. • Hazard mapping needs to include critical resource siting. Many of Kobe's key emergency response facilities, as well as many critical lifelines, were located in high-risk areas. • Response planning must be based on realistic assumptions on what is likely to occur. Kobe's disaster planning was predicated on the assumption that only part of the city would be affected at any one time. • Impact assessment is a critical factor in emergency response that has to take priority. Kobe underscores the need for a reliable impact assessment procedure. Because this was not built into their impact response programme, the allocation and direction of resources was confused and delayed. • Impact consequence analysis should be a part of routine emergency management preparedness. In Kobe, emergency managers had little idea of areas likely to be damaged. Similarly, there was no assessment of likely downstream economic costs from damaging quakes. • Disaster planning must be proactive and all-inclusive. In order to reduce uncertainty, the 'all-hazards' Comprehensive Emergency Management (CEM) model is a useful framework. Kobe's disaster plan was built around its 'pet hazards' of floods and typhoons. These agents have very different impact profiles from the earthquake that struck in January 1995. Many of the post-impact difficulties the Kobe emergency management system experienced can be explained because Kobe appeared to have neither an all-hazards perspective nor a comprehensive approach. • Ensuring that key emergency responders have a coordinated response plan is essential. The Integrated Emergency Management System (IEMS) is a practical and worthwhile approach to assist inter-agency pursuits during times of disaster. In Kobe, both horizontal and vertical integration was a problem. • Outmoded legislation can hamper effective disaster response. It is important to review statutory requirements on a regular basis. All three key pieces of natural disaster legislation in Japan were outdated (Disaster Relief Act 1948; Disaster Countermeasures Basic Law Act 1961; Large-Scale Earthquake Countermeasures Act 1978). Similarly, introducing disaster-relevant legislation immediately following impact (Disaster Recovery Act 1995) does not produce good legislation.
• Disaster recovery planning must be seen as an integral part of preparedness and response planning. It is too late to start the process of recovery after an event has taken place. • The need for Emergency Operations Centre (EOC) personnel to be selected and prepared, through regular training, for specific tasks was also underscored in the Kobe disaster. The earthquake raised questions such as how EOC personnel are to be alerted and how to ensure they can get to the EOC. • Similarly, the Kobe experience raises issues such as securing communications systems between components of the wider emergency management system. Two other points are worth raising in conclusion. First, the Japanese emergency management system is more reliant on technical resources than it is on organisational development. Hence, problems that arise have a tendency to be 'solved' by developing or introducing new technologies, rather than assessing the intra- or inter-organisational structures, systems or processes. The problem with this orientation is that, irrespective of the type, level, and quantity of technical and physical resources that are available, without a good management system they are unlikely to be put to their optimal use. Effective disaster management is, above all, good management applied to a specific circumstance. An allied point is that, overall, Japan has sufficient rescue and medical personnel to respond to emergencies, as well as abundant supplies of consumer goods in the country. As such, the emergency management system is not reliant on in-kind assistance from external sources. However, almost all the international media and many international rescue groups fail to understand or appreciate this reality. Both these groups incorrectly believe that any disaster provokes internal mayhem to the point that outside assistance is imperative. As the Great Hanshin earthquake revealed, failure to 'comply' with disaster myths can produce widespread criticism, and force governments to accept assistance even when it creates more problems than it solves. While New Zealand does not enjoy the same level of resource availability as Japan does, it will nevertheless face a similar dilemma in being confronted with inappropriate, even unrealistic, offers of help. Even following a major urban earthquake disaster, it will not be a nation dependent on external aid, although many elements of the international media and rescue groups will think otherwise. The emergency management system, especially at central government level, needs to think seriously about how it is likely to respond to the cornucopia effect that will be part of the post-impact environment.

The Main Effects of the Earthquake on Kobe

The main effects of the earthquake can be summarised as follows: • More than 5,500 people were killed and 26,800 seriously injured. Approximately 300,000 people were made homeless. • Vital transportation routes and the port facilities were significantly disrupted. • Key utility services sustained considerable damage.
• The earthquake resulted in an estimated overall direct cost for repairing lifelines networks and facilities of approximately NZ$26 billion. • Buildings and facilities designed and constructed prior to the first issue of the current design standards in 1981 sustained the greatest damage, but well designed and constructed structures performed well.

The Immediate Response of Lifelines Organisations

While there was a considerable amount of confusion on the day of the earthquake, overall the immediate response was very effective. As a leading example, the Hanshin Expressway Public Corporation closed the complete network down within three minutes of the earthquake. Impact assessment was the major problem encountered. Not knowing what had happened, or where the major trouble spots were, became significant factors that resulted in consequential delays in response actions. The rapid mobilisation of key equipment items such as truck-mounted electricity generators (56) and water trucks (420) only serves to underscore a major vulnerability in New Zealand. Co-ordination problems were encountered with some fires resulting from the restoration of power supply in areas where there was a build-up of gas present.

The Recovery and Restoration Process

The achievements since the earthquake are extremely impressive. In addition to the rapid restoration times outlined in this report for most utility services (Figure 1.4), these include:
- the rail system back in operation by June 1995
- the main expressway through Kobe will be partially reopened in March 1996
- by the end of August 1995, the port was operating at 63% of the pre-earthquake activity level, and retail activity was 50% of that prior to the earthquake
This level of response and recovery has in the main been due to strong regional and national support from other utilities, and the availability of a massive contracting workforce and material supply. There is however an incredible amount of work that still has to be done. Large scale reconstruction will continue for several years. Having reinstated the basic network service in the case of utilities or carried out temporary repairs for the bridges, permanent repairs are being undertaken. Added to this, however, is the need to plan and implement the upgrading of the systems to current standards in the light of the earthquake experience, and in many cases this is being undertaken concurrently. It is clear that the Kobe City Assembly, South Hyogo Prefecture and the National Government all faced difficult decisions in the days and months following the earthquake. Each of these levels of government appears to have moved rapidly, given the unexpected scale of the disaster, to set up appropriate structures and implement funding mechanisms. Key economic indicators for the City of Kobe for the months following the earthquake send clear warning signals for New Zealand in general, and Wellington in particular.

Concluding Observations

An over-riding impression from the visits by members of the New Zealand lifelines groups is the many parallels between Kobe and Wellington (e.g. geography (hills, limited flat land), geology and seismicity (ground liable to liquefaction, presence of faultlines), port facilities, etc.). These parallels extend to the expected nature and range of damage in the Wellington Fault scenario as currently portrayed.
For Christchurch and other areas in New Zealand, the parallel with Kobe is that an earthquake of this scale was not anticipated, despite the national context of high seismicity and the presence of liquefaction-prone geological formations. It is almost certain that the Kobe Earthquake will provide the strongest parallels for New Zealand of any earthquake this century. Continuing study of the impacts of this earthquake will have significant potential benefits for New Zealand organisations involved in earthquake risk reduction, and lifelines in particular. Additional valuable information from this event will continue to become available in English from Japan and elsewhere. The degree of preparedness and mitigation measures undertaken by the Japanese was very impressive. This was partly due to their preparedness in relation to frequent typhoons. In the absence of a significant earthquake in a major city in New Zealand in recent years, the same level of preparedness cannot be assumed. Impact assessment was the major problem encountered. Not knowing what had happened, or where the major trouble spots were, became significant factors that resulted in consequential delays in response actions. Of particular significance to New Zealand is that there will not be the large pool of skills and resources available as in Japan. This means that it is even more important that mitigation, preparedness and response planning in New Zealand is to the highest standard in order to minimise disruption and enable the quickest possible recovery after an earthquake. There is a need for New Zealand cities to ensure that key structures within the transportation network are robust enough to survive a major earthquake event, to be able to quickly control traffic after the event, and to be able to restore traffic capacity. The experience gained from the earthquake will be of considerable benefit to mitigation, response and recovery planning in New Zealand, particularly in Wellington which has direct geological and topographical parallels with Kobe.

Key Lessons for New Zealand

• The fundamental importance of developing response plans, and the holding of annual exercises to familiarise people with the plans and their roles. This earthquake has again confirmed that lifelines organisations cannot anticipate a staff turnout of more than 50% on the day of the earthquake, and that this must be planned for. • Fundamental steps in the recovery process can and should be planned now, including:
- considering the process of applying the Resource Management Act in a post-earthquake context
- defining design standards and establishing streamlined approval processes for the reconstruction work
• The painful process that the Japanese authorities have been through provides a clear view of the essential planning steps that should be undertaken at a political level in New Zealand before a major earthquake strikes. • The role of Government in the response and recovery phases requires urgent clarification. The interface between Government and the private sector needs to be defined (e.g. what the Government will do and what it will not do); input from the private sector in terms of physical reconstruction should be maximised. The issue of clarifying responsibilities assumes greater importance in view of the recent splitting up of the assets of national and local government agencies. • The criticality of transportation and access in the response and recovery phases.
The ability to control and prioritise traffic in the days following an earthquake has a major impact on the response of utility operators. The unavailability of key road and rail routes has been the single biggest hindrance to the recovery process in Kobe. • The importance of people being self-reliant for at least three days (in both home and work situations) needs to be continually reinforced. Employers (particularly lifelines or related organisations with a significant post-earthquake role) need to consider how they will meet this requirement. • Networks, facilities and buildings constructed to modern standards can generally be expected to perform well. • There is a need to urgently address the seismically vulnerable buildings and bridges designed and constructed prior to the modern design standards of 1976. Such a move will require regulatory backing, which in turn will necessitate legislative change. • Physical mitigation works undertaken by lifelines organisations have proven worthwhile in major earthquakes. • Lifelines in areas prone to liquefaction and in soft, poorly compacted soils are highly vulnerable to earthquake damage. • The value of the Local Authority Protection Programme (LAPP) scheme and Earthquake Commission cover in providing a base level of financial support for those eligible. In the absence of household earthquake cover, a number of owners of destroyed houses with existing mortgages are in considerable financial difficulty.

General Recommendations

• The direction and emphases of the Wellington and Christchurch Lifelines Groups in terms of co-ordinating hazard information and mitigation measures, and facilitating response planning, is appropriate and should be strongly continued. The restoration of utility services in the days after an earthquake will have a major impact on the morale of a community, and planning to minimise this timeframe is a key objective. • Central and Local Government agencies must develop plans which outline their response to a major earthquake. These plans should include provision for compiling a restoration strategy, including the scope of such a plan, at an early stage following the event. There is full support for the Disaster Recovery Review and its work in this area. • There is support for the initiatives of the Building Industry Authority and the New Zealand National Society for Earthquake Engineering in addressing seismically vulnerable buildings designed and constructed prior to the modern design standards of 1976, and encouragement for this work to proceed with urgency. • Lifelines organisations must place emphasis on ensuring the robustness of critical structures such as control facilities and record storage buildings. • GIS systems and related methodologies for the analysis of lifelines systems need to be developed and used on a more widespread basis in New Zealand.

Specific Recommendations

• Councils in conjunction with Transit New Zealand should develop plans for traffic control immediately after earthquakes. Ensuring appropriately prioritised access to affected areas is extremely important. • All response plans must include provisions for obtaining resources and materials. Such provisions should be in addition to mutual aid agreements, and must take into account the concurrent demands of other utilities.
• All utilities should review their post-earthquake communication arrangements, and consider backup mechanisms in the event of failure of the frontline system. • Utility organisations should make an annual commitment to response planning in terms of specific prior time allocation. It appears that, despite best intentions, for many organisations this work is not being carried out due to the high level of everyday workloads resulting from reduced staff structures. There should be an associated commitment to participating in annual exercises, to be organised on a regional basis, to evaluate the effectiveness of response plans. • A national schedule of key resources such as truck-mounted generators and temporary housing units should be compiled.
Comparing the responses of grain fed feedlot cattle under moderate heat load and during subsequent recovery with those of feed restricted thermoneutral counterparts: metabolic hormones

We set out to determine the impact of moderate heat load on the plasma concentrations of a suite of hormones involved in regulating energy metabolism and feed intake. The responses of the thermally challenged (TC) feedlot steers were compared to those of feed restricted thermoneutral (FRTN) steers. Two sequential cohorts of twelve 518 ± 23 kg Black Angus steers on finisher grain ration were housed in climate-controlled rooms (CCR) for 18 days and returned to outdoor pens for 40 days. The TC group was subjected to a diurnal range of 28-35 °C for 7 days (Challenge) but held in thermoneutral conditions beforehand (PreChallenge), and in Recovery (after Challenge). The FRTN group was held in thermoneutral conditions and feed restricted throughout. Blood was collected over the three periods in CCR and two periods in outdoor pens over 40 days (PENS and Late PENS). Plasma concentrations of prolactin, thyroid stimulating hormone, insulin, leptin, adiponectin and thyroxine (T4) were determined during the five periods. Whilst the pituitary hormones were relatively stable, there were differences in plasma leptin, adiponectin and T4 between the two groups during Challenge and Recovery, and occasionally in PENS. The interaction of the plasma hormone concentrations and rumen temperature and DMI was also investigated. Whilst the positive relationship between DMI and leptin was confirmed, we found a strong negative relationship between adiponectin and rumen temperature, and a strong positive relationship between adiponectin and dry matter intake (DMI) in the TC steers only.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00484-023-02464-w.

Introduction

As global food production adapts to a warming climate, there is great imperative to understand the impacts of increased heat load in high value production animals. Decades of research on the impacts of heat stress have documented its detrimental consequences on animal health and production (Silanikove 2000; Collier et al. 2006, 2017; Mayorga et al. 2019). Reduced feed intake, increased water loss via respiration and sweating, and increased water intake are direct responses to increased heat load (reviewed in Blackshaw and Blackshaw 1994; Lees et al. 2019). Return to full feed intake, production and growth can be protracted after acute heat stress (Beatty et al. 2006; Sullivan et al. 2022). The responses to heat stress in ruminants are influenced by many animal, environmental and management factors. Moreover, earlier experiments in climate controlled facilities were hampered by the inability to control humidity and deliver diurnal temperature cycles. A major confounder in understanding the responses to increased heat load is the voluntary reduction of feed intake, as an immediate response to increased core temperature. Furthermore, there is a paucity of studies in the growing beef animal (Collier et al. 2009). Given these knowledge gaps, we conducted a series of experiments in climate-controlled rooms with grain fed Black Angus steers to investigate a wide range of responses in feedlot cattle exposed to and recovering from the impost of increased heat load. In this paper, we focus on metabolic hormones associated with energy metabolism and feed intake during and after a 7-day exposure to diurnally cycled moderate heat load.
The hormones in question were insulin, prolactin, adiponectin and leptin, as well as TSH and thyroxine. Firstly, we tested the hypothesis that the hormone trajectories of moderately heat stressed feedlot steers will differ from those obtained from feed restricted thermoneutral counterparts (FRTN). We assayed the plasma hormone concentrations of Black Angus steers subjected to moderate heat load as well as those of the FRTN group. Both treatments were followed through recovery in thermoneutral conditions and then in outdoor feedlot pens for a further 40 days. In the context of Australian summer feedlot animals, moderate heat loads occur with daily maximum temperatures in the mid-30 °C range (daily maximum THI in the mid-80s range; Danger category). Our interest was in the overall trajectories of the treatment groups through challenge, recovery and feedlot finishing, rather than acute homeostatic responses to heat stress. Thus, our secondary hypothesis was that the plasma hormone trajectories of both treatment groups will show homeorhetic behaviours through challenge (be it heat load or feed restriction) and recovery. This phenomenon was observed for physiological measures of moderate heat stress in these same animals. A detailed description of performance and the physiological responses of grain fed Black Angus steers to moderate heat load compared to FRTN animals during the three periods in the CCR is presented in Sullivan et al. (2022). A description of the metabolic responses across all periods, as interpreted from clinical biochemistry markers, is also available. In this report we followed the trajectories of a suite of plasma hormones in these same animals through challenge, recovery and final feedlot finishing. Furthermore, we examined the relationships between the hormone concentrations and rumen temperature or dry matter intake (DMI) and found altered relationships upon imposing heat load.

Animal experiments

In outline, two cohorts of 12 grain fed steers with live weights of 518 ± 23 kg were subjected to five sequential periods over 60 days (Fig. 1). For the first 18 days, the animals were housed in the climate controlled rooms (CCR) and allocated to two treatment groups (n = 6), feed restricted thermoneutral (FRTN) and thermally challenged (TC). Whilst in the CCR, they were subjected to three periods, PreChallenge, Challenge and Recovery. The FRTN group remained in thermoneutral conditions throughout the 18 days and experienced near constant air temperature (20.3 °C), percent relative humidity (%RH, 71.5%) and Temperature-Humidity Index (THI, 69). During the 4 days of PreChallenge, the TC group experienced mean air temperature and %RH of 22.7 °C and 64%, giving a mean THI of 70. A highly detailed methodology describing the animal treatments whilst in the CCR is given by Sullivan et al. (2022), and a summarised version with information on the treatment during outdoor feedlot finishing is presented in Wijffels et al. (2022). The 7-day Challenge period for the TC group was delivered with diurnal cycling: the respective mean daily maximum air temperature and %RH were 34.5 °C and 58%, and the respective mean daily minimum air temperature and %RH were 27.9 °C and 36.4%. The respective mean daily maximum and minimum THI were 82.5 and 73.6 (Fig. 1). The diurnal cycles of air temperature and %RH imposed during Challenge were guided by an analysis of meteorological data for a locality that hosts a number of feedlots.
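The THI values quoted above combine air temperature and relative humidity into a single index. The exact formulation used in this study is not stated in this excerpt; the sketch below uses one commonly cited livestock THI formulation, which, with the Challenge daily-minimum conditions, reproduces the quoted minimum THI of approximately 73.6.

```python
# One commonly used livestock temperature-humidity index (THI) formulation.
# Whether this exact form was used in the study is an assumption of this sketch.
def thi(air_temp_c: float, rel_humidity_pct: float) -> float:
    return 0.8 * air_temp_c + (rel_humidity_pct / 100.0) * (air_temp_c - 14.4) + 46.4

# Mean daily minimum Challenge conditions: 27.9 degC at 36.4% RH
print(round(thi(27.9, 36.4), 1))  # ~73.6, consistent with the minimum THI quoted above
```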
Hourly air temperature and %RH data of the two preceding years were obtained for weather station 041522 (Dalby airport, latitude −27.1605, longitude 151.2633) from the Australian Bureau of Meteorology. Several moderate to severe heatwaves were identified, and the hourly data of each day of the heat waves were averaged to model diurnal air temperature and %RH cycles (unpublished data). The air temperature cycle revealed that morning air temperatures of 25-27 °C at 0700-0800 h rose rapidly to 32-34 °C by 1000 h. Over the same time interval, the %RH decreased from 56% RH to 32% RH. The afternoon air temperatures fell from the daytime plateau, from approximately 1600 h, to the nighttime minimum, which occurred at variable times. In the CCR during Challenge, these conditions were accelerated so that the morning rise and plateau of daytime maximum air temperature, and conversely the fall and daytime plateau in %RH, were achieved between 0750 and 0920 h. In the afternoon, the reduction in air temperature from the daytime maximum plateau to the overnight minimum temperature occurred over 1550-1750 h. %RH increased over this interval, returning to the overnight %RH of approximately 58%. The conditions for the third and last period in the CCR, the 6-day Recovery, were similar to the PreChallenge period, with air temperature held at 22.7 °C whilst %RH ranged over 37.6 to 58.8% (mean THI of 70.3). The steers returned to outdoor feedlot pens for the remaining 40 days, where they experienced late winter to mid-spring conditions (mean minimum and maximum air temperatures (± SD) of 12.9 ± 3.1 and 27.8 ± 4.1 °C respectively; mean minimum and maximum THI (± SD): 55.4 ± 4.8 and 73.2 ± 3.9; Fig. 1). All animals were offered a grain-based feedlot finisher diet throughout the 60-day experiment. The finisher diet provided 13.1 MJ/kg metabolizable energy, 2.9% crude fat and 15.4% crude protein, and was augmented with 20 mg/kg sodium monensin. The TC group were fed ad lib, whereas the FRTN group were feed restricted during PreChallenge, Challenge and Recovery, based on the feed intake and feed offered to a live weight matched pair in the TC group. The feed offered regime imposed on both treatment groups in the current experiment saw each live weight matched pair offered the same amount of feed on the same day. The amount of feed offered was based on the feed consumed by the TC steer the previous day and an additional 20% (by weight). Simply put, if the TC steer consumed 10 kg on day 8, both the TC and TN animals of the pairing were offered 10 kg feed on day 9. This method differs from the pair fed approach applied to many animal challenge experiments. When pair feeding is imposed in thermal challenge experiments, the TN animal is fed the exact amount consumed by the TC counterpart on the previous day (O'Brien et al. 2010; Wheelock et al. 2010). The rationale for devising and implementing the pair offered regime was to avoid large sudden reductions in feed intake in the TN animals, which can cause anxiety, stress and impairment of rumen function (Schwartzkopf-Genswein et al. 2003).
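A minimal sketch of the pair-offered rule described above: each day, both members of a live-weight-matched TC/FRTN pair are offered the amount the TC steer consumed the previous day plus the stated 20% margin. The function and variable names are illustrative, not from the study.

```python
# Sketch of the "pair offered" feeding rule described in the text: feed offered
# today to BOTH members of a live-weight-matched pair equals the TC steer's
# previous-day intake plus a 20% margin (by weight). Names are illustrative.
def feed_offered_today(tc_intake_yesterday_kg: float, margin: float = 0.20) -> float:
    offered_kg = tc_intake_yesterday_kg * (1.0 + margin)
    return offered_kg  # the same amount is offered to the TC and the FRTN steer

print(feed_offered_today(10.0))  # 12.0 kg offered to each animal of the pair
```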
Fig. 1 The climatic regime imposed on the Thermally Challenged (TC) group. The range of the daily ambient temperature and Temperature-Humidity Index (THI) and duration of each period is depicted for the 60 days of the experiment. The PreChallenge (PreCh), Challenge and Recovery periods were conducted in climate controlled rooms (CCR). The diurnal temperature and THI cycle experienced by the TC group is plotted for the 18 days in the CCR. The PENS and Late PENS periods occurred in outdoor feedlot pens; the climatic conditions during these periods for each of the two consecutive cohorts (cohorts 2 and 3) are presented in the inserted tables. Days of blood sampling are indicated also. Cohort 1 did not proceed through all periods, and the data from these animals was not included in the analyses for this paper. (Courtesy of Wijffels et al. 2022)

Hormone assays

Concentrations of plasma prolactin, TSH, adiponectin and leptin were determined using sandwich enzyme-linked immunosorbent assays (ELISA). Assay plates (clear 384 well, Perkin Elmer, MA) were coated overnight with the relevant capture antibody in a sodium carbonate coating buffer (0.1 M Na2CO3, pH 9.6) at 4 °C. Recombinant protein standards were serially diluted to appropriate concentrations in either a pooled clarified calf serum (leptin and TSH) or commercial horse herd serum (prolactin and adiponectin). Plasma samples and standards were then diluted further as required in Tris-buffered saline containing Tween 20 (TBST: 50 mM Tris, 150 mM NaCl, pH 7.6 and 0.1% Tween 20). EDTA treated plasma was used in all assays. All reagents were delivered using an EpMotion 5075 liquid handling robot. The concentrations of capture and detection antibodies are given in Suppl. Table 1. Plates were manually washed 4 times in TBST between each step. The coated plates were blocked for 30 min in 2% skim milk powder in TBST. The samples (in duplicate) and standards (in quadruplicate) were added, and the plates were incubated for 1 h at room temperature (RT). Detection antibodies (in TBST) were added to all wells and incubated for 30 min (RT). An amplification step (30 min, RT) was included for the prolactin ELISA using Streptavidin/HRP (Pierce, 1:20,000), and for the adiponectin and leptin ELISAs, anti-chicken/HRP (KPL; 1:4000) and anti-rabbit/HRP (KPL; 1:2000) antibodies were utilised respectively. Results were visualised using a 3,3′,5,5′-tetramethylbenzidine (TMB) solution (Biorad Core+) and the reaction stopped with 2% sulphuric acid. The colour development was measured at 450 nm on a Spectramax M3 plate reader (Bio-Strategy). The performance of each assay is given in Suppl. Table 2. Plasma thyroxine (T4) levels were measured using a competitive binding ELISA. Clear 384 well plates were coated with a monoclonal antibody to thyroxine (Suppl. Table 1) overnight in carbonate coating buffer (4 °C). After washing, plates were blocked as described above. T4-BSA (SQX-CBS-8168, Squarix, Germany) was prepared in a 200 µM solution of 8-anilino-1-naphthalene sulfonic acid (ANSA) in BSA/TBST. Plasma samples (in triplicate) were diluted 1:10 in a 200 nM ANSA/DMSO solution in TBST. T4-HRP (SQRX-T4HRP.1, Squarix) was used as a competitor at 5 nM in TBST. Diluted samples (in triplicate) and standards (in quadruplicate) were added to the assay plates along with the T4-HRP and incubated (30-40 min, RT). Reagents were delivered by robot, and the plates washed as described above. Reactions were visualised with TMB, stopped with sulphuric acid and read at 450 nm. Plasma insulin concentrations were determined using a radio-immunoassay kit (TKIN2, Coat-a-Count Insulin, Siemens Healthcare Diagnostics, CA) according to the manufacturer's protocol. The results are expressed as mIU/mL in reference to the WHO human insulin standard (IRP 66/304).
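Concentrations in plate assays of this kind are typically interpolated from the standard curve. The sketch below shows a hypothetical four-parameter logistic (4PL) fit and back-calculation; the standard concentrations and absorbances are invented for illustration and are not data from these assays, and the use of a 4PL model is itself an assumption about how the curves were fitted.

```python
# Hypothetical sketch: interpolating sample concentrations from an ELISA standard
# curve with a four-parameter logistic (4PL) model. All numbers are invented for
# illustration; they are not data from the assays described above.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # Increasing sigmoid: signal rises from 'bottom' towards 'top' with concentration
    return bottom + (top - bottom) * x**hill / (ec50**hill + x**hill)

std_conc = np.array([0.31, 0.63, 1.25, 2.5, 5.0, 10.0, 20.0])   # ng/mL (invented)
std_od   = np.array([0.08, 0.14, 0.25, 0.45, 0.78, 1.20, 1.65]) # A450 (invented)

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.0, 5.0, 1.0],
                      bounds=([0.0, 0.5, 0.01, 0.1], [0.5, 5.0, 100.0, 5.0]))

def od_to_conc(od, bottom, top, ec50, hill):
    # Invert the 4PL to recover concentration from a sample absorbance
    frac = (od - bottom) / (top - bottom)
    return ec50 * (frac / (1.0 - frac)) ** (1.0 / hill)

print(round(float(od_to_conc(0.60, *params)), 2))  # ng/mL for a sample OD of 0.60
```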
Statistical analysis of plasma hormone concentrations and linear regression

Seven hormone concentration variables were analysed: adiponectin, log2(insulin), log2(insulin):glucose, leptin, log2(prolactin), T4 and log10(TSH). The statistical analyses followed the same method as described by Wijffels et al. (2022), i.e. applying the PROC MIXED model with the REML estimation method within the SAS program (version 9.4, SAS Institute Inc., Cary, NC, 2002). Note that, as all animals underwent the same treatments until day 5, the PreChallenge mean and SEM for each variable were determined from the values collected on all animals on day 3. Simple linear regression was conducted in Prism 9.0 (GraphPad Software, San Diego, CA) to discover and describe relationships between the daily mean hormone concentration and the corresponding daily means of rumen temperature or daily mean DMI over the 18 days in the CCR. p values less than 0.05 were considered significant. A trend towards significance was noted over the p value range 0.05 ≤ p ≤ 0.08.

Pituitary hormones - TSH and prolactin

The daily mean plasma TSH and prolactin concentrations showed high individual animal variability and were log transformed to facilitate comparison between the two treatment groups (Fig. 2). The plasma TSH(log10) trajectories of both treatment groups were in close resemblance, and mean concentrations were not different to each other at any period (Fig. 2A, B). Similarly, for plasma prolactin, whilst there was an effect of period (p = 0.033), there was no discernible difference between the groups at any time (Fig. 2C, D).

Metabolic hormones

Plasma insulin concentrations were highly variable in both groups during all periods. With log2 transformation, major effects of treatment (p = 0.009) and period (p < 0.0001) were identified. The mean plasma insulin concentrations in TC animals were significantly higher (~9%) than FRTN during Challenge and Recovery (Fig. 3A, B). In PENS and Late PENS, plasma insulin concentrations increased for both groups. The log2(insulin):glucose ratio also showed treatment (p = 0.047) and period (p < 0.0001) effects. The TC mean was higher than the FRTN mean during Challenge only (Fig. 3C, D). In Late PENS, the mean log2(insulin):glucose ratios were both increased relative to the means of all preceding periods. Changes in mean plasma leptin concentrations were comparable in both TC and FRTN groups (Fig. 4A, B). There were major effects of treatment (p = 0.0086) and period (p < 0.0001). For the TC group during Challenge through to PENS, the mean leptin concentrations were higher or tended to be higher than the FRTN means, although ~16-17% reduced relative to the PreChallenge mean. Whilst the leptin levels were stable in both groups in Challenge and Recovery, the mean leptin concentrations fell a further 25% for both groups in PENS. In Late PENS, leptin levels in both groups were close to the PreChallenge mean (Fig. 4A). There were significant effects of treatment (p < 0.0001) and period (p < 0.0001) on plasma adiponectin concentration. The TC means were persistently lower than the FRTN means during Challenge, Recovery and PENS (Fig. 4C, D). Relative to the PreChallenge mean, the TC mean fell approximately 18% during Challenge. There was no change in the FRTN mean. Both groups achieved the highest adiponectin concentrations during Recovery; in PENS and Late PENS, the adiponectin concentrations stabilised at 21-27% below the PreChallenge mean.
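The simple linear regressions described in the statistical analysis section above (daily mean hormone concentration against daily mean rumen temperature or DMI), whose results follow in the next section, were run in Prism 9.0. An equivalent least-squares sketch in Python is shown below with placeholder arrays rather than study data.

```python
# Minimal sketch of the simple linear regression used for the hormone-versus-rumen-
# temperature relationships (the authors used Prism 9.0; this is an equivalent
# least-squares fit). The data arrays are placeholders, not study values.
import numpy as np
from scipy import stats

daily_mean_rumen_temp  = np.array([39.3, 39.6, 39.8, 40.1, 40.4, 40.6])  # degC (placeholder)
daily_mean_adiponectin = np.array([82.0, 78.5, 75.0, 71.0, 67.5, 64.0])  # ug/mL (placeholder)

fit = stats.linregress(daily_mean_rumen_temp, daily_mean_adiponectin)
print(f"slope = {fit.slope:.1f} ug/mL per degC, r = {fit.rvalue:.3f}, p = {fit.pvalue:.4f}")
# p < 0.05 is treated as significant; 0.05 <= p <= 0.08 as a trend, as stated above
```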
For plasma T4 concentrations there were major effects of treatment (p = 0.011) and period (p < 0.0001). The TC group was distinctive in the absence of a rise in plasma T4 during Challenge, being 11% lower than the FRTN mean. Otherwise, the two treatment groups behaved similarly, with the highest concentrations of plasma T4 occurring during Recovery (Fig. 4E, F), followed by a fall in PENS to a level equivalent to the PreChallenge mean. There was a rise in Late PENS.

Fig. 2 Plasma concentrations of TSH and prolactin over the five periods. The PreChallenge (PreCh) mean is denoted by X. Panels A and C: comparison of the mean values (± SEM) of the groups for plasma TSH(log10) and prolactin(log2) concentrations respectively. Panels B and D: within-group comparisons for each group across the five periods for plasma TSH(log10) and prolactin(log2) concentrations respectively. +, p < 0.1; *, p < 0.05; **, p < 0.01

Interactions with rumen temperature and DMI during PreChallenge, Challenge and Recovery

Investigation of the relationships between the daily mean concentrations of the plasma hormones and the daily means of rumen temperature and DMI was inspired by the linear and elliptical responses discovered between rumen temperature and physiological and performance measures under moderate heat load and during recovery. During the 18 days in the CCR, the daily mean rumen temperature range for the TC group was 39.57-40.64 °C, whilst that of the FRTN group extended down to 39.22 °C. Thus, the relationships described below can be ascribed to these ranges only.

Pituitary hormones-TSH and prolactin

A negative linear relationship between daily mean rumen temperature and daily mean TSH(log10) concentrations was evident when combining the values from both treatment groups (r = − 0.727, p = 0.0014; Fig. 5A). The highest daily mean rumen temperature (40.64 °C) was associated with a mean daily TSH concentration of 3.38 ng TSH/mL. The lowest daily mean rumen temperature (39.22 °C) occurred with a mean daily TSH concentration of 4.67 ng TSH/mL. Thus, the 1.42 °C increment in rumen temperature related to a 1.29 ng/mL fall in plasma TSH concentration, representing a fall of ~ 28% in mean daily TSH concentration between the two extremes of rumen temperature. In contrast, the daily mean prolactin(log2) concentrations did not display any relationships across the whole range of daily mean rumen temperatures obtained by the two groups (p = 0.929; data not shown). Neither TSH nor prolactin demonstrated a significant relationship with DMI (data not shown).

Metabolic hormones

The mean daily concentrations of plasma insulin(log2) and the insulin(log2):glucose ratio did not display any relationships (linear or otherwise) with daily mean rumen temperature or DMI (data not shown). The mean daily concentrations of adiponectin and T4 yielded negative linear relationships with rumen temperature across both treatment groups (Fig. 5B, C). In the case of T4, the relationship showed a moderate negative Pearson correlation (r = − 0.733, p = 0.0022), whilst that of adiponectin possessed a strong negative correlation (r = − 0.985, p < 0.0001). Theoretically, the falls in T4 and adiponectin concentrations were at rates of 10 nM and 14 µg/mL per degree (°C) increase in daily mean rumen temperature respectively. The percent reduction in daily mean T4 concentration over the rumen temperature range was 16.4%, whilst the percent fall in daily mean adiponectin concentration was 25.0%.
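The simple per-hormone regressions described above are straightforward to reproduce with standard statistical tooling. The sketch below is illustrative only: the analysis in the paper was run in Prism 9.0, the intermediate numerical values here are hypothetical stand-ins for the daily means, and only the two extreme TSH/temperature pairs are taken from the text.

```python
# Illustrative sketch of the daily-mean regression (not the authors' Prism workflow).
import numpy as np
from scipy import stats

# Hypothetical daily means over the CCR phase: only the first and last pairs
# (39.22 C / 4.67 ng/mL and 40.64 C / 3.38 ng/mL) are quoted in the text.
rumen_temp = np.array([39.22, 39.45, 39.70, 39.95, 40.20, 40.64])   # deg C
tsh = np.array([4.67, 4.52, 4.31, 4.02, 3.71, 3.38])                # ng/mL

# Regression of log10(TSH) on daily mean rumen temperature, as in Fig. 5A.
res = stats.linregress(rumen_temp, np.log10(tsh))
print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.4f}, "
      f"slope = {res.slope:.3f} log10(ng/mL) per deg C")

# Interpretation convention used in the paper: p < 0.05 significant,
# 0.05 <= p < 0.08 a trend towards significance.
```

The same call, applied to daily mean adiponectin or T4 against temperature or DMI, reproduces the type of Pearson correlations reported for Fig. 5B-F.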
Due to the linear relationship between rumen temperature and water consumption in this experiment, both daily mean T4 and adiponectin concentrations displayed moderate to strong negative relationships with mean daily water consumption (Suppl. Fig. 1A and B). Daily mean leptin concentration showed a trend towards a moderately correlated positive linear relationship with rumen temperature (p = 0.106, r = 0.613) for the FRTN group only (Fig. 5E). When assessing the interaction of metabolic hormones with DMI, relationships were discovered for adiponectin and leptin (Fig. 5D, F). No relationship was apparent for T4 concentrations (data not shown). The TC group demonstrated a positive linear relationship between mean daily adiponectin concentration and DMI (r = 0.876, p = 0.0043); for every 1 kg/day/head reduction in DMI, the daily mean adiponectin concentration fell by 2.9 µg/mL. In the case of leptin, the FRTN group exhibited a strong positive relationship with DMI (r = 0.926, p = 0.0010); the rate of reduction in leptin concentration was 0.22 ng/mL for each 1 kg/day/head decrease in DMI. In contrast, the TC group showed only a moderate correlation (r = 0.665, p = 0.077). The rate of decrease in leptin concentration was fourfold less than that of the FRTN group, at 0.055 ng/mL per kg/day/head decrement in DMI.

Fig. 4 Plasma leptin, adiponectin and thyroxine (T4) concentrations over the five periods. The PreChallenge (PreCh) mean is denoted by X. Panels A, C and E: comparison of the mean values (± SEM) of the groups for plasma leptin, adiponectin and T4 concentrations respectively. Significant differences are indicated by the asterisks above the period means. Panels B, D and F: within-group comparisons of the plasma leptin, adiponectin and T4 concentrations respectively for each group across the five periods (PreChallenge, Challenge, Recovery, PENS and Late PENS; refer to the x-axis of Fig. 2D). The asterisks under the x-axis indicate a statistically significant difference from the PreChallenge mean. +, p < 0.1; *, p < 0.05; **, p < 0.01

Discussion

Uninterrupted feed intake and growth are critical to feedlot profitability and animal welfare. Increased heat loads are well known to reduce feed intake and weight gain (Hahn 1999; Brown-Brandl et al. 2003; Beatty et al. 2006; Sullivan et al. 2011). However, the behaviours of the hormones that influence these responses during increased heat loads are understudied, especially in the feedlot animal. Moreover, most studies focus on one or two such hormones. In this study, the plasma concentrations of a suite of hormones modulating energy metabolism within tissues and at the whole-animal level, as well as appetence, were followed through two differing short-term perturbations, and then through recovery and feedlot finishing in outdoor pens. The thermally challenged (TC) group during moderate heat load employed a self-directed reduction in feed intake. Likewise, in Recovery, the TC group self-directed its re-alimentation.

Pituitary hormones

The plasma concentrations of the two pituitary hormones, TSH and prolactin, were relatively stable across the periods regardless of treatment. Previous studies reported that circulating TSH levels were remarkably unresponsive to fasting, starvation and protein malnutrition in humans (Palmblad et al. 1977; Merimee and Fineberg 1976) and rats (Rondeel et al. 1992; van der Wal et al. 1998). The TSH concentrations we observed in FRTN animals agree with these reports.
No change in TSH levels was observed with increased heat load in late gestation or early lactation dairy cows (Weitzel et al. 2017), or calves, heifers and bulls (Schams et al. 1980). At the aggregate level of periods, our data concur with these findings. Only Kahl et al. (2015) observed a 40% reduction in plasma TSH in steers after 9 days of increased heat load relative to FRTN steers. However, in the current experiment and at the level of daily means of rumen temperature and plasma TSH(log10) concentrations across all animals, a moderately correlated and negative linear relationship between the two variables was discerned. Overall, there was a substantial reduction (~ 28%) in plasma TSH concentration across the range of daily mean rumen temperatures experienced by these steers. Prolactin has been associated with increased heat load in ruminants (reviewed by Alamer 2011) and in humans (Robins et al. 1987;Burk et al. 2012;Wright et al. 2012). Good correspondence between plasma prolactin concentrations and core temperatures (or ambient temperatures) has been shown in studies of ruminants undergoing thermal challenge (Wettemann and Tucker 1976;Sergent et al. 1985;Smith et al. 1977;Schillo et al. 1978;Scharf et al. 2010;Ronchi et al. 2001;). Moreover, some studies have indicated a role for prolactin in regulating water consumption in heat stressed ruminants (reviewed Alamer 2011). Contrarily, fasting or underfeeding can depress plasma prolactin concentrations in steers, humans and rats (Becker et al. 1985;Tegelman et al. 1986;Bergendahl et al. 1989). Ronchi et al. (2001) could not report such a response in feed restricted heifers. Clearly, the perturbations of moderate heat load and/or reduced feed intake imposed on the beef steers in the current study were not sufficient to induce an overt response in secretion of prolactin. Furthermore, no relationships were discovered for daily mean prolactin concentration with the daily means of rumen temperature, DMI and water consumption. Metabolic hormones Thyroid hormones, T4 and T3, have strong influences on metabolic rate and thus endogenous heat production. Fiveday fasted sheep and cattle experienced reduced plasma T3 and T4 concentrations (Blum et al. 1980;Blum and Kunz 1981). Unusually, the FRTN group during Challenge (reduced feed intake) and Recovery (realimentation) experienced a rise in plasma T4, which then fell to PreChallenge levels in PENS. The increased plasma T4 might possibly be a manifestation of a confinement stress, i.e. being held in climate rooms for a prolonged period combined with reduced feed on offer. This argument is supported by a small transient rise in rumen temperature detected over days 3-6 in the FRTN group, indicative of a stress induced hyperthermia . Confinement of lambs and calves induced sharp rises in plasma thyroid hormone concentrations (Friend et al. 1985;Bowers et al. 1993) as does sudden change in animal handling practices (Pierzchala et al. 1983;Falconer 1976). According to Chatzitomaris et al. (2017) this is a classic type 2 allostatic load response by healthy animals anticipating increased energy expenditure. So, despite an average 25% decrease in DMI imposed on the FRTN group, the allostatic load response dominated. When returned to outdoor pens, with the confinement/handling stress removed, plasma T4 levels fell to PreChallenge levels. The TC group exhibited an apparent absence of a T4 response during Challenge. 
There was no equivalent rise in plasma T4, and the rise in Recovery was muted relative to the FRTN group. Typically, increased heat load depresses thyroid hormone secretion (Johnson and Ragsdale 1960;Valtorta et al. 1982;Magdub et al. 1982;Baccari et al. 1983;O'Kelly 1986;Pereira et al. 2008;Kahl et al. 2015;Weitzel et al. 2017). When comparison was made with pair-fed thermoneutral (PFTN) animals to their heat stressed counterparts, the reduction in plasma thyroid concentrations was greater in the latter treatment group (Valtorta et al. 1982;Kahl et al. 2015;Weitzel et al. 2017). The lack of plasma T4 response in the TC group during heat load in our study might be interpreted as a consequence of a depression in thyroid output due to increased heat load counteracted by an elevation induced by the type 2 allostatic response. Once the heat load was removed, the T4 trajectory of the TC group paralleled that of the FRTN group. The interaction of plasma T4 levels and core temperature was corroborated by a linear relationship with daily mean rumen temperature across the rumen temperature range experienced by the steers. In Late PENS, plasma T4 levels rose again in both treatment groups. Increased thyroid activity and T4 levels above basal levels are frequently observed during late compensatory gain (Blum et al. 1980;Cabaraux et al. 2003;Valtorta et al. 1982;Baccari et al. 1983). This seems to be the case for both treatment groups in this study. The insulin response of the FRTN group also suggests a confinement and/or handling stress. Plasma insulin during Challenge and Recovery showed a small but significant increase, along with an increased insulin:glucose ratio during Challenge. A reduction in plasma insulin is anticipated as a consequence of reduced feed intake. Short-term fasted steers and lambs experience substantial reductions in basal insulin levels which recover quickly on refeeding (Blum and Kunz 1981;Rule et al. 1985;Cole et al. 1988). PFTN lactating cows when underfed produced either no change or a fall in plasma insulin (Baumgard et al. 2011;Wheelock et al. 2010). However, acute stress induces elevated plasma insulin levels in rodent models (Rostamkhani et al. 2012;Depke et al. 2008;Ricart-Jané et al. 2002). The TC group experienced no change in plasma insulin during Challenge and Recovery. So, if the rise in plasma insulin in the FRTN group reflected a confinement stress and reduced feed on offer, then it is clear that the TC animals did not perceive or respond to that stress via insulin secretion. Insulin responses to heat stress in ruminants are inconsistent revealing the many factors involved in regulating its secretion. For the heat stressed lactating dairy cow all manner of responses to various thermal challenges have been reported (Hall et al. 2018;Garner et al. 2017;Itoh et al. 1998a;Wheelock et al. 2010;Baumgard et al. 2011;Koch et al. 2016). Reduced plasma insulin as a consequence of heat stress has been observed in the heat stressed nonlactating cow (Itoh et al. 1998b;Koch et al. 2016), but no response was recorded for heifers (Itoh et al. 1998c). Thermally challenged beef calves tended to higher plasma insulin relative to PFTN counterparts but then there was an increase in plasma insulin in both groups (O'Brien et al. 2010). In our study, as DMI and live weight rose in PENS and Late PENS, plasma insulin levels for both treatment groups increased in a comparable manner. 
The upward trajectories of the plasma insulin concentrations and the (log 2 )insulin: glucose ratios over these periods are likely to indicate insulin resistance not uncommon in finishing feedlot cattle (Kneeskern et al. 2016;reviewed DiGiacomo et al. 2014). Leptin is an anorexigenic protein hormone mostly secreted from white adipose tissue into circulation, and as such plays a role in signalling to the hypothalamus of the availability of energy reserves stored as fat. It is therefore implicated in the regulation of feed intake, energy metabolism and readiness for reproduction (reviewed Chilliard et al. 2005;D'souza et al. 2017). In both groups, plasma leptin concentrations were reduced during Challenge, Recovery and PENS, but the TC group experienced a weaker response. The reduced plasma leptin in the FRTN group during Challenge (i.e. feed restriction) and Recovery (realimentation) is typical of feed restricted or underfed ruminants (Chilliard et al. 2005). The reduction in plasma leptin by the TC group during Challenge and Recovery was only half that of the FRTN mean. Mean DMI for the TC group was 2 kg/head/day less than the FRTN group over those same two periods (p < 0.0001). Despite the lower feed intake, the TC group exhibited higher plasma leptin concentrations which may contribute to the low appetence during increased thermal load. A rise in plasma leptin is more frequently reported in heat stressed ruminants (Scharf et al. 2010;Al-Dawood 2017;Bernabucci et al. 2011) although Garner et al. (2017) found no change to plasma leptin levels in short-term heat stressed lactating cows. In the current study, the relationship between plasma leptin and rumen temperature suggested a distinct threshold at approximately 39.7 °C, after which leptin concentration was stable; however, the threshold was only evident due to the fall in daily rumen temperature in the FRTN group as DMI fell . Plasma leptin concentration has been shown to have a strong positive relationship with DMI in the context of the lot fed beef steer (Foote et al. 2016). We found that this relationship was altered by heat load; the reduction of DMI by the TC group was associated with a corresponding rate of fall in plasma leptin concentration that was fourfold less than that of the FRTN group. The restrained lowering of plasma leptin in the TC group would have influenced appetence. In an animal trying to reduce endogenous heat production, lessening the rumen fermentative and metabolic heat loads by downregulation of appetite, and thus feed intake, is entirely appropriate. On return to outdoor pens (PENS), leptin levels were further reduced in both treatment groups even though DMI was close to the PreChallenge mean. Chelikani et al. (2004) observed similar behaviour on refeeding 2 and 3-day fasted heifers and non-lactating cows. This scenario curtails the anorexigenic influence of leptin in the early phase of reestablishing feed intake and initiating compensatory gain. With much increased DMI and weight gain in Late PENS, plasma leptin returned to the PreChallenge mean in both groups. Adiponectin best discriminated between the two groups. Like leptin, adiponectin is expressed mostly by adipose tissue, but its secretion decreases with increased adipose mass (reviewed Lee and Shao 2014;Khoramipour et al. 2021). 
Most reports on circulating adiponectin in healthy humans and rodent models indicate no response to fasting or feed restriction; the consensus being that circulating adiponectin levels do not reflect acute reductions in feed intake (Imbeault 2007). In the lactating Holstein, Singh et al. (2014) observed no change in plasma adiponectin concentrations during feed restriction and realimentation. Not surprisingly then, the FRTN group showed no change in plasma adiponectin concentration during feed restriction and, whilst there was a substantial increase during realimentation (Recovery), no relationship of plasma adiponectin concentration with DMI was discerned. Reports of adiponectin responses during increased heat load in ruminants are limited. A 5-day moderate heat stress mouse model revealed the hyperthermic group exhibited increased circulating adiponectin and adipose expression relative to the thermoneutral feed restricted group (Morera et al. 2012). Contrary to this model, the TC steers in this study showed reduced plasma adiponectin, which returned to normal with cooling and resumption of feeding. Furthermore, across both treatment groups, a strong negative linear relationship between plasma adiponectin with rumen temperature was evident. The only other instance associating adiponectin with core temperature was a study conducted by Wei et al. (2017) whereby adiponectin-null mice were unable to maintain core temperature during short-term cold stress. Moreover, unlike the FRTN group, the TC group displayed a strong and positive linear relationship with DMI. Thus, plasma adiponectin was at its lowest concentrations when rumen temperatures were highest, and DMI at its lowest suggesting other inputs to the regulation of adiponectin secretion than white adipose mass alone. However, in PENS and Late PENS, as live weight and presumably adipose mass increased in both groups, plasma adiponectin concentrations fell and stabilised. Synthesis Having addressed the responses of each hormone independently, this section attempts to integrate the findings to understand the metabolic hormonal milieu of the treatment groups during the Challenge and Recovery periods, and feedlot finishing. Furthermore, the metabolic consequences for these steers are discussed also. Challenge The nature of the two perturbations imposed in this study, one of reduced feed intake and realimentation, and other of moderate heat load and recovery, were not sufficient to significantly alter plasma TSH and prolactin concentrations. However, a negative linear relationship between plasma TSH and rumen temperature suggests that higher heat load and higher core temperatures than those imposed and observed in this experiment could significantly diminish TSH secretion by the pituitary. The downstream endocrine organs, the thyroid, pancreas and adipose depots, did respond to the perturbations. The two groups were differentiated by insulin, T4, leptin and adiponectin levels, during Challenge and to some extent in Recovery. Moreover, the earlier report on the metabolic responses of these same animals revealed that during Challenge the two treatment groups were differentiated by various energy metabolites, namely plasma concentrations of glutamine, triglycerides (TG) and cholesterol. These were all increased in the FRTN group ). On the other hand, NEFA and glucose concentrations were not altered in either group during Challenge. In fact, both treatment groups were euglycaemic through all periods with minor fluctuations. 
In Challenge, both plasma T4 and insulin unexpectedly rose in the FRTN group. In an underfed state, these hormones typically fall in plasma. In a stress state, these hormones are known to increase as part of an allostatic response. The atypical increases in levels of insulin and T4 for the FRTN group may be a response to confinement and/ or unpredictable chronic handling stress. Meanwhile, both plasma leptin and adiponectin reacted to underfeeding and realimentation as anticipated from numerous other studies. Thus, the metabolic hormone milieu that arose in the FRTN group during Challenge was one of reduced leptin levels, increased insulin and T4 concentrations and unchanged plasma adiponectin. Under conditions of elevated plasma insulin and euglycaemia, as in the FRTN group, insulin promotes release of glutamine into circulation (Meyer et al. 2004), thus contributing to the rise in plasma glutamine. Moreover, elevated levels of circulating thyroid hormones are known to increase the release of glutamine from some skeletal muscles (Parry-Billings et al. 1990). The combined actions of insulin and T4 on skeletal muscle and liver would promote uptake of glucose, amino acids and fatty acids, in turn drive gluconeogenesis, glycogen synthesis and protein synthesis. The higher level of insulin would have enhanced lipogenesis in the adipose tissues and liver. Insulin also promotes cholesterol synthesis by enhancing dephosphorylation of HMG-CoA reductase, activating the enzyme consequently favouring cholesterol synthesis. This would have enabled the rise of plasma cholesterol in the FRTN animals. The higher T4 levels encourages cholesterol synthesis also. Meanwhile, the combination of increased plasma T4 and reduced plasma leptin may have augmented insulin sensitivity and possibly insulin secretion. The fall in plasma leptin enhanced appetite, and diminished its inhibitory action on hepatic glucose production, and reduced lipolysis. The lowering leptin levels promotes many of insulin's effects on hepatic lipid synthesis and metabolism (reviewed Lago et al. 2009). Overall, the FRTN group during the Challenge and in Recovery favoured uptake of glucose, fatty acids and probably amino acids, to maintain an anabolic state in laying down glycogen, lipid and protein. The TC group, under moderate heat load, saw unchanged plasma T4 and insulin levels, and these were lower than their FRTN counterparts. The TC steers apparently did not or could not respond to the putative confinement and/or unpredictable chronic handling stress in a comparable manner. An interpretation is that in this case the heat stress metabolic hormone response dominated the allostatic response that occurred in the FRTN group. The most distinctive change in the TC group during Challenge was the fall in adiponectin levels. Consequences of reduced circulating adiponectin are diminution of signalling that promotes skeletal muscle fatty acid oxidation and glucose uptake, and adipose release of fatty acids. However, it lessens the inhibitory influence on hepatic gluconeogenesis, glycogenesis and TG synthesis (reviewed Khoramipour et al. 2021). The lower plasma adiponectin may contribute also to reduced insulin sensitivity in muscle and liver. The plasma leptin level was also reduced but to a lesser extent than the FRTN group. This is likely to have contributed to inappetence in the heat stressed steer, and suppressed glucose uptake by muscle and fat. 
With no change to plasma insulin and T4 during Challenge, the TC group may have tended towards conservation of energy, reduced energy metabolism and limited anabolic capacity in what should be a rapidly growing animal.

Recovery

Insulin and leptin concentrations remained unchanged in the FRTN group compared to Challenge, but the steers experienced a rise in plasma adiponectin and a further rise in plasma T4. This milieu is likely to reinforce the anabolic activity in the FRTN group during Recovery. The TC group during Recovery showed remarkably similar trajectories for these same hormones but with reduced concentrations in each case compared to the FRTN group. The TC group appeared to be transitioning/returning to a more anabolic state during this period, with the higher levels of T4 and adiponectin promoting insulin sensitivity and uptake of energy metabolites.

Feedlot finishing

PENS and Late PENS: Once the steers had returned to the outdoor feedlot environment and resumed weight gain, the hormone concentrations of the two groups converged. There was no difference in the circulating levels of the two groups. However, with increased age, weight and feed intake, the metabolic hormone profiles did not revert to that of the PreChallenge state. Leptin levels initially fell in PENS, encouraging feed intake, and then recovered in Late PENS when DMI and live weight were highest. T4 levels also fell in PENS, then recovered to be slightly higher than the PreChallenge levels. Adiponectin levels fell and stabilised, consistent with a gradual increase in fat deposition. The increasing plasma insulin and log2(insulin):glucose ratio, higher leptin and low adiponectin are suggestive of insulin resistance in the feedlot finishing phase.

Conclusion

The limitations of this study are acknowledged. Ideally, a thermoneutral ad lib fed control group would have been run in parallel with the TC and FRTN groups. The ad lib fed group would have enabled a full assessment of the impact of reduced feed intake on the plasma concentrations of these hormones in rapidly growing grain-fed steers. The substantial body of literature was used to fill this gap. Finally, there is the potential confounding factor of a confinement/handling stress, which may have been reduced with a longer period of familiarisation in the CCR before commencement of the experiment (although such familiarisation is limited by animal ethics concerns). These steers were relatively young, having never experienced indoor facilities or underfeeding at any stage; the continuous 18-day confinement and the anxiety over insufficient feed may have induced a stress state. The transient mild hyperthermic response in the FRTN group observed by Sullivan et al. (2022) is further evidence of a stress response in this group. Overall, this study depicted the homeorhetic responses during Challenge (be it feed restriction or moderate heat load), Recovery and, finally, finishing in the feedlot environment. Homeorhesis can be thought of as a trajectory to a different but appropriate state to enable adjustment to a new environment. Homeorhetic responses are reversible if the environment reverts to its previous state (Collier et al. 2019). The moderate heat load, typical of a Queensland summer, imposed on these animals was well within the homeorhetic range of these feedlot cattle. We compared the responses over time to two perturbations to physiology and endocrine status that invoked differing homeorhetic mechanisms or differing extents of the same homeorhetic mechanisms.
We were able to differentiate the trajectories of the metabolic hormone responses to the two different perturbations during Challenge and to some extent in Recovery. Clearly recovery is not immediate. However, there may have been an overlay of an allostatic response to confinement/handling stress detected in the FRTN group. Interestingly, the TC group appeared to have limited capacity to respond to this potential stressor. The implication is that behavioural responses that provoke increased metabolic activity and consequent endogenous heat production are curtailed even under moderate heat load. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
ACC-1 β-Lactamase–producing Salmonella enterica Serovar Typhi, India

To the Editor: Typhoid fever, caused by Salmonella enterica serovar Typhi, is a serious form of enteric fever. In 2000, the worldwide number of typhoid cases was estimated to be >21,000,000, and there were >200,000 deaths from this disease (1). To date, there are no reports of AmpC β-lactamases in typhoidal salmonellae. AmpC β-lactamases confer resistance to a broad spectrum of β-lactams, which greatly limits therapeutic options. We investigated an isolate of S. Typhi by using serotyping, antimicrobial drug susceptibility testing, PCR screening for β-lactamase genes, and sequence analysis to confirm the identity of the isolate and the β-lactamase gene involved in conferring resistance to this isolate. The isolate was obtained in Bangalore, India, in August 2009, from the blood of a female patient (14 years of age) who was hospitalized because of signs and symptoms of enteric fever. She had no history of having received antimicrobial drugs. After a blood sample was cultured, the patient was empirically treated with ceftriaxone but did not clinically improve. Culture yielded gram-negative bacteria after 48 hours. The isolate was identified by standard biochemical methods as S. Typhi. Identification was confirmed by using Salmonella spp. polyvalent O, O9, and H:d antisera (Murex Biotech, Dartford, UK). Susceptibility to antimicrobial drugs was assessed by using the Kirby-Bauer disk diffusion method according to Clinical and Laboratory Standards Institute guidelines (www.clsi.org). The isolate was resistant to ampicillin, piperacillin, cefoxitin, cefotaxime, ceftazidime, ceftriaxone, aztreonam, amoxicillin/clavulanate, and cefepime. It was susceptible to chloramphenicol, trimethoprim/sulfamethoxazole, nalidixic acid, ciprofloxacin, and meropenem. Treatment was changed to ciprofloxacin (500 mg every 12 h for 7 d). PCR screening and sequencing was performed to identify β-lactamase resistance genes bla TEM, bla SHV, bla OXA-1 group, bla CTX-M, and AmpC as described (6,7). Sequencing of β-lactamase gene amplicons was conducted at the Vector Control Research Centre in Pondicherry, India. The BLASTN program (www.ncbi.nlm.nih.gov/BLAST) was used for database searching. We also used a nested PCR specific for the flagellin gene of S. Typhi to confirm identity of the isolate (8). The nested PCR amplicon was sequenced to confirm identity of the flagellin (fliC) gene of S. Typhi. Sequencing of the flagellin gene product was conducted by Cistron Bioscience (Chennai, India). The isolate was negative for ESBL production. PCR amplification and sequencing showed that the isolate harbored bla TEM-1 and bla ACC-1. The isolate was negative by PCR for other β-lactamases tested. TEM-1 is one of the most commonly encountered β-lactamases in the family Enterobacteriaceae and can hydrolyze narrow-spectrum penicillins and cephalosporins. We report ACC-1 AmpC β-lactamase in typhoidal salmonellae. S. Typhi could have acquired the AmpC β-lactamase from drug-resistant bowel flora. After the isolate was found to be highly resistant to ceftriaxone, the change in therapy to ciprofloxacin helped in recovery of the patient without any sequelae. ACC-1 AmpC β-lactamases originated in Hafnia alvei and are now found in various members of the family Enterobacteriaceae (9). The ACC-1 AmpC β-lactamases are exceptional in that they do not confer resistance to cephamycins (10).
Our isolate contained bla TEM-1 and bla ACC-1 and was resistant to cefoxitin and cefepime but susceptible to meropenem. Bidet et al. (9) reported isolating Klebsiella pneumoniae that was resistant to cefoxitin and cefepime and had intermediate resistance to imipenem. This atypical resistance was attributed to ACC-1 β-lactamase production and loss of a 36-kDa major outer membrane protein (9). We did not analyze changes in the outer membrane proteins responsible for alteration of permeability. Continual monitoring of drug resistance patterns is imperative. Antimicrobial drug susceptibility testing should be conducted for clinical isolates, and empirical antimicrobial drug therapy should be changed accordingly. AmpC β-lactamase genes will eventually be transferred to typhoidal salmonellae, which may pose a threat to public health. Spread of broad-spectrum β-lactamases would greatly limit therapeutic options and leave only carbapenems and tigecycline as secondary antimicrobial drugs.
Optical signal-based improvement of individual nanoparticle tracking analysis

Nanoparticle Tracking Analysis (NTA) provides a simple method to determine individual nanoparticle size. However, because size quantification is based on the slowly converging statistical law of random events, its intrinsic error is large, especially in the case of a limited number of events, e.g. for weakly scattering nanoparticles. Here, we introduce an NTA improvement by analyzing each individual NP trajectory while taking into account the other trajectories with a weighting coefficient. This weighting coefficient is directly derived from the optical signature of each particle measured by quantitative phase microscopy. The simulations and experimental results demonstrate the improvement of NTA accuracy, not only for mono-disperse but also for poly-disperse particle solutions.

Introduction

The Brownian motion of a nanoparticle (NP) suspended in solution obeys a statistical law in which the mean square displacement (MSD) depends on the NP size [1]. Nanoparticle Tracking Analysis (NTA) is therefore a common approach for size quantification, not only for individual inert NPs but also for nano-sized biological objects [2,3,4]. For an NP diffusing in a solution of dynamic viscosity η, its diameter d can be derived from its 3D MSD through the Stokes-Einstein relation, MSD(Δt) = (1/(N−1)) Σ_{i=1}^{N−1} |r_{i+1} − r_i|² = 6DΔt with D = k_B T/(3πηd), so that d = 2k_B T Δt/(πη·MSD(Δt)), where N is the number of tracked points, r_i the particle position at instant i, k_B the Boltzmann constant, T the absolute temperature and Δt the time lag (duration between two consecutive points). Theoretically, the MSD is proportional to the time lag Δt. Currently, the most accurate method for size estimation relies on a linear fit of the first few points of the MSD curve as a function of time lag [5,6]. However, this NTA approach possesses a large intrinsic error margin, which depends primarily on the length of the tracking trajectory (convergence in 1/√N) and on the localization error [5,6,7]. In the case of small nanoparticles, their weak signals lead to an important size quantification inaccuracy, due to both low localization precision and limited tracking length. As an example, the relative size error (considering only the limited track length) is estimated at about 70% if the nanoparticle can be followed over 10 consecutive frames and falls to 20% for 100 frames [6]. In attempts to improve NTA accuracy, various methods have been proposed, either increasing the tracking length (e.g., using a holographic approach in which a nanoparticle can be tracked although positioned far from the imaging plane [8,9,10]), or using the localization error as a weighting factor for size estimation [6]. Besides, if particle population statistics are sufficient, multiple post-processing algorithms have been proposed in order to estimate the mean size of the solution from the size distribution, using a covariance-based estimator [11,12,13] or a maximum likelihood approach [14,15,16,17]. In this paper, we introduce our individual NTA improvement approach, based on exploiting the optical signature of each particle and all the trajectories in an acquisition. Instead of a simple linear fit of all MSD curves (which allows the average size of all the particles, but not individual particle sizes, to be derived [6]), we suggest using a weighted linear fit, in which each single particle's MSD curve does not have the same impact on the final size determination of the considered particle.
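For reference, the baseline per-trajectory estimate that the weighted fit builds on can be written in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it assumes pure 3D Brownian motion, a known temperature and viscosity, an unweighted fit of the first few MSD points, and no correction for localization error.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def msd_curve(traj, max_lag):
    """MSD of a 3D trajectory (N x 3 array, in metres) for lags 1..max_lag frames."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def diameter_classic_nta(traj, dt, T=293.15, eta=1.0e-3, n_fit=4):
    """Diameter (m) from a linear fit of the first n_fit points of the MSD curve.
    dt: time lag between frames (s); eta: dynamic viscosity of the medium (Pa.s)."""
    lags = np.arange(1, n_fit + 1) * dt
    slope, _ = np.polyfit(lags, msd_curve(traj, n_fit), 1)
    D = slope / 6.0                                # MSD = 6*D*lag for 3D Brownian motion
    return kB * T / (3.0 * np.pi * eta * D)        # Stokes-Einstein relation

# Example: simulated 200-frame trajectory of a 100-nm particle in water, dt = 20 ms.
rng = np.random.default_rng(1)
d_true, dt, T, eta = 100e-9, 0.02, 293.15, 1.0e-3
D_true = kB * T / (3.0 * np.pi * eta * d_true)
traj = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=(200, 3)), axis=0)
print(f"estimated diameter: {diameter_classic_nta(traj, dt) * 1e9:.0f} nm")
```

With only a few hundred frames this single-trajectory estimate remains noisy, which is precisely the limitation that the weighted fit described next addresses.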
The weighting coefficient represents the optical similarity between particles. It should lie between 0 (extreme case where two particles are considered as totally different) and 1 (the case of two identical particles). In order to calculate the similarity between particles, quantitative phase microscopy via wavefront imaging is used to obtain two images (intensity and phase) for each particle. These images characterize, respectively, the absorption/attenuation and the refractive index of the NP, with a signal also linked to the particle volume. They therefore provide a good criterion for the similarity between particles, because they are able to discriminate particles both by their size and by their nature. To jointly take all the optical signatures into account in the calculation, these two images, intensity (I) and phase (φ), are merged into a unique observable, called the Rytov field E_Ry, whose expression involves the illumination wavelength λ and the refractive index n_m of the medium. The Rytov field is a complex quantity: its real part (respectively, imaginary part) represents the refraction (respectively, absorption/attenuation) properties of the particle. Moreover, the Rytov amplitude image, A_Ry = |E_Ry|, exhibits a signal on a zero background for any type of particle, and looks very similar to a fluorescent NP image. This simplifies the tracking algorithm: a universal tracking algorithm can be applied without a priori knowledge of the particle optical response [18]. In addition, the Rytov amplitude reflects the interaction between light and the NP and is proportional to the number of detected photons. For our application, the Rytov field is therefore much more convenient than the classic scalar electromagnetic field. We interpret the similarity between two particles from the difference between their Rytov field images. Because of the imaging and sampling conditions, the Rytov field image of a particle is considered as the sum of the particle's actual signal E^s_Ry and a noisy background of variance ε. The signal-to-noise ratio (SNR) is defined as |E^s_Ry|/√ε and is proportional to the square root of the number of detected photons. The variance of the difference between the Rytov field of the i-th particle (E_Ry,i) and the Rytov field of the j-th particle (E_Ry,j) is the actual signal variance plus twice the acquisition noise, σ²(E_Ry,i − E_Ry,j) = σ²(ΔE^s_Ry) + 2ε (supposing the two particle images have the same noise statistics), where ΔE^s_Ry is the actual signal difference between the two Rytov fields in the absence of noise. The similarity C_{i,j} is defined from these quantities such that, for identical particles, the true signal difference is 0, leading to a similarity of 1; when two particles are different, the similarity C_{i,j} is less than 1. The weighting coefficient is defined as an exponentiation of the similarity C_{i,j} by a positive exponent n_w, i.e. w_{i,j} = (C_{i,j})^{n_w}. Since the base C_{i,j} is smaller than 1, exponentiation by n_w keeps the final value between 0 and 1 (1 when two particles are identical, tending to 0 when the Rytov field difference is large). This exponent n_w is introduced to modulate the averaging process: the arithmetic average is achieved at n_w = 0 (all the trajectories are analyzed equally in order to retrieve the average size of the particles in solution), while n_w → ∞ corresponds to the classic NTA process, in which each particle size is calculated only from its own trajectory.
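One plausible way to implement the weighted fit follows directly from the definitions above: particle i's diameter is obtained from a least-squares fit over the pooled MSD points, with every trajectory j contributing with weight w_{i,j} = C_{i,j}^{n_w}. This is a hedged sketch: the pooling of points and the fit with an intercept are assumptions, the similarity matrix is taken as an input (its exact expression from the Rytov images is not reproduced here), and the function name is illustrative.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def weighted_diameter(msd_curves, lags, similarities, i, n_w, T=293.15, eta=1.0e-3):
    """Diameter (m) of particle i from a weighted linear fit of all MSD curves.
    msd_curves:   (P, L) array, one MSD curve (first L lags, metres^2) per particle.
    lags:         (L,) array of time lags in seconds.
    similarities: (P, P) array of C_ij values, clipped to [0, 1].
    n_w:          weighting exponent (0 -> plain average, large -> classic NTA)."""
    w = np.clip(similarities[i], 0.0, 1.0) ** n_w       # w_ij = C_ij ** n_w
    x = np.tile(lags, len(msd_curves))                  # pooled lag values
    y = msd_curves.ravel()                              # pooled MSD values
    wts = np.repeat(w, len(lags))                       # one weight per pooled point
    # np.polyfit weights the unsquared residuals, hence the square root.
    slope, _ = np.polyfit(x, y, 1, w=np.sqrt(wts))
    D = slope / 6.0                                     # 3D Brownian motion
    return kB * T / (3.0 * np.pi * eta * D)             # Stokes-Einstein relation
```

Setting n_w = 0 reproduces the population-average size, while a large n_w recovers the classic single-trajectory estimate, matching the two limits described above.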
Validation by simulation studies

In order to study the effect of our weighted MSD fit, we carried out simulation studies (LabVIEW, National Instruments). Particles with sizes following a normal distribution with known mean size and standard deviation (S.D.) are considered, and a 200-point Brownian trajectory is generated for each particle. Intensity and phase images of each particle at each position are simulated using the Product-Of-Convolution approach [19,20]. Camera shot noise can be added in order to replicate the images obtained in different experimental conditions. From these 200 noise-realistic images, super-localization is applied on the Rytov amplitude to recenter the images although the particle is moving (carried out with a realistic simulated localization error of 30 nm along the lateral direction and 100 nm along the axial direction). Different averaged sub-images are then produced from the pool of numerically centered images. First, all 200 sub-images are averaged to produce a dynamic averaged Rytov field E_Ry,i. Then, the image dataset is split into 2 × 100 images to produce, via averaging, two images of the same particle, E_Ry,i,1 and E_Ry,i,2, which differ only by their noise. By computing σ²(E_Ry,i,1 − E_Ry,i,2) = 2ε_i, one can extract the actual noise amplitude of the particle image E_Ry,i. Notably, this method can be applied to actual acquisitions, and not only to simulated particles, to measure the intrinsic noise level [18]. The study is first carried out for a 100-nm polystyrene (PS) NP. Figure 2 depicts the variation of the similarity between a simulated 100-nm PS NP and different particles, while varying the NP size or the NP complex refractive index ñ = n + i·k. The similarity reaches 1 when the particle is compared to another particle having the same characteristics, and starts vanishing quickly with the difference in size (and/or refractive index, and/or absorption coefficient). The similarity coefficient can surpass 1 in this simulation for two exactly (or quasi-) similar particles due to the acquisition noise. If the similarity is higher than 1, its value is set to 1. A second study has been performed for a virtual PS batch of 200 NPs whose real sizes are normally distributed with a mean size of 100 nm and an S.D. of 15 nm. Images of each particle are generated without shot noise (SNR estimated at 55, due to the interference fringes). Figure 3a illustrates all 200 MSD curves (gray lines), together with the weighted-fit MSD curve of the first particle (orange line). The gray scale of each MSD curve is color-coded from the similarity between each particle and the first particle (the fainter the line color, the lower the similarity). Then, we vary the exponent n_w to assess the effect of the weighted MSD fit process on size retrieval. Let us consider d_NP as the actual particle size, Δd the difference between the size measured via the weighted linear fit and the actual particle size, and Δd_NTA the difference between the size measured via classic NTA and the actual particle size. The optimal weighting exponent is reached when the relative size difference Δd/d_NP is minimized, which corresponds to a maximization of the gain = Δd_NTA/Δd. Results are presented in Figure 3b (50 repetitions were carried out; all repetitions are shown as gray lines, and their mean value and S.D. as a blue line), and we determined that the best n_w exponent was 1.125. Compared to classic NTA (corresponding to n_w = ∞ in the graph), the relative size difference is reduced from 12% to 3%, illustrating a gain in NP sizing of about 4 (see inset graph).
We have also studied the distribution of the NP sizes determined by the weighted fit at different exponents, compared to the real sizes and to classic NTA, as illustrated in Figure 3c. When the averaging process is stronger (i.e. a smaller n_w exponent), the dispersion of the NP sizes is smaller than the real dispersion and can be considered an artifact. In the extreme case where n_w = 0, all the trajectories are taken into account equally to extract the average NP size. On the contrary, when n_w tends to infinity (in practice n_w > 5), the averaging process vanishes and we obtain the distribution of classic NTA. The histograms of the real sizes, of the sizes measured by classic NTA and of those obtained with the optimal weighted fit (here, n_w = 1.125) for our simulated 100-nm PS solution are illustrated in Fig. 3d. The measured mean size and dispersion (standard deviation) are 99 ± 20.0 nm in the case of classic NTA and 99.4 ± 13.4 nm in the case of our weighted fit, the latter being closer to the true size distribution of 100 ± 15 nm. The weighted fit clearly improves the NP size determination, as its histogram is almost identical to the real histogram. Similar results are obtained for absorbing particles, here 100-nm Au NPs (illustrated in Figure 3e), confirming the interest of the weighted NTA approach. The method is also efficient for poly-disperse solutions. We have considered a mix of 100 PS particles (mean size of 200 nm, dispersion 25 nm) and 100 gold (Au) NPs (mean size of 100 nm, dispersion 20 nm). These particles have been chosen since they have approximately the same SNR in Rytov amplitude images [18]. The histograms of the real sizes and of the classic and weighted NTA results, shown in Figure 3f, also demonstrate a clear improvement of the size quantification. For each population of particles, the size dispersion is clearly reduced, from 39.6 to 24.0 nm for the PS NPs and from 25.7 to 19.5 nm for the Au NPs, while keeping an accurate mean size value (199.3 nm and 99.7 nm for PS and Au NPs respectively). Notably, the optimal exponent is almost independent of the nature of the particle (material, size) thanks to the Rytov field weighting approach, as illustrated in Figures 4a and 4b. As the photon shot noise increases (i.e., the SNR decreases), the similarity between particles becomes less significant because of the image noise. The averaging process is then expected to be less important, leading to a shift of the optimal exponent toward a higher value (Figures 4c and 4d). The optimal exponent n_w is linearly dependent on the reciprocal of the SNR (see inset of Figure 4c) and can thus be determined directly and experimentally from the acquired phase and intensity images. Even at low SNR, the weighted fit remains useful, with a gain > 1 compared to classic NTA. The simulation studies indicate that our approach provides a significant improvement of NP sizing for both mono- and poly-disperse nanoparticle solutions. In the subsequent step, experimental studies are performed in order to validate the method.

Validation by experimental studies

For the experimental studies, intensity and phase images are acquired on a homemade microscope using a quadriwave lateral shearing interferometer [21,22,23]. Since we have access to the full information of the scalar electromagnetic field, a moving particle can be numerically propagated to its focal plane [18]. A time-lapse averaging process is then applied after registering and cropping the image around the refocused particle. Details of the setup and the process are described in our previous work [18].
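The image SNR that fixes the optimal exponent can be estimated directly from the acquired data with the split-average procedure described in the simulation section. The sketch below assumes a stack of recentred, refocused complex Rytov-field sub-images for one particle (acquisition and refocusing are not shown) and uses the peak amplitude of the averaged image as a stand-in for |E^s_Ry|; both choices are assumptions made for illustration.

```python
import numpy as np

def noise_and_snr(subimages):
    """subimages: (F, H, W) complex array of recentred Rytov-field frames of one particle.
    Returns (epsilon, snr), with epsilon the acquisition noise variance."""
    mean_all = subimages.mean(axis=0)                  # dynamic averaged Rytov field
    half = subimages.shape[0] // 2
    e1 = subimages[:half].mean(axis=0)                 # first split average
    e2 = subimages[half:2 * half].mean(axis=0)         # second split average
    epsilon = np.var(e1 - e2) / 2.0                    # sigma^2(E1 - E2) = 2 * epsilon
    snr = np.abs(mean_all).max() / np.sqrt(epsilon)    # SNR = |E_s| / sqrt(epsilon)
    return epsilon, snr
```

The resulting SNR can then be mapped to the exponent n_w through the linear dependence on 1/SNR reported in Figure 4c.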
We determined the SNR at ≈ 10 to 20 for 100-nm PS NPs, and the optimal exponent is estimated as n_w = 1.25. Since the actual size of each moving particle cannot be known exactly (each NP is smaller than the point spread function of the microscope), the comparison between the size histograms obtained from classic tracking and from weighted tracking of multiple NPs is used as evidence of the improvement of NTA. Figure 5a shows the result for a calibrated PS solution (Duke Standards TM 3K-100-001). Its properties are provided by the supplier (mean size of 102 ± 3 nm, S.D. of 7.6 nm, measured by transmission electron microscopy). The mean sizes are almost identical between classic NTA (104.2 nm) and our weighted approach (104.8 nm), and both are in the confidence range provided by the supplier. Our weighted fit of the MSD curves provides a better estimation of the NP size dispersion: the standard deviation is reduced by a factor of 2, from 27.6 nm with classic NTA to 12.5 nm with the weighted approach (optimal exponent n_w = 1.25 determined from the image SNR), closer to the supplier information. The same observation is obtained for the solution of 100-nm Au NPs (Sigma Aldrich), as shown in Figure 5b. The size dispersion of the Au NPs is reduced by a factor of 2, and the mean size agrees with the hydrodynamic size provided by the supplier (118 nm). These results confirm the performance of our method for dielectric and metallic NPs. Our method is then investigated for a poly-disperse solution containing two classes of NPs: 200-nm PS and 100-nm Au NPs (at 55%:45% molecular percentage, estimated from the dilution protocol). The two particles, one dielectric and the other metallic, are chosen so that their Rytov intensities are similar. This means that, on the raw phase and intensity images before numerical refocusing, their signals are almost identical in absolute value. Figure 5c illustrates the size distribution of the mixture. With classic NTA, it is challenging to separate the two classes and to recognise that the sample is a mixture rather than a single solution of objects with a very broad size distribution. The separation of the two classes is improved by the weighted NTA, and we can clearly identify two populations, one centered at 200 nm and the other at 100 nm. Our system is stable enough to carry out an experiment inside a biosafety cabinet of a level-3 confined laboratory (CEMIPAI, University of Montpellier). An example of the characterization of infectious HIV-1 (NL4-3) is illustrated in Figure 5d. The size distribution becomes sharper (2.2× gain factor) when the weighted fit is applied, allowing the infectious HIV-1 size to be determined as 169.9 ± 28.4 nm, comparable to the measurement by cryo-electron microscopy (145 ± 25 nm) [24]. The difference is attributed to the outer shell of glycoprotein gp160 Env, which participates in the hydrodynamic diameter measurement (tracking analysis) but not in the electron microscopy measurement.

Implementation in NP characterization

Analysis of intensity and phase images is not restricted to single-particle size determination. The quantification of the scalar electromagnetic field can be used to characterize NPs by their refraction and absorption properties. For example, the complex refractive index of individual NPs can be studied using quantitative phase imaging [18] or digital holography [25]. Here, we present an application of the improved NTA to complex refractive index quantification by quantitative phase microscopy.
In this approach, the complex refractive index ñ is derived from the integrals ∫_S Re(E_Ry) dS and ∫_S Im(E_Ry) dS of the real and imaginary parts of the Rytov field over the image surface S, scaled by the inverse of the NP volume V = (π/6)·d³ [18], where dS is an infinitesimal surface element of the image S and n_m is the refractive index of the surrounding medium. Therefore, reducing the NP size error increases the measurement accuracy, since the size enters as d³ and thus has a stronger influence than the other variables.

Figure 6: Implementation in the quantification of the refractive index. Comparison of the measurement of the complex refractive index of PS NPs using classic NTA and our improved NTA.

Figure 6 illustrates the results for the complex refractive index (n and k being the real and imaginary parts) of 200-nm PS NPs, in the two cases where the NP size is derived from either classic NTA or our improved NTA. Our approach clearly reduces the measurement dispersion, from 0.35 to 0.16 for the real part. When using the improved NTA, the median of the real part of the refractive index of the 200-nm PS NPs is 1.64, close to the literature value (n = 1.61 in [26]). The imaginary part is close to 0, confirming that PS is a transparent particle. The result of the refractive index quantification confirms the benefit of our improved weighted NTA for NP characterization.

Conclusion

In this work, we have presented a method to improve size evaluation from the Brownian motion of single sub-resolved nanoparticles. Our approach is based on a weighted linear fit of the MSD curves of all the tracked particles. Each weighting coefficient is automatically adjusted from the particle phase and intensity images (image SNR). In our experimental conditions, we have demonstrated that the dispersion of the NP size can be reduced by at least a factor of 2 compared to classic NTA. These results are illustrated for both mono- and poly-disperse solutions. The method can also be implemented in other NP characterization methods, for example in complex refractive index quantification, as demonstrated in this paper. Because the derivation of the refractive index involves the NP size, the reduction of the NP size error can enhance the measurement accuracy. Our approach relies on the acquisition of intensity and phase images. We believe that the method can be generalized to other imaging methods, such as fluorescence microscopy or dark-field microscopy. However, if only one observable is acquired (e.g. the intensity of scattered light or the intensity of a fluorescent emitter), confusion may occur in the case of a poly-disperse solution, where two particles of very different sizes may nevertheless interact with light in the same way. Therefore, the definition of the similarity must be modified to cover this case.

Data availability statement

The data cannot be made publicly available upon publication because they are not available in a format that is sufficiently accessible or reusable by other researchers. The data that support the findings of this study are available upon reasonable request from the authors.
Processes Underlying the Nutritional Programming of Embryonic Development by Iron Deficiency in the Rat Poor iron status is a global health issue, affecting two thirds of the world population to some degree. It is a particular problem among pregnant women, in both developed and developing countries. Feeding pregnant rats a diet deficient in iron is associated with both hypertension and reduced nephron endowment in adult male offspring. However, the mechanistic pathway leading from iron deficiency to fetal kidney development remains elusive. This study aimed to establish the underlying processes associated with iron deficiency by assessing gene and protein expression changes in the rat embryo, focussing on the responses occurring at the time of the nutritional insult. Analysis of microarray data showed that iron deficiency in utero resulted in the significant up-regulation of 979 genes and down-regulation of 1545 genes in male rat embryos (d13). Affected processes associated with these genes included the initiation of mitosis, BAD-mediated apoptosis, the assembly of RNA polymerase II preinitiation complexes and WNT signalling. Proteomic analyses highlighted 7 proteins demonstrating significant up-regulation with iron deficiency and the down-regulation of 11 proteins. The main functions of these key proteins included cell proliferation, protein transport and folding, cytoskeletal remodelling and the proteasome complex. In line with our recent work, which identified the perturbation of the proteasome complex as a generalised response to in utero malnutrition, we propose that iron deficiency alone leads to a more specific failure in correct protein folding and transport. Such an imbalance in this delicate quality-control system can lead to cellular dysfunction and apoptosis. Therefore these findings offer an insight into the underlying mechanisms associated with the development of the embryo during conditions of poor iron status, and its health in adult life. Introduction It is now well-established that early life exposure to an adverse nutritional environment can programme aspects of adult anatomy, physiology and metabolism and thereby determine risk of cardiovascular and metabolic disorders [1][2][3]. Evidence from retrospective cohort studies shows that anthropometric markers of sub-optimal maternal nutritional status during fetal development are predictive of cardiovascular disease, type-2 diabetes and chronic kidney disease [4,5]. Experiments in animals that have manipulated either overall food supply or dietary composition such that one or more nutrients is limiting, strongly support the hypothesis that during nutritional stress, the fetus mounts adaptive responses in order to preserve growth or survival [6][7][8][9][10][11]. In the long-term the modifications to organ structure, hormone responsiveness or gene expression, which characterise the adaptation to stress, predispose to disease in later life. On a global scale, iron deficiency is the most prevalent manifestation of malnutrition. Poor maternal iron status is a recognised risk factor for preterm delivery, low birthweight and neonatal death, particularly in developing countries [12,13]. Iron deficiency anaemia is highly prevalent in many populations and in some parts of the world more than half of pregnant women will be affected. 
A number of rodent studies indicate that the imposition of iron deficiency prior to and during pregnancy has long-term effects upon the resulting offspring, with reports of high blood pressure [11,[14][15][16], impaired nephrogenesis [17] and altered glucose handling [16]. The mechanistic basis of this nutritional programming of disease has not been fully explored. Much of the research focus has been upon postnatal events and the progression from what is usually a low birthweight phenotype to adult pathology. One of the outcomes of such work has been the observation that undernutrition frequently results in a remodelling of tissue structure [18][19][20][21]. For example, in the kidney, diverse nutritional insults including maternal protein restriction, food restriction and iron deficiency lead to a lower nephron number in the offspring [22][23][24]. This, arguably, may be a driver of the renal and cardiovascular outcomes which are also associated with nutritional insult. Such studies do not, however, capture the primary response to undernutrition or explain how tissue morphology is programmed by the maternal diet. Studies of specific candidate genes suggest that there may be modifications to the epigenome which establish altered patterns of gene expression [25][26][27]. However, to date, there is no convincing evidence that these epigenetic modifications are widespread, or that they modulate expression of genes of relevance to renal or cardiovascular disease. We have previously hypothesised that nutritional programming of physiological function occurs through changes in a limited number of 'gatekeeper' processes [24]. These are genes or gene pathways for which the adaptive response during development is common to all nutritional insults. Earlier work in our laboratories considered the mechanistic basis of programming in two separate nutritional programming models (iron deficiency and protein restriction), across two rat strains. By comparing whole genome array and proteomics data across both models and strains, this work identified a number of putative gatekeeper genes, proteins and processes which may offer a generalised mechanism to explain the common phenotypes exhibited by offspring of pregnant rats exposed to diverse nutritional insults [24]. The experimental approach used in this previous study discarded gene or protein expression changes which occurred in response to only one of the dietary treatments or in only one of the strains of rat, as the lack of replication across models suggested that these were not central mechanisms in the programming of a common phenotype. However, the existence of a common phenotype in response to different dietary insults is not universally accepted, and it may be the case that the common endpoints observed are actually a non-specific consequence of different programming pathways. The wealth of data produced from each model within the original gatekeeper study is worthy of more in-depth analysis for the identification of diet-specific mechanisms. The previously reported analysis was constrained by the low number of gene targets which were differentially expressed across two rat strains and two nutritional interventions.
The data analysis reported in this further paper focuses on the response of the embryonic genome and proteome to maternal iron deficiency in the Rowett Hooded Lister rat. It exploits a larger pool of differentially expressed genes for pathway analysis and increases confidence in the demonstrated role of maternal nutritional status as a regulator of the cell cycle in the developing embryo. As before, any significant changes in gene and protein expression will not automatically suggest a causal relationship with the observed phenotypic changes such as increased blood pressure, but pathway analysis may shed light on the potential mechanisms at play early in the programming process. Ethical Approval All animal experiments were performed in the BioResources Unit of the University of Nottingham, under license from the United Kingdom Home Office in accordance with the 1986 Animals (Scientific Procedures) Act. The study was approved by the UK Home Office (Project Licence PPL40/2990) and University of Nottingham Ethics Committee (approval ID SLE/005/07). Animals As previously reported [24], 16 female Rowett Hooded Lister (RHL) rats (Rowett Institute of Nutrition and Health, University of Aberdeen, UK) were subjected to a 12 h light (08:00-20:00)-dark (20:00-08:00) cycle at a temperature of 20-22 °C with ad libitum access to food and water. For 4 weeks prior to mating, half of the rats were fed a control iron (50 mg Fe/kg; FeC) diet and the other half an iron deficient diet (7.5 mg Fe/kg; FeD; [11]) to ensure depletion of iron stores during pregnancy. Iron-deficient diets were isocaloric relative to the control diet. Previous studies in our laboratories have shown that this protocol reproducibly reduces both maternal and fetal liver iron content by approximately 70% and 50% respectively ([11,28,29]; Cornock et al., unpublished observation). At a weight of approximately 180-200 g, females were mated with stud males. After conception, determined by the presence of a semen plug on the cage floor, females were single-housed and remained on their pre-pregnancy diet. During pregnancy the animals were weighed and food intake was recorded daily. This protocol has been previously demonstrated to result in elevated systolic blood pressure and lower nephron number in the adult offspring [11,24]. On day 13 of gestation, pregnant females were culled by CO2 asphyxia and cervical dislocation. Individual embryos and placentas were harvested and RNA and protein prepared from a pool of all male embryos within a litter as described previously [24]. Microarray An Affymetrix GeneTitan Rat 230 microarray was performed by Service XS. This array comprises over 31,000 probe sets, analysing over 30,000 transcripts and variants from over 28,000 well-substantiated rat genes, as well as positive and negative controls. Before the labelling process, the integrity of all RNA samples was further checked using the Agilent 2100 Bioanalyser. Output data were supplied as Affymetrix CEL files and loaded into GeneSpring (Agilent). Data were normalised to QC controls and samples assigned to the appropriate diet group. Data were loaded into MetaCore (GeneGo) to identify pathways affected by iron deficiency. All microarray data are MIAME compliant and the raw data have been deposited in ArrayExpress (accession number E-MTAB-664). Real-time PCR Real-time PCR was performed on the same embryonic RNA sample that was prepared for microarray, to further explore some of the gene expression changes observed in the microarray analysis.
RNA was reverse transcribed using Moloney murine leukemia virus (MMLV) reverse transcriptase (Promega) and real-time PCR was performed using Probe Master Mix (Roche) and TaqMan Rat Custom Expression Assays (ABI) on a LightCycler 480 (Roche) in 384-well optical reaction plates. Expression values and linearity were determined using a cDNA standard curve. Data were normalised to total cDNA levels measured by OliGreen reagent (Invitrogen) at 80 °C [30]. Proteomics Proteomic analysis was carried out by the Proteomics Section at the Rowett Institute of Nutrition and Health (University of Aberdeen). Protein samples were loaded onto sixteen 8-16% acrylamide gels to separate proteins in each embryo sample by isoelectric focussing in the first dimension (pI range 3-10) and SDS-PAGE in the second dimension. Gels were stained using the Colloidal Coomassie stain method, and imaged on a Bio-Rad GS800 scanning densitometer, followed by analysis using Progenesis SameSpots software. Each image was quality assessed before selection of a 'master gel', to which the other 15 gels were aligned. Gels were assigned to either the FeC or FeD group and any spots with differences in area and density between groups were identified by ANOVA. Spots of interest were hand-cut from the SDS-PAGE gels. In-gel digestion and trypsinisation of the cut spots was performed on a Proteome Works System, Mass PREP Station Robotic Handling System and extracted peptides were analysed on a nano LC-MS/MS system using Q-Trap. The total ion current data were searched against the MSDB database using the MASCOT search engine (Matrix Science) with the following search criteria: allowance of 0 or 1 missed cleavages; peptide mass tolerance of ±1.5 Da; fragment mass tolerance of ±1.5 Da; trypsin as digestion enzyme; carbamidomethyl fixed modification of cysteine; methionine oxidation as a variable modification; and charge states of 2+ and 3+. The highest MASCOT results were further selected for best matches with the criteria of protein identifications having a Mascot score higher than 40 (threshold) and more than one peptide match. The identification was considered only with a combination of the highest Mascot score and maximum peptide coverage. Statistical Analysis All data were analysed using the Statistical Package for Social Sciences (SPSS, Inc, Chicago, IL, Version 18.0). Differences between groups were assessed using an independent t-test, unless indicated otherwise in the text. Values are expressed as mean ± SEM. P < 0.05 was considered significant. Maternal Weight Gain and Reproductive Performance There was no difference in the body weights of dams in the iron deficient and control iron groups either at the commencement of pregnancy or at the time of embryo collection (day 13 of pregnancy; Table 1). Weight gain and average daily food intake (calculated from food remaining in the hopper) were similar between the two groups and iron intake was, as expected, 6-fold higher in the control group (P < 0.001). While the number of male embryos collected did not differ with maternal diet, those fed an iron deficient diet produced significantly fewer embryos in total (P < 0.03). Microarray In total, 2524 genes were differentially regulated between control and iron deficient embryos (P < 0.05). Of these, 979 genes were upregulated with iron deficiency and 1545 genes were downregulated.
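In practical terms, the per-gene comparison described in the statistical analysis above amounts to a fold change plus an independent t-test against the P < 0.05 threshold. Below is a minimal sketch of such a comparison for a single gene, assuming normalised expression values per pooled litter sample; the values and the group sizes are purely illustrative and are not taken from the study. The discussion of the individual genes then continues.
import numpy as np
from scipy import stats
# Hypothetical normalised expression values (arbitrary units),
# one value per pooled litter sample in each diet group (n = 8).
fec = np.array([1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.01, 0.97])  # control iron (FeC)
fed = np.array([1.31, 1.22, 1.40, 1.18, 1.27, 1.35, 1.21, 1.30])  # iron deficient (FeD)
# Fold change of the iron-deficient group relative to control.
fold_change = fed.mean() / fec.mean()
# Independent two-sample t-test, significance threshold P < 0.05.
t_stat, p_value = stats.ttest_ind(fed, fec)
print(f"fold change = {fold_change:.2f}, P = {p_value:.4f}",
      "significant" if p_value < 0.05 else "not significant")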
Despite the severity of the maternal insult, the magnitude of the expression changes was generally modest and the greatest expression changes were for sex-determining region Y-box 4 (SOX4; 2.34-fold up-regulation with iron deficiency; P < 0.04) and microtubule-associated protein 1B (Map1b; 2.21-fold up-regulation with iron deficiency; P < 0.02). The greatest downregulations in gene expression occurred with myosin VC (Myo5c; 0.63-fold change; P < 0.01) and S100 calcium-binding protein A6 (S100a6; 0.69-fold change; P < 0.01). The 20 genes exhibiting the greatest fold-changes up or down in relation to control are listed in Table 2. A full list of differentially regulated genes is provided in Table S1. Real-time PCR The expression of ten genes and the three transcription factors, identified by microarray analysis to be differentially expressed in the iron deficient embryo, was also confirmed using real-time PCR. In line with the key processes identified by the pathway analysis, these genes had a range of functions including regulation of cell growth and proliferation and cancer development (PSAT1, Hint, SDCCAG10, p53 and C-myc), cell cycle (Ube2c, Hint, GMPS and p53), apoptosis (Hint), protein synthesis and folding (SDCCAG10, TOMM34, eEF1G), cardiac muscle growth (USMG5) and developmental processes (TBX3, FGFR1). SP1 and c-myc were downregulated in embryonic tissue in response to maternal iron deficiency (Figure 2). The expression of p53 was increased by more than 5-fold in the iron deficient embryos. These results are supported by findings that iron induces the expression of c-myc [31] and SP1 [32], while inhibiting p53 expression via the induction of mdm2 [33]. The change in expression of these transcription factors would therefore have implications for the many downstream target genes identified by the pathway analysis (e.g. see above; Pole-3: Table 3). Although the fold-changes identified by microarray were generally small (though highly significant), for seven out of the ten genes we showed good agreement between array and real-time PCR in terms of statistical significance and the direction of change in expression. Only Hint, eEF1G and FGFR1 failed to show consistency between the two methods, as although significant differences in expression were noted between controls and iron deficient embryos, these genes were up-regulated by rtPCR compared to down-regulated in the arrays. Proteomics Proteomics analysis identified 7 proteins which were significantly up-regulated in FeD embryos compared to FeC, and 11 proteins which were down-regulated with iron deficiency (Table 4). Many of the identified proteins play important roles in cell proliferation (alpha-enolase, ADP-ribosylation factor-like 3, nucleophosmin, prohibitin, dihydrofolate reductase), protein transport and folding (ADP-ribosylation factor-like 3, nucleophosmin, valosin-containing protein, Chaperonin containing TCP1, subunit 5), cytoskeletal remodelling (ADP-ribosylation factor-like 3, Chaperonin containing TCP1, subunit 5) and the proteasome complex (SUG1, Proteasome subunit alpha type 3-like). Dihydrofolate reductase (microarray: 0.88-fold change, P < 0.001; proteomics: 0.8-fold change, P < 0.04) and proteasome alpha-3 (microarray: 0.92-fold change, P < 0.02; proteomics: 0.8-fold change, P < 0.01) also showed significant changes in gene expression with the microarray, in the same direction as the proteomics analysis.
In addition, Chaperonin containing TCP1, subunit 8, was also significantly down-regulated in the microarray (0.9-fold change, P < 0.004; subunit 5 protein: 0.8-fold change, P < 0.004). Alpha enolase was identified as both an up-regulated and down-regulated protein. This is probably due to the membrane translocation of the enzyme, which is allowed through post-translational modifications such as acylation or phosphorylation [34]. Discussion The aim of this study was to explore the underlying mechanistic pathways which may explain the development of metabolic disease following in utero exposure to iron deficiency in a well-established rat model. We have previously suggested that a limited number of gatekeeper processes could explain the common phenotype which manifests in offspring of both prenatal iron and protein restriction in two different strains of rat [24]. However, the current study is much more specific and aimed to establish if additional, or alternative, diet-specific pathway responses occur with iron deficiency alone in RHL rats. Such responses may be associated with the reduced nephron endowment and increase in blood pressure previously found in this cohort [24] compared to the offspring of iron-replete controls, although this study did not set out to establish definitive causality. This study was well powered to confidently identify differences in gene and protein expression between groups, and in excess of 2500 embryonic genes were found to be differentially expressed with maternal iron deficiency compared to exposure to a control diet. This offered a much larger and more varied pool of genes to work with than the original gatekeeper study, which only considered the 153 gene changes reflected by both diets, or both strains. Pathway analysis of microarray data allows this mass of data to be refined so that the specific processes affected by maternal iron deficiency can be identified. In the previous gatekeeper study [24], aspects of cell cycle regulation represented four of the seven most significant pathways. It is noteworthy that the pathway most significantly impacted by iron deficiency in RHL embryos was the initiation of mitosis. This to some extent confirms the findings of our earlier study [24]. Specific genes affected in this pathway included cdk1, cdk7, Cyclin B1, Cyclin B2 and Cyclin H. Cdk1 and cyclin B form a complex initiating the onset of mitosis, shuttling back and forth between the nucleus and cytoplasm. Their gene expression, the initiation of mitosis and nucleocytoplasmic transport of cdk and cyclins were significantly affected by iron deficiency. Phosphorylation of the BH3-only protein BAD was the second most impacted process with iron deficiency. This is important during development when neuron numbers are controlled to ensure the correct architecture of the nervous system (Konishi et al., 2002). Iron deficiency specifically affected the WNT signalling pathway's role in development. During embryogenesis, WNT proteins are involved in regulation of cell fate and patterning. The two genes demonstrating the greatest up-regulation with iron deficiency were SOX4 and Map1b. SOX4 is a transcription factor involved in the regulation of embryonic development and determination of cell fate. The protein may function in the apoptosis pathway leading to cell death and tumorigenesis. Map1b is involved in microtubule assembly, and plays an important role in development and function of the nervous system [35]. The two most down-regulated genes with iron deficiency were Myo5c and S100a6.
Myo5c is involved in transferrin trafficking, and therefore plays an essential role in iron uptake and the regulation of cell proliferation. It is also likely to power actin-based membrane trafficking in a number of tissues. S100 proteins are involved in the regulation of cellular processes such as cell cycle progression and differentiation. S100a6 may indirectly play a role in the reorganization of the actin cytoskeleton and in cell motility [36]. Table 3. Microarray and real-time PCR determination of gene targets in whole embryonic tissue exposed to a maternal iron deficient diet, relative to a control iron diet (FC = fold change); n = 8. Table 4. Proteins identified to be significantly differentially regulated by mass spectrometry following 2D gel electrophoresis. It was not surprising that a number of other iron metabolism genes were also down-regulated, to a lesser extent (Table S1; e.g. pirin, calreticulin, ferric-chelate reductase). Maternal iron deficiency may be expected to have a major impact upon all iron-regulated pathways in the embryo. However, we observed no change in expression of the main iron storage and transport proteins (transferrin, ferritin, transferrin receptor, hepcidin). This may indicate that the programming effects of maternal iron deficiency may not be solely or simply mediated by a gross reduction in iron supply to the developing embryo or fetus. Other mechanisms such as endocrine imbalance across the placenta and resetting of epigenetic marks are known to be involved in programming responses to undernutrition [37,38]. The iron requirements of the day 13 embryo are likely to be small in comparison to those of the mother and may be largely met, even in the face of maternal deficiency, which may explain why no change in expression of the transport proteins was observed. Previous work by the authors determined that during pregnancy the maintenance of iron stores is prioritised towards fetal needs at the expense of the mother. It was demonstrated that despite a significant decrease in maternal liver iron content from day 0 of pregnancy in iron-deficient dams, hematocrit (Hct) levels were maintained throughout the first half of pregnancy, falling by day 21 [39]. Fetal liver iron and Hct levels measured at day 21.5 mirrored maternal concentrations at the same point of gestation, i.e. they were decreased with iron deficiency. At day 21.5 of gestation, placental and maternal liver transferrin receptor (TfR) expression was increased with iron deficiency. An elevation in placental TfR was also found in iron-deficient day 20 placenta (FC: 3.21, P < 0.03; unpublished data). Fetal liver TfR expression was unchanged by maternal iron deficiency throughout pregnancy [39], while TfR2 expression was decreased. In this study, day 13 embryos were too early in development to measure equivalent changes. Fetal transferrin expression is low in early-mid gestation and does not begin to increase until around day 18 [40]. The proteomics analysis identified only a limited number of proteins which were differentially expressed with prenatal iron restriction. This is partially due to sensitivity and methodology issues, as the separation of the proteins is limited by the pI range and the gel size. It may also be due to post-translational modifications allowing a range of spots for a protein, therefore diluting the potentially significant changes in expression.
Despite these limitations, there was a remarkable similarity between the microarray and proteomic results in terms of the processes and pathways affected. These proteins could be broadly categorised by function, including cytoskeletal remodelling, cell proliferation and the proteasome complex, which were also identified by the earlier gatekeeper protein analysis [24]. However, the actual proteins showing significantly different expression varied between the two studies. For example, actin-related protein 3 and tubulin alpha-1 chain were key gatekeeper proteins associated with cytoskeletal functions, whereas ADP-ribosylation factor-like 3, dihydropyrimidinase-related protein 2 and chaperonin containing TCP1 were identified in this role with iron deficiency. Only SUG1, a subunit of the proteasome complex, featured as a significantly affected protein in both studies. This emphasises the importance of the complex, which is related to metabolic regulation and cell cycle progression, in nutritional studies. A small number of proteins were also significantly differentially regulated in the same direction at the gene level as shown by the microarray. Further processes which the proteomics highlighted in this study were protein folding, unfolding and transport. Failure of normal folding, accumulation of denatured proteins or failure of the proteolytic machinery of a cell can lead to a build-up of potentially damaging polypeptides which could cause cellular dysfunction or trigger apoptosis. The proteasome complex and molecular chaperones function together as a quality-control system to selectively eliminate abnormal proteins. A number of chaperone proteins that were down-regulated following exposure to a prenatal FeD diet are involved in folding of cytoskeletal components (chaperonin) and other targets of interest such as p53 (nucleophosmin). As this study used whole embryos, the gene and protein changes noted are expressed within a heterogeneous cell population. Therefore, a limitation of this study is that it cannot be concluded that impacted processes are related specifically to the development of any specific organs, systems or tissues. Further work will be needed to isolate the location of the key genes and proteins affected by iron deficiency in tissues of interest, such as the kidney. The dilution effect conferred by using whole embryos may be the reason for the modest fold-change values found in this study, and for generalised processes such as cell division being the most highly impacted by the dietary insult, rather than tissue-specific effects. As considered in our previous study [24], criteria concerning the validation of microarray techniques are still being addressed by investigators in the field [41,42]. However, we are satisfied that this study was adequately powered and controlled to reveal only robust results. Importantly, these findings corroborate earlier observations as well as changes in protein expression in the current study. Iron deficiency is the greatest micronutrient deficiency among humans, impacting on pregnant women in both developed and developing countries. Iron is essential for a variety of metabolic processes, and in the embryo clearly plays a key role in cellular proliferation and regulating cell-cycle proteins. This thorough study has shown that maternal iron deficiency impacts significantly on genes and proteins which regulate cell proliferation, differentiation and apoptosis in the embryo.
These findings may provide important indicators of the primary mechanisms which link fetal exposure to maternal undernutrition to the development of cardiovascular, renal and metabolic disorders later in life. Supporting Information Table S1 Summary of genes showing statistically significant changes in expression in response to the iron deficient diet. (XLSX)
2017-06-17T02:48:29.981Z
2012-10-26T00:00:00.000
{ "year": 2012, "sha1": "2a828e88191f1be8ffbed21a7ca29b3b47d730ed", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0048133&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2a828e88191f1be8ffbed21a7ca29b3b47d730ed", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225455242
pes2o/s2orc
v3-fos-license
LONG-TERM VISUAL LOCALIZATION IN LARGE SCALE URBAN ENVIRONMENTS EXPLOITING STREET LEVEL IMAGERY: In this paper, we present our approach for robust long-term visual localization in large-scale urban environments exploiting street level imagery. Our approach consists of a 2D-image based localization using image retrieval (NetVLAD) to select reference images. This is followed by a 3D-structure based localization with a robust image matcher (DenseSfM) for accurate pose estimation. This visual localization approach is evaluated by means of the 'Sun' subset of the RobotCar Seasons dataset, which is part of the Visual Localization benchmark. As the results on the RobotCar benchmark dataset are nearly on par with the top-ranked approaches, we focused our investigations on reproducibility and performance with our own data. For this purpose, we created a dataset with street-level imagery. In order to have independent reference and query images, we used a road-based and a tram-based mapping campaign with a time difference of four years. The approximately 90% successfully oriented images of both datasets are a good indicator for the robustness of our approach. With a success rate of about 50%, every second image could be localized with a position accuracy better than 0.25 m and a rotation accuracy better than 2°. INTRODUCTION Modern vehicle-based and portable mobile mapping systems with multi-camera sensor systems combined with state-of-the-art georeferencing techniques enable a large-scale acquisition of accurate street level imagery. The resulting georeferenced collections of indoor or street level imagery covering large building complexes, entire cities or even states provide a powerful basis for urban infrastructure management. They furthermore bear a great potential for accurate visual localization and 6DOF pose estimation, even in areas with no or only poor GNSS coverage. Such a universally applicable visual localization would, for example, enable highly accurate Augmented Reality (AR) applications with robust absolute (re-)localization in large-scale indoor and outdoor environments without a need for an additional positioning infrastructure, such as GNSS or WiFi. Furthermore, visual localization could be used to significantly improve existing inaccurate positioning in street canyons. In our previous work, we discussed the concept and exploitation of 3D image spaces (Nebiker et al., 2015), consisting of collections of accurately georeferenced RGB-D images. Capturing such 3D image spaces requires high-quality mobile mapping systems and advanced georeferencing techniques. When it comes to keeping the data up to date, the cost of exclusively using high-quality capturing systems would be enormous. Hence, there should be a solution for integrating images captured by non-geospatial experts with consumer devices such as smartphones. However, these consumer devices do not contain precise positioning sensors. This so far limited georeferencing accuracies to a few meters in outdoor environments and even prevented reliable indoor positioning. Visual localization using existing 3D image spaces as a reference not only promises to address the task of sensor positioning but also the task of determining the sensor pose. Only if both tasks can be solved reliably and accurately can the new imagery be integrated into the existing database and used for measurement and asset management tasks.
In addition to database updating and asset management, there is a high demand for real-time device pose estimation for augmented reality applications, where 3D image spaces have a great potential for serving as reference data. First investigations of visual localization using 3D image spaces showed that large temporal differences and the associated changes in scene content and appearance are one of the main challenges in long-term visual localization (Rettenmund et al., 2018). In this paper, we investigate and demonstrate the capabilities of state-of-the-art visual localization methods in large-scale urban environments. For this, we first introduce our processing pipeline, which is built on top of our highly scalable street level imagery database. We then introduce our long-term visual localization approach emphasising robust and accurate long-term matching. We subsequently evaluate our approach, first using the 'RobotCar Seasons' dataset of the long-term visual localization benchmark and second using our own Basel Bench50 dataset. We finally discuss the results, which demonstrate the capability of our visual localization approach to reliably and accurately determine 6DOF image poses in urban spaces. RELATED WORK Pushed forward by innovations such as augmented reality and autonomous driving, the field of visual localization is in rapid evolution. To allow a comparison of the results of different visual localization approaches, Sattler et al. (2018) created a benchmark with several datasets, each with some distinct characteristics. They also give an overview of the various strategies for visual localization. Coarsely, they classify the approaches into 3D-structure based, 2D-image based, sequence based and learning based visual localization. In 3D-structure based localization, there is a three-dimensional representation of the environment, such as a point cloud or a 3D model. By searching corresponding points in the structure and the image, basic geometric principles can be applied to calculate the image pose. However, the big challenge of this approach is to determine matching points over long time periods and in varying conditions. For speeding up the process, there are several methods that prioritize points close to a reliable match (Sattler et al., 2012) or augment feature points with additional visibility information (Svärm et al., 2017). However, all these approaches rely heavily on the existence of a sufficient number of matching points. When using 2D-image based methods, the goal is to determine the most similar image from the collection of reference images. Because two similar images have most likely been captured from nearly the same location, the pose of the retrieved image is used as the result. There are methods that use hand-crafted features for describing the image's contents, such as DenseVLAD (Torii et al., 2015), while others employ neural networks, such as NetVLAD (Arandjelovic et al., 2016). The drawback of this method is the requirement of a large number of reference images with different viewpoints to reach good results. To reduce the false positive rates of single image localization approaches, Sattler et al. (2018) propose the use of multiple images in the form of a sequence in the correct order. To estimate the relative poses of the images, visual odometry or visual SLAM algorithms can be applied.
Current visual odometry and visual SLAM algorithms use feature-based methods such as ORB-SLAM (Mur-Artal et al., 2015) and ORB-SLAM2 (Mur-Artal, Tardós, 2017) or direct methods such as LSD-SLAM (Engel et al., 2015) and DSO (Wang et al., 2017). Known relative poses allow modelling the cameras of the image sequences as a generalized camera (Pless, 2003), i.e. as a camera with multiple centres of projection. The absolute pose from 2D-3D matches can be estimated by using approaches for multi-camera systems (Lee et al., 2015) and camera trajectories (Camposeco et al., 2016). Learning based localization methods were first introduced by Kendall et al. (2015). The main idea is to train neural networks so that they directly regress the pose of an image. While this looks promising in some small-scale experiments, this approach is hard to scale to real-world problems. Mueller et al. (2018) showed that it is possible to improve the performance by integrating synthetically generated views of the test site in the training dataset. However, this requires a quite detailed 3D model for rendering these views. Furthermore, Sattler et al. (2019) point out that the pose regression of these networks is very similar to image retrieval followed by applying a slight pose offset. In recent publications, those getting the best results combine the strengths of the different approaches, e.g. Sarlin et al. (2019). Instead of using neural networks as a "magic black-box", they are used only for those parts of the localization pipeline where they actually generate some benefits. Thus, network architectures such as L2-Net (Tian et al., 2017; Tian et al., 2019), D2-Net (Dusmanu et al., 2019) or SuperPoint (DeTone et al., 2018) are used to generate point descriptors, which help to generate better matches for use in structure-based image orientation tools such as COLMAP (Schönberger, Frahm, 2016). Widya et al. (2018) skip the step of keypoint detection by using an intermediate layer of a convolutional neural network as a feature map. This generates a regular grid of feature vectors. By using image retrieval, the number of image pairs for matching can be reduced. This helps to minimize the computational costs. Overview The reference images of previous mobile mapping campaigns are stored in a large-scale cloud-based architecture and are accessible through an application programming interface (API), which serves the image metadata. By querying the database with the approximate position from the navigation sensors, we get the spatially nearest neighbours. For each image, the pose, a URL to download the actual image and the camera's intrinsics are returned. If the raw orientation of the image is known, the resulting list of images can be filtered further by removing images whose projection centres are near the assumed position but point in the opposite direction (Figure 1). Because image matching is the part of the processing workflow that consumes the most time and resources, we search for the reference images that are most similar to the query image. NetVLAD (Arandjelovic et al., 2016) uses a neural network with the VGG-16 architecture (Simonyan, Zisserman, 2015) to create a global descriptor for each image. Comparing these descriptor vectors is much faster than matching all feature vectors for all keypoints in an image. Once we have identified the reference images that are most similar to the query image, we can perform feature matching on a much smaller number of image pairs (Figure 1).
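A minimal sketch of this retrieval step is given below, assuming the NetVLAD global descriptors have already been computed and L2-normalised; the array shapes and the value of k are illustrative assumptions, not values prescribed by the pipeline.
import numpy as np
def top_k_similar(query_desc, ref_descs, k=20):
    # query_desc: (D,) global descriptor of the query image
    # ref_descs: (N, D) global descriptors of the candidate reference images
    # With L2-normalised descriptors, the dot product equals the cosine similarity.
    sims = ref_descs @ query_desc
    order = np.argsort(-sims)[:k]
    return order, sims[order]
# Only the k most similar reference images are passed on to feature matching,
# which keeps the expensive matching stage small.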
To get better geometric conditions, it is important to have the keypoints evenly distributed over the whole image. The DenseSfM approach by Widya et al. (2018) achieves this by using an intermediate feature map of the VGG-16 network as descriptors. Hence, there is a regular grid of feature vectors that spans the whole image. To increase the accuracy, the features get relocalized into the full-size image by searching for the pixels that had the biggest influence on the respective descriptor. For the actual image orientation process, we use COLMAP v3.6 (Schönberger, Frahm, 2016), which allows us to use the known exterior orientation parameters of the reference images. Hence, we get a COLMAP model where only the query image needs to be aligned with respect to the fixed reference images. First, we generate a COLMAP model with the processed features and matches as well as the camera specifications and the exterior orientation parameters (EOP) of the reference images. We then use the functions of COLMAP to reconstruct the 3D coordinates of the feature matches by fixing the known EOPs of the reference images and to register the query image via the matched feature points (Figure 1). Robust Long-Term Matching While trying to achieve better results using the traditional SIFT features (Lowe, 2004), we reached a limit of handcrafted feature descriptors for long-term matching. While these features are designed to be invariant to slight changes in illumination and orientation, they fail to robustly establish long-term correspondences. Especially the changing appearance of the environment caused by seasonal changes such as snow-covered or wet roads as well as shadows or strong sunlight leads to serious problems (Figure 2). By matching the same images multiple times with different parametrizations of the matcher, we showed that relaxing the restrictions for outlier filtering results in more matches. However, the additional matches have a high probability of being eliminated during the bundle adjustment, which in other words just means they are outliers that got rejected for some reason. To overcome these limitations, feature descriptors should incorporate some semantic information. When a neural network is applied to create image features, semantics get somewhat implicitly integrated into the descriptor vector. Thus, we compared different types of trained descriptors. Among LF-Net (Ono et al., 2018), SuperPoint (DeTone et al., 2018), DenseSfM (Widya et al., 2018) and SOSNet (Tian et al., 2019), DenseSfM achieved the best results on our dataset. ROBOTCAR BENCHMARK To test the performance of our approach, we chose to process a dataset of the long-term visual localization benchmark by Sattler et al. (2018). This benchmark provides various image collections along with the corresponding orientation values, which can be used as reference data, as well as images for which no further data is provided, which are used as query images. Of the datasets provided, 'RobotCar Seasons' is the one most similar to our imagery. It consists of image sequences captured during the Oxford RobotCar experiment by Maddern et al. (2017) and has similar characteristics to street level imagery acquired by mobile mapping systems. Evaluation Strategy The long-term visual localization benchmark uses a joint evaluation of position and rotation. The calculated poses are compared to the ground truth poses as follows: For the positions, the Euclidean distance is used.
And for the rotations, the minimal angle required to align the two rotations is computed. The formulae used for the calculation can be found in Sattler et al. (2018). Reference Data The Oxford RobotCar platform was used to capture a long-term dataset for autonomous driving use cases. The RobotCar is equipped with different cameras, 2D and 3D LiDAR, as well as an inertial and GNSS navigation system (Maddern et al., 2017). The imagery selected for the RobotCar Seasons benchmark dataset was captured at intervals of one meter using three synchronized global shutter Point Grey Grasshopper 2 cameras. The intrinsics of the cameras as well as their relative poses are known (Sattler et al., 2018). The cameras with a resolution of 1024 x 1024 pixels (1 MP) were mounted to the left, rear and right of the car. A more detailed description of the configuration and specification of the cameras can be found in Maddern et al. (2017). Test Site As the whole RobotCar Seasons dataset contains 11'934 query images, we decided to process only one traversal. As the traversals cover various weather conditions, we chose the 'sun' traversal, which is most similar to typical street level imagery and to the images we intend to process in this pipeline. The 'sun' subset of RobotCar Seasons consists of 1380 query images. Collection of Approximate Values In our use case we always have at least very coarse approximate values for the image poses, e.g. from the last GNSS position fix or from WiFi or cellular network IDs. Therefore, we first had to derive initial values from the available original RobotCar data (Maddern et al., 2017). For the calculation of the query image poses in local COLMAP model coordinates, we only used the rear camera, because it has the smallest offset to the inertial and GNSS navigation system. We purposely did not aim at exact approximate values for our use case. For this reason, the offsets of the lever arms and relative orientations of the cameras were not considered. The datasets of the different sensors of the RobotCar are not time-synchronized. Therefore, for each image pose of the rear camera the GPS position with the smallest time difference was identified and assigned to the image. A 2D similarity transformation was then calculated between the corresponding images in the global coordinate system and the COLMAP model coordinates. The transformation parameters were applied to the RobotCar positions of the query images in order to obtain the approximate image poses in local COLMAP model coordinates. Results Our results for the investigated 'Sun' subset of the RobotCar Seasons dataset are shown in Table 2. With 89.3% in the Coarse accuracy class (position error < 5 m and orientation error < 10°), 70.2% in the Medium class (< 0.5 m and < 5°), and 47.0% in the High class (< 0.25 m and < 2°), our results are on a competitive level with those of HF-Net (Sarlin et al., 2019), the leading approach at the time of writing. Our results are significantly superior to exclusively neural network based global descriptors, such as NetVLAD (Arandjelovic et al., 2016), which is itself part of our processing pipeline. The nearly 90% of successful image localizations in the Coarse class and only 10% 'failed' localizations are a good indicator for the robustness of our approach. Improving the results in the Medium and High class proved to be a challenging undertaking, requiring careful attention to calibration parameters and error-free source code.
However, with a 47% success rate in the High class, nearly every second single image is localized with a position accuracy better than 0.25 m and a rotation accuracy of better than 2°. BASEL DATASET As our RobotCar benchmark results are nearly on par with the top-ranked approaches, we further investigated the reproducibility and performance of our approach with our own street level imagery data. Test Site and Data 5.1.1 Acquisition System: The street level data used in the subsequent investigations was captured using our vehicle-based multi-view stereovision mobile mapping system, which was presented in several of our previous publications, including Cavegn et al. (2018). Depending on the system setup, there are several stereo camera systems, a panoramic camera and a GNSS/INS positioning system. All sensors are mounted on a rigid frame that guarantees a stable relative orientation of all stereo systems and the positioning system. With its included positioning sensors, this system delivers the pose of the images by means of direct georeferencing. This 'standard' georeferencing can be improved by post-processing the trajectory and including ground control points. An additional improvement can be achieved by image-based georeferencing using bundle adjustment (Cavegn et al., 2016). We treated the image poses from advanced direct georeferencing as known reference values when visually localizing single images of the sequences. Used Datasets: We subsequently used two series of mobile mapping imagery that had been captured in the city of Basel (Switzerland) in two independent campaigns, which were four years apart: a) a road-based mapping campaign in Summer 2014 using a car-based mobile mapping system and b) a rail-based mapping campaign in Summer 2018, where the system had been mounted on a tramway. As can be seen in Figure 3, the Basel dataset covers a dense urban area with multi-storey buildings, narrow streets, partly dense vegetation, overpasses etc., which makes accurate georeferencing a challenge. The two campaigns were georeferenced using state-of-the-art GNSS-based direct sensor orientation. No additional image-based co-registration between the image sequences of the two mapping campaigns was applied. As shown by Cavegn et al. (2018), we can thus expect the trajectory and subsequent absolute pose accuracy of our reference data sets to be in the order of a few decimetres. In order to have independent reference and query image datasets, we chose the street level imagery from the rail-based campaign b) as reference data set and the imagery from the road-based campaign a) as query dataset. Query images were selected by using spatial operations to filter road segments that are situated next to tram lines or even have tram lines included in the lanes (e.g. see Figure 3, top right, bottom left and bottom right). Then we randomly selected images of the image sequences on these street segments. With a visual verification, we removed the images that are impossible to localize. Reasons for this could be that the image shows only a single wall without any characteristic features, or that other vehicles block the view of the environment. We selected 50 query images for our Basel Bench50 dataset (Figure 3). The geometric resolution of the images depends on the sensor. The reference images have resolutions between 5 and 12 MP, while all query images have a resolution of 2 MP.
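The Basel Bench50 evaluation below uses the same pose-error measures introduced for the RobotCar Seasons dataset: the Euclidean distance between the estimated and reference positions, and the minimal angle required to align the two rotations, binned into the Coarse/Medium/High precision classes. A minimal sketch of this computation, assuming each pose is given as a 3-vector position plus a 3x3 rotation matrix (the function names are illustrative assumptions, not part of the published pipeline):
import numpy as np
def pose_errors(t_est, R_est, t_gt, R_gt):
    # Position error: Euclidean distance between the two camera positions.
    dt = float(np.linalg.norm(t_est - t_gt))
    # Rotation error: angle of the relative rotation R_est * R_gt^T,
    # recovered from its trace and clipped for numerical safety.
    cos_angle = np.clip((np.trace(R_est @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)
    dR = float(np.degrees(np.arccos(cos_angle)))
    return dt, dR
def accuracy_class(dt, dR):
    # Thresholds as quoted in the text (position in meters, rotation in degrees).
    if dt < 0.25 and dR < 2.0:
        return "High"
    if dt < 0.5 and dR < 5.0:
        return "Medium"
    if dt < 5.0 and dR < 10.0:
        return "Coarse"
    return "failed"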
Unlike in the RobotCar benchmark, the reference poses of the Basel Bench50 are known to the authors, which subsequently enables a more sophisticated and detailed evaluation. Evaluation In order to evaluate the results of the investigations on our own data, we used the same precision classes as for the RobotCar dataset (Table 1). We computed the Euclidean distance for the positions and the minimal angle for the rotations between the processed and the ground-truth poses. In addition, we can examine our results for systematic deviations based on the known image poses. Therefore, we generated scatter plots where the positional differences were plotted against the rotation differences. Results The results of the Basel Bench50 test are shown in Figure 3. With 92% of the localized images in the Coarse accuracy class (position error < 5 m and orientation error < 10°) and 56% in the High class (< 0.25 m and < 2°), the results on our own data outperform the results obtained on the RobotCar Seasons dataset. The Medium class (< 0.5 m and < 5°), with 60% of oriented images, shows a drop of 10% compared to the results on the RobotCar Seasons dataset. The better results on our own data in the classes High and Coarse could be due to better image quality, better geometric resolution, and the known camera intrinsics and calibration parameters. However, it should be noted that all images that were verified as 'impossible' to localize were previously removed from the dataset. There is no indicator of systematic deviations to explain the decrease in the Medium class (Figure 4). The most obvious reason is that our dataset contains local differences between the two campaigns (road-based and rail-based) because no co-registration of the two campaigns had been done. An indication of this is the accumulation of image poses with positional differences of about 0.8 m and rotational differences of about 0.5° (Figure 4). CONCLUSION In this paper we presented our approach for long-term visual localization in urban environments. Our approach combines a 2D-image based and a 3D-structure based visual localization strategy. We first use image retrieval (NetVLAD) to find the most similar images. Then we extract and match densely distributed features by DenseSfM. We subsequently use the SfM software COLMAP to reconstruct a sparse point cloud of the matched features of the reference images and register the query image to these feature points. We subsequently evaluated our visual localization approach using two test data sets. On the RobotCar Seasons dataset of the long-term visual localization benchmark, we achieved results that are nearly on par with the top-ranked methods, with 89.3% in the Coarse class (position error < 5 m and rotation error < 10°), 70.2% in the Medium class (< 0.5 m and < 5°) and 47.0% in the High class (< 0.25 m and < 2°). On our own street level data, we were able to outperform the results from the RobotCar dataset in the precision classes High and Coarse by 10% and 3% respectively. The success rate for the Medium class was around 10% lower than on the RobotCar dataset. The 92% of successful image localizations in the Coarse class and only 8% 'failed' localizations are a good indicator for the robustness of our approach. Improving the results in the Medium and High class proved to be a challenging undertaking, requiring careful attention to calibration parameters, consistent and accurate reference data, as well as error-free source code.
With a 56% success rate in the High class, more than every second street-level image is localized with a position accuracy better than 0.25 m and a rotation accuracy of better than 2°. These results demonstrate the enormous potential of long-term visual localization in combination with accurately georeferenced street level imagery. In this combination, visual localization not only provides accurate and ubiquitous positioning but also a powerful 6DOF pose determination method. This could make visual localization an ideal absolute positioning backend for future Augmented Reality applications, in outdoor and indoor environments alike. In order to make such a ubiquitous and instant 6DOF positioning service a reality, our main current limitation, the very high computational cost and required time for processing an accurate image pose, needs to be overcome. OUTLOOK In our future work we will test our visual localization workflow with a large (500 images), representative and co-registered dataset without previously removing images classified as 'impossible' to localize. This should show the full accuracy potential of our approach. We will also address the current processing power and time requirements of our feature extraction and matching approach. For this, we will be investigating other feature descriptors with the goal of enabling real-time applications in the longer run. With regard to the use of our approach in very challenging environments, such as railway tracks, the robustness has to be increased even further. For this purpose, we intend to use sequential information from consecutive images. We expect that the use of image sequences will lead to significantly more robust visual localization results than from single images only.
2020-08-06T09:08:17.839Z
2020-08-03T00:00:00.000
{ "year": 2020, "sha1": "c9b98fed0f6dcf0a4fc60a644418826fcd9cb44a", "oa_license": "CCBY", "oa_url": "https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/V-2-2020/57/2020/isprs-annals-V-2-2020-57-2020.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ef0cdfd0c5a62bc07c87805a8a494f86286e3e78", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
266292902
pes2o/s2orc
v3-fos-license
A Secure Framework for Communication and Data Processing in Web Applications: Web applications are widely used, and the applications deployed on the web do not always satisfy all the security policies. This may arise due to less secure configurations, insufficient knowledge of security configuration, or insecure coding practices. Even though many secure practices are available, many security loopholes remain for hackers to steal information. A secure web application framework is discussed here which incorporates solutions to major security loopholes that attackers may use for stealing information or compromising systems. The security framework proposed here ensures encrypted data transfer, making the data safe, and provides server-side vulnerability detection and avoidance for major attacks like SQL injection (SQLi) and Cross Site Scripting (XSS). The client side of the framework is responsible for validations, encryption, and session management through a JavaScript module. The server side of the framework is responsible for decryption and validation, data management, and URL management. The framework deployed with PHP showed a good outcome when tested with the Arachni web application security scanner. The framework will be further studied for performance with huge workloads. Further, the work will be extended to cover other attacks. Introduction Nowadays, web applications are used in almost all fields like education, banking, finance, advertising, and many more. Since the usage of Internet applications has increased, the amount of sensitive data communicated over the Internet has also increased. This has attracted a large number of hackers seeking financial benefit. The Internet has also become a place where a lot of web services and IoT devices communicate with each other with networking capability. Two main types of attacks mostly happen in web applications and services: injecting malicious code and logic vulnerabilities, as stated in [1]. Injection attacks happen by adding or injecting malicious code as part of a query, script, or command. This allows hackers to steal information or even change the behavior of the application. Logic attacks happen when there is a loophole that allows the session to be taken over or information to be retrieved without proper authentication management, or when there is open data in the HTTP request that can affect the entire logic of the system. Even though there are a lot of measures taken for securing the web, many developers and deployers take security lightly. A lot of sites are even running without an SSL certificate. This helps hackers to easily steal information from web applications. There have been many contributions in this area. This work first discusses the major contributions towards securing web applications, then the recent major attacks happening on web applications and the proposed secure web framework for application development. Literature Review Ref.
[2] developed micro-level dedicated web security services to handle vulnerabilities in the cloud platform. This work mainly focused on general data protection regulations that are being put forward by different countries. Getting proper agreement for data processing, privacy of collected data, and safe handling of information during communication are some of the goals of these services. These microservices extract the keywords from the request and identify the security level that needs to be applied to the services. Governance over the security is done based on the security requirements of the users. Ref. [3] worked on XSS attacks on web applications. These are malicious codes that are injected into web applications. The study focused on XSS vulnerabilities in PHP source codes. This followed static and dynamic algorithms to identify XSS attacks. This work suggests an automated approach to use static as well as dynamic approaches like genetic, particle swarm, and ant colony algorithms to find the vulnerabilities in the code. Ref. [4] proposed a method to evaluate the security and privacy of top cloud providers. This considered major security parameters like confidentiality, availability, privacy, and accountability for evaluating security weightage. Weightage values were assigned from lowest to highest under seven categories. Based on the weightage values, overall security values were calculated and ranked. This uses service level agreements (SLA) for defining and identifying complex security requirements. Ref. [5] created a benchmarking system for service providers to check and make informed decisions on the security loopholes in the system. This not only checks the security levels of the system, but also compares them with similar web services and provides a detailed view of the other factors that may affect the security of the system. This works in two phases: in the first phase, the framework checks whether the target service is well qualified for security deployment. In the second phase, the unsecure behavior of the service is analyzed, and the cause of such vulnerability is detected. The framework was tested with Denial-of-Service attacks in different frameworks and found to perform well. Ref. [6] designed a prototype called Det-Logic for identifying logical loopholes in web applications. This has three phases. In the first phase, the specification is extracted; in the second phase, an attack vector is generated based on the information from the first phase; and in the third phase, the responses are compared for any vulnerabilities and reporting is made accordingly. Three logical attacks are addressed here. The first is parameter manipulation, where the attacker changes some parameters of the website which may change the whole logic of the system. The second is access control, in which the attackers may access some resources due to a less robust authentication control mechanism. The third is workflow bypass, where a hacker can bypass some of the operations and move to the next operations, inserting malicious code in between. The Det-Logic design can identify logical vulnerabilities in the application. Ref. [7] presented a security auditing approach for common injection attacks like XSS, SQL, XML, and XPath injection. The method extracts a small slice of code which is more appropriate for security audits. The tool was developed for Java-based applications, but it can be further applied to other languages. Ref.
Ref. [8] developed a framework that automatically verifies the security policies governing communication between web services. It creates a set of web service communication processes based on integration standards. These multi-party interactions are developed based on component models where integration and verification are done at the early stages. This ensures an end-to-end secure information flow.

Ref. [9] developed a framework that can detect attacks on web servers based on a knowledge base and the behavior of the attacker, using an artificially intelligent system. It has a training phase where the system learns from the users about the possible attacks and stores this information in a knowledge base. Later, a comparison with the similarity index of the knowledge base is performed, and access to the systems is provided. Similarly, ref. [10] developed a framework called AppMine, which also works on a knowledge base of the users' requests. It uses a machine learning framework with different machine learning algorithms to train the knowledge base.

The Common Attacks on Web Applications
The Open Web Application Security Project (OWASP) [11] is a non-profit organization working on the security of applications and how well they can be secured. It publishes current threats, tools, and mechanisms to make applications secure. As per the recent reports of OWASP, injection, broken authentication, sensitive data exposure, XXE, broken access control, security misconfiguration, cross site scripting (XSS), insecure deserialization, using components with known vulnerabilities, and insufficient logging and monitoring are at the top of the list. This work mainly focuses on injection, broken authentication, broken access control, and XSS. The other attacks will be studied in the future.

Threat Modelling
Application Overview: The application is intended to provide secure data communication and processing from the client end to the server end. It is developed for maintaining sensitive information at an institutional level. The business logic and data access logic lie on the server end. Users can use the service through any web browser. The actors in the application are regular users consuming the services and administrators who access the services through the web platform.

The entry point to the application is restricted to port 443 (SSL). All other ports are disabled for security reasons. All access is done through valid user credentials, which are validated on both the client and server side. The exit point involves killing the sessions and cookies through logout or a timer based on inactivity.

Threats: This framework takes care of the following attacks. SQL injection attacks: exposure to executing commands in the database, thereby getting access to, modifying, or deleting data. Broken authentication attacks: attackers may gain access by compromising a username/password, session keys, or user identity. Broken access control: allows unauthorized users to access restricted file contents or database content. Cross site scripting: injects script code into the system, allowing the attacker to track or gain access to the entire system.
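The exit-point behaviour described above, killing sessions and cookies on logout or after a period of inactivity, is not shown as code in the paper. A minimal server-side sketch of such an idle timer in PHP could look as follows; the 15-minute timeout, the last_activity key, and the login page path are illustrative assumptions rather than values taken from the published framework.

// Destroy the session and expire its cookie after a period of inactivity.
session_start();
$timeout = 15 * 60; // assumed idle limit in seconds
if (isset($_SESSION['last_activity']) && (time() - $_SESSION['last_activity']) > $timeout) {
    $_SESSION = [];
    if (ini_get('session.use_cookies')) {
        $p = session_get_cookie_params();
        // Invalidate the session cookie in the browser as well.
        setcookie(session_name(), '', time() - 42000,
                  $p['path'], $p['domain'], $p['secure'], $p['httponly']);
    }
    session_destroy();
    header('Location: /login.php'); // assumed login page
    exit;
}
$_SESSION['last_activity'] = time();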
Injection Attacks
This is a kind of attack where the attacker can inject a piece of code that may change the usual behavior of the application or even steal information from the application. The most common of these is the SQL injection attack, where injecting a piece of SQL code into the inputs of the application can break the SQL query and fetch information. One example of such an attack is the following:

select * from user_details where user_id='some user' and password='password'

An input to the application's username field of " ' or 1=1;-- " may break this SQL query by returning all records from the table due to the 1=1 condition in the input field, as per the query below:

select * from user_details where user_id='' or 1=1;

This usually happens when data is provided in the SQL statement itself by the application. The most common solution to this is separating data from the application logic and putting restrictions on the data. SQL injection attacks are prevented by eliminating the characters that can break the PHP code/SQL code. This is an additional task along with common techniques such as prepared statements, standard stored procedures, and input validations.

Broken Authentication
This happens when an attacker gains control over the system partially or fully. These attacks are very common in web applications due to improper session or credential management in the applications. Usually, the session ID is available in a cookie file in the browser, or can be attached to the URL. In both cases, if a session is left open and someone gains access to the system, they can hijack the session and use it. A case study based on this is session ID hijacking. In this case, the hacker may steal the user's valid session ID through the URL/browser when the user is inactive. The hacker may then copy the session ID and use it as if they were the same user as the original one.

Broken Access Control
There must be a proper access control mechanism to access any resource from the server. Many attacks happen nowadays due to broken access control. Some of these provide access to documents even after the session is logged out. This happens due to improper session management. Some resources may be available without any authentication mechanism. This helps hackers to access the data in the system or even, sometimes, to inject malicious code that can change the behavior of the system. One example of this attack is when a hacker passes some additional parameters in the URL or HTTP parameters that allow the user to access restricted content on a page with a restricted user login. They may even get access to the file system.

Cross Site Scripting (XSS)
This is also a kind of injection attack, where the attacker adds scripts to the browser. When this information is sent to the server, it can steal server information or provide hacker-fed information for processing. It can do even more on the client side, where it can take any information from the DOM or from the session variables.
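Returning to the prepared-statement mitigation named in the injection section above, the vulnerable user_details query could be parameterized with PDO roughly as shown below. This is a minimal sketch rather than the framework's actual code; the connection settings and the use of $_POST field names are assumptions, and in practice the password column would hold a hash checked with password_verify() rather than plain text.

// Bind user input as data, never as part of the SQL text.
$userId   = $_POST['user_id'] ?? '';   // assumed form field names
$password = $_POST['password'] ?? '';

$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'app_user', 'secret', [
    PDO::ATTR_EMULATE_PREPARES => false,          // use native prepared statements
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,  // surface query errors instead of failing silently
]);
$stmt = $pdo->prepare(
    'SELECT * FROM user_details WHERE user_id = :user_id AND password = :password'
);
$stmt->execute([':user_id' => $userId, ':password' => $password]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
// An input such as ' or 1=1;-- is now treated as a literal value, not as SQL.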
Ref. [12], a web application security company, provides a brief insight into the current web security issues in its Spring 2021 web vulnerability report. They conducted a worldwide survey with different sets of organizations from January 2020 to December 2020. The findings conclude that 27% have high-severity vulnerabilities, 63% have medium-severity vulnerabilities, 25% are XSS vulnerable, and 26% have WordPress vulnerabilities. Compared with the 2019 statistics, an increasing trend was found in Remote Code Execution (RCE) and vulnerable JavaScript libraries.

XSS cheat sheets are used for cross-site scripting attacks. A filter based on cheat sheets, which contain the most common attack scripts, is used. These scripts in the code will be stopped by the security framework designed here in the preprocessing stage.

The Proposed Framework for Web Security
This section describes the methods adopted in the framework deployed in PHP for overcoming the above-mentioned attacks. It uses JavaScript and PHP as its development environment. This logic can be extended to other languages too.

JS Module
This module takes care of data security between the client and the server. Even though there are mechanisms like SSL, there are possibilities that the SSL can be spoofed. This is not an attack by itself, but a man in the middle can retrieve the information we communicate with the help of spoofing tools like SSLStrip, CarbonCopy, or similar tools. Since there are a lot of open SSL options available, attacks on this are also common. So, to secure the data, the data sent from the client are already encrypted by the module introduced here. Further, to kill unclosed session data after a period, the module invokes a method to kill sessions and cookies by monitoring idle activity. The validation control helps in checking the input data and basic malicious codes. Figure 1 represents the proposed framework for secure web applications and describes the process involved in the JS and PHP modules.

PHP Module
The major issue with web applications is processing SQLi attacks. To overcome this issue, all the inputs are sanitized against predefined sets of possible knowledge bases. Further, the module uses a PDO model to take care of these attacks. PDO by itself cannot provide full security, so, as a first step, all the data that need to be processed are used under the utf8 encoding format alone, and the special characters that can break the code are blacklisted from execution. This can considerably reduce SQLi attacks.

This framework blocks all inputs from clients other than the intended data fields. Further, XSS cheat sheets [13] are implemented with the framework to block XSS attacks. The session data maintained in the system are not always static data. The framework works with session information generated from a unique random number, which serves as the session information of the user, so that session hijacking is not possible. This framework also checks for spoofing attacks and authentication on each resource request.
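The exact filter and token logic are not published in the paper; a minimal PHP sketch of the two server-side ideas described above, a cheat-sheet style pattern filter applied to incoming fields and a random per-session token, might look like the following. The pattern list is an illustrative subset of typical cheat-sheet entries, and the function, field, and key names are assumptions.

// Illustrative blacklist drawn from common XSS cheat-sheet entries (not exhaustive).
$xssPatterns = ['/<script\b/i', '/javascript:/i', '/onerror\s*=/i', '/onload\s*=/i'];

function sanitizeField(string $value, array $patterns): string {
    foreach ($patterns as $pattern) {
        if (preg_match($pattern, $value)) {
            http_response_code(400);   // reject requests that match known attack scripts
            exit('Invalid input');
        }
    }
    // Encode whatever survives before it is stored or echoed back to the browser.
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

$comment = sanitizeField($_POST['comment'] ?? '', $xssPatterns); // assumed input field

// Session information generated from a unique random number, as the framework describes.
session_start();
session_regenerate_id(true);                      // fresh id, old one invalidated
$_SESSION['token'] = bin2hex(random_bytes(32));   // random per-session token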
As per recent studies, Remote Code Execution (RCE) is one of the top attacks that happen in web applications. Through RCE, a hacker can gain access to a system to run arbitrary code. Usually, a firewall in the system is configured to block incoming connections, allowing only specific ports like 80 and 443, but the outbound connections are usually left open. Taking this as an advantage, through RCE, attackers use a reverse shell to run programs on the target machine. Once an RCE is achieved, a reverse shell, which listens on a port, will be added to the target machine. A reverse shell is a shell session established on a compromised connection that is initiated from a remote computer or server and not from the native host. Attackers with a successful connection exploit remote command execution and use a reverse shell to get access to an interactive shell session on the target machine and continue their attack. A reverse shell can also be the sole mechanism to gain remote shell access across a NAT or firewall. If a web application like PHP passes a parameter sent to the server through a GET request to the PHP include() function without any kind of validation, then the attacker can execute malicious code to get access to the system or its data. One example of such an attack uses the following:

ncat -l -p port_no

which listens on the outbound port number specified on a Linux system. Then, the attacker can communicate through this port and do remote command executions. Some of the solutions to the above-said issues are as follows.

Usually, websites communicate through HTTP, which is a purely clear-text format that can be read easily by others. To keep it safe from others, it is necessary to add SSL or TLS to the application. As an additional layer of security, HTTP Strict Transport Security (HSTS) can be added, which always forces the browser to use HTTPS.

Further, it is a good practice to use only the same source for all web content. For example, a header with header('Access-Control-Allow-Origin: *') will allow all origins to communicate with the website, which opens the possibility of a click-jacking attack, providing a way for hackers to inject a document or a script loaded into the site which can interact with a resource from another origin. To overcome this, the same-origin policy must be enforced, and wherever external sites are used, they must be added to the access control. The possible blocks where such attacks may happen are a JavaScript include with <script src="..."></script>, a CSS file added with <link rel="stylesheet" href="...">, images displayed with <img> sources, media files played through <video> and <audio> tags, and any external objects embedded with <object> or <embed>.

Another problem in web applications is the referrer policy. Once a referrer is available for a web application, there are possibilities of sending data to the referrer. For example, if our application displays an advertisement on a page run by a malicious user, and we get into the site through the malicious site, the malicious site becomes the referrer. If, at any point, we use some sensitive information on the site, there is a chance that this may be retrieved by the malicious site. This can be prevented by restricting the amount of information that flows through the referrer by setting a proper referrer policy.
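A minimal sketch of how these mitigations could be expressed in PHP is shown below; the header values mirror the policy set out in Listing 1, which follows, while the page whitelist guarding include() is an illustrative assumption rather than part of the published framework.

// Response-header hardening, in line with the policies enforced by the framework.
header('Strict-Transport-Security: max-age=31536000; includeSubDomains'); // HSTS
header('Access-Control-Allow-Origin: https://mydomain');                  // same-domain content only
header('Access-Control-Allow-Methods: POST');
header('Referrer-Policy: no-referrer');
header('X-Frame-Options: DENY');

// Guarding include() against RCE: only files from a fixed whitelist may be loaded.
$allowedPages = ['home' => 'home.php', 'profile' => 'profile.php']; // assumed page map
$page = $_GET['page'] ?? 'home';
if (!array_key_exists($page, $allowedPages)) {
    http_response_code(404);
    exit;
}
include __DIR__ . '/pages/' . $allowedPages[$page];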
The policies followed by the framework are listed in Listing 1.

Listing 1. Policy enforced under the framework.
Enable SSL/TLS for the site and ensure HTTPS access alone
Add HSTS for additional enforcement of the security layer
Allow content from the same domain alone by ensuring Access-Control-Allow-Origin: https://mydomain
Allow only the required methods on the site, POST being the most recommended: Access-Control-Allow-Methods: POST
Configure the referrer policy: Referrer-Policy: no-referrer
Configure embedded objects and iframes to access the same domain or deny access: X-Frame-Options: DENY (or) SAMEORIGIN
Configure directory browsing restrictions and proper redirects for HTTP error pages
Configure URL management
Configure data sanitization

Results and Discussion
The basic setup for web security systems was analyzed by setting up this environment on specific sites with and without SSL encryption; the data were tested with different malicious codes from different sets of users. All the policies enforced in the application were working fine, with the expected outcome.

Data from the sites were found to be encrypted, since standard AES encryption was used between the client and server. The sites were found to automatically delete the session information and cookie information after the set interval of time. SQL and XSS injections were tried on the sites, and we found that they were discarded by the system.

The Arachni web application security scanner [12] was used to check web application vulnerabilities based on the framework. Arachni is one of the best open source technologies to check almost all kinds of web application attacks. Refs. [14][15][16] have conducted studies on different web application security scanners, considering different attacks and parameters; from these studies, it was found that Arachni is one of the best tools, scanning almost all kinds of security audits and providing accurate results.

The architecture of the web servers was made with the techniques deployed in [17,18]. This architecture provides good hardware utilization with its best performance for the users, considering power optimization.

Figure 2 shows the issues by type, trust, and severity on partial implementation of the designed framework. The blue bar in Figure 2 represents the trusted connections, the light red bar represents the untrusted connections, and the orange line represents the severity of connections that get attacked. This shows that the severity level of attacks is very minimal in the platform.
Figures 3 and 4 represent the severity possibilities and the issues by type, respectively. The test machine was initially set up without SSL, resulting in a strict-transport-security header issue; further, the X-Frame-Options header was not initially enabled. The HTTP TRACE method allowed for XSS attacks. In Figures 3 and 4, the informational issues occurred because the server responded with 200 (OK) and 404 (Not Found) status codes. Even though this is not a severe issue, it gives an insight into the web application. The uninitialized X-Frame-Options header resulted in low-severity issues, and the missing SSL resulted in medium-severity issues. By further refining the framework, most of the loopholes were overcome.

Figure 5 represents the further refinement of the framework. Informational security, SSL, and header frames were added in this refinement. The results show that the risk level is very low compared to Figure 2. The red bar in Figure 5 disappeared since the risk is minimal, and the orange line indicating severity became very low. The SSL addition made the application trustworthy.

As seen in Table 1, all the tests done on the machine passed the vulnerability tests, and the framework was found to be a good framework for web application development where there are high security concerns. The system is being further studied for other attacks on the web platform. The tick marks in Table 1 show that the test passed for the respective attack parameters at different weightages of users.

Conclusions
Nowadays, securing the web is very important since all kinds of communications and transactions happen through the web platform. Even though individual solutions like [19][20][21] are available for providing security in web applications, application developers take this lightly. This work proposed a framework for secure application development with PHP where SQL injection attacks, broken authentication, broken access control, and cross site scripting were studied, and safety mechanisms were implemented. The framework will be further extended to the other possible web application attacks in the future. Also, this framework will be extended further to other scripting languages. In the future, this model will be documented for all developers, even for non-PHP developers; the scalability and performance at large scale will be measured, since a little preprocessing is required for framework implementation.

Figure 1. The web security framework.
Figure 2. Issues by type, trust, and severity on partial implementation of the framework.
Figure 3. Severities based on possible impact.
Figure 5. Issues by type, trust, and severity on further refinement.
Table 1. Security pass test for the defined attacks.
2023-12-16T17:18:41.806Z
2023-12-10T00:00:00.000
{ "year": 2023, "sha1": "9c70d7841b8afa57b18e12f972661f077c4e7fd5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-4591/59/1/1/pdf?version=1702266404", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "c9339551f635d987a48337140a0bfc14bb48202a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
81491069
pes2o/s2orc
v3-fos-license
Medical disorders in pregnancy and pregnancy outcome: a retrospective analysis

ABSTRACT
Medical disorders, including hypertensive diseases, diabetes and thyroid disorders, may exist prior to pregnancy or may manifest themselves for the first time during pregnancy (e.g. gestational diabetes, gestational hypertension). The outcome for a particular pregnancy will depend on the nature of the disease, the severity of the disease process at onset of pregnancy, gestational age at onset of disease, the time of obstetrical and medical management, and the quality of obstetric and medical management used. Management of pregnancies with pre-existing medical disorders should begin before conception. These women should be evaluated to determine the severity of the disorder and to establish or to rule out the presence of possible target organ damage. In addition, they should be counselled regarding the potential adverse effects of the disease on pregnancy outcome and the effects of pregnancy on their disease. These women should be instructed regarding the importance of an early beginning of prenatal care and compliance with frequent prenatal visits.

INTRODUCTION
All pregnant females should have regular antenatal check-ups and investigations so that various medical disorders can be detected early in time and can be managed effectively. Hypertensive disorders in pregnancy (HDP) remain a major global health issue, not only because of the associated high adverse maternal outcomes but also because they are closely accompanied by significant perinatal morbidity and mortality. 1,2 A high risk pregnancy may be identified by using a scoring system such as the system developed by Hobel et al. 3 A risk scoring system may be defined as a formalized method of recognizing, documenting and cumulating antepartum, intrapartum and neonatal risk factors in order to predict complications for the fetus and newborn. The objectives of this study were: • To study the demographic details of patients with medical disorders in pregnancy.
• To study the spectrum of different complications in pregnancy due to the medical disorders • To study the effect of comorbidities of advanced maternal age on pregnancy • To study the foetal outcome in medical disorders in terms of time of delivery, weight at birth, and need for NICU admission

METHODS
This retrospective study was conducted at the Department of Obstetrics and Gynaecology of a tertiary care hospital by reviewing all medical records of pregnant patients with medical disorders admitted for delivery from January 2016 to December 2016. All data were retrieved and entered in a preformed, structured, validated proforma regarding information on sociodemographic factors, high risk factors, the antenatal, intranatal and postnatal events during this pregnancy, and neonatal outcome. Data collected were analysed using simple statistical measures like percentage and proportion.

Inclusion criteria • All females booked with our institute and diagnosed to have a medical disorder preconceptionally or antenatally.

Exclusion criteria • Unbooked patients with medical complications • Pre-existing medical conditions leading to abortion in the first trimester.

RESULTS
High risk pregnancy is multifactorial in most cases, and many high-risk patients had more than one directly or indirectly contributing high risk factor contributing to the antenatal or perinatal morbidity. In the present study the majority of the mothers were in the age group of 20-35 years (91%), while the remaining 7% were in the age group of >35 years. 64% of the females were from an urban area and 35% were from a rural area (Table 1). 43 females were primigravida, 42 were multigravida and 14 females were grand multigravida. 65% of the females delivered after completing term and 34% delivered pre-term, out of which 5% delivered before completing 28 weeks of gestation. Among the five pregnancies which were terminated before 28 weeks of gestation, 4 had preeclampsia and 1 had uncontrolled gestational diabetes mellitus. The most common medical disorder was pregnancy induced hypertension and its complications like pre-eclampsia and eclampsia, seen in 43% of the females; this was followed by anaemia and hypothyroidism, seen in 20% of females each. Out of the 43 females suffering from PIH, 4 females had eclampsia and 7 females had severe pre-eclampsia, for which they were managed medically and by terminating the pregnancy. 12 out of 20 anaemic females required blood transfusion before delivery. The others were treated with parenteral iron post-delivery. Hypothyroidism was diagnosed during routine antenatal investigations (serum TSH in the first trimester) and was treated with thyroxine as per the recommendations. We had 8 females with twin pregnancies; out of these, 5 females developed severe preeclampsia, 2 had moderate anaemia and 1 had hypothyroidism. On analysis of the outcome of pregnancy in relation to the various medical disorders, it was seen that the maximum perinatal morbidity occurred in females suffering from hypertensive disorders (53.4%), with 17 (41.8%) IUGR and 6 (11.6%) intrauterine demise respectively. Following PIH, a higher rate of perinatal morbidity was seen in anaemic (50% IUGR) and hypothyroid (35% IUGR) females. Details of the outcome of pregnancy in relation to the medical disorders are summarised in Table 4.

DISCUSSION
Preeclampsia leads to increased perinatal morbidity and mortality due to associated IUGR and fetal hypoxia.
Any medical disorder in pregnancy presents a significant risk to foetal well-being, such as premature birth, a small-for-date infant, stillbirth or early neonatal death. Identification of patients at risk for these complicated pregnancies with poor outcomes is fundamental to antenatal care. It is seen that these conditions are multifactorial, and a female may suffer from more than one medical disorder, resulting in a high risk pregnancy and a poor outcome. In our study the majority of the mothers were in the age group of 20-35 years (91%). A study on a high risk scoring system for prediction of pregnancy outcome done previously at the authors' institute also showed that the maximum number of high risk pregnancies is seen at 19-35 years of age (95%). 4 Numerous studies have been conducted to see the perinatal outcome in patients with hypertensive disorders, hypothyroidism, anaemia and gestational diabetes mellitus respectively. Zareen N et al conducted a study in which they compared high risk pregnancies with low risk pregnancies. 5 Anaemia, 98 (60.49%), and pregnancy induced hypertension, 24 (14.8%), were identified as the major risk factors. 5 In the present study the most common medical disorder was pregnancy induced hypertension and its complications like pre-eclampsia and eclampsia, seen in 43% of the females. Hypertensive disorders in pregnancy are associated with significant perinatal morbidity and mortality, especially in the developing world. In a study conducted by Kwame Adu-Bonsaffoh et al, the major adverse perinatal outcomes determined among women with HDP included intrauterine growth restriction (6.3%), intrauterine fetal death (6.8%), preterm delivery (21.7%), low birth weight (24.7%) and birth asphyxia or neonatal respiratory distress (15.2%), among other complications. 6 There were 12 (7.40%) stillbirths and 5 (3.08%) early neonatal deaths in the high risk group, while there was 1 (0.84%) stillbirth and no neonatal death in the low risk group (p=0.004, RR=1.72). 2 There were 58 (35.80%) neonates with low birth weight in the high risk group, while there were only 4 (3.33%) in the low risk group. 5 It was observed that perinatal mortality was twice as high in the high risk group compared to the low risk group. In our study perinatal morbidity was seen in females suffering from hypertensive disorders (53.4%), with 17 (41.8%) IUGR and 6 (11.6%) intrauterine demise respectively. The other major medical disorder seen in the developing countries is anaemia. Among pregnant women, an anaemia prevalence of 58%-89.6% has been documented in the country. [7][8][9] The risk of prematurity and LBW is higher in anaemic women. Lone FW conducted a study to see the effect of anaemia on pregnancy outcome. 10 It was observed that the risk of preterm delivery and LBW in the exposed group was 4 and 1.9 times higher, respectively, among anaemic women. In the present study about 50% of the babies delivered by anaemic mothers were IUGR.

CONCLUSION
Medical disorders in pregnancy are multifactorial and present a great potential to adversely affect maternal and foetal outcomes. If the condition is detected early, it is easy to treat, with very little detrimental effect to the mother and foetus. Hence, these conditions need early detection, prompt initiation of treatment and regular follow-up. Most importantly, sufficient education of the patients regarding awareness of the danger signs, and their seeking of medical facilities in time, will improve the maternal and foetal outcome.
2019-03-18T13:57:56.982Z
2018-05-26T00:00:00.000
{ "year": 2018, "sha1": "89692e2fa3f87cf012fe976493f9bc5452c7a402", "oa_license": null, "oa_url": "https://doi.org/10.18203/2320-1770.ijrcog20182361", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2fc85b174b7b334f3bebf478550c5ce52f8aeb3f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253163915
pes2o/s2orc
v3-fos-license
Maintained activity in ankylosing spondylitis patients treated with TNFi and/or NSAID for at least 12 weeks: a cross-sectional study in Brazil Background The aim of this study was to evaluate disease activity among patients with axial spondyloarthritis (AS) treated with tumor necrosis factor inhibitors (TNFi) and/or nonsteroidal anti-inflammatory drugs (NSAIDs) for at least 12 weeks in private outpatient settings in Brazil. Methods This was a cross-sectional, real-world study conducted in 17 Brazilian private health care institutes. Patients were selected if diagnosed with AS or axial radiographic spondyloarthritis (AxSpA) and treated with NSAIDs or TNFi for at least 12 weeks within the last 26 weeks prior to enrollment. The data were collected from interviewed-based and self-administered questionnaires from patients and physicians. Disease activity was defined as active (≥ 4), low /suboptimal (≥ 2 and < 4) and inactive (< 4) by Bath AS Disease Activity Index (BASDAI) and/or very high (≥ 3.5), high (≥ 2.1 to < 3.5), low (≥ 1.3 to < 2.1), and inactive (< 1.3) by AS Disease Activity Score (ASDAS-CRP). Both patients and physicians’ perceptions of disease control were assessed using a numeric rating scale (NRS; 0—inactive to 10—very active disease). Results The cohort included 378 patients with a mean age of 46 years, and the median time since diagnosis until enrollment was 5.4 years (interquartile range 2.7–10.5). Most patients were treated with TNFi alone (74%), followed by TNFi in combination with NSAID (15%), and NSAID alone (11%). About half AS patients showed active disease and 24% of patients showed low activity/suboptimal disease control despite having been treated for at least 12 weeks. Although TNFi showed better disease control than NSAID, inactive disease was experienced by few patients. The NRS (mean [standard deviation]) score for disease perception was 4.24 (3.3) and 2.85 (2.6) for patients and physicians, respectively. Conclusion This real-world study showed that most AS patients on TNFi and/or NSAID had not achieved an adequate disease control, as almost 75% of them exhibited active disease or low activity/suboptimal disease control. There remains a need for improved disease management among patients with AS. Supplementary Information The online version contains supplementary material available at 10.1186/s42358-022-00270-3. Background Ankylosing spondylitis (AS) is a chronic inflammatory disease that primarily affects the axial joints [1]. It is estimated to affect 0.02 to 0.8% of the Latin American population [2,3]. AS is characterized by an insidious onset of inflammatory low back pain, with or without peripheral arthritis or extra-articular manifestations [4]. AS is not only associated with a significant clinical and economic burden [5] but with impaired quality of life as well. National and international guidelines for AS recommend nonsteroidal anti-inflammatory drugs (NSAIDs) as the first-line of pharmacological treatment of AS [6][7][8]. Biological disease-modifying antirheumatic drugs, such as tumor necrosis factor inhibitors (TNFi) and interleukin-17 (IL-17) inhibitor, are recommended for patients with high disease activity with AS after at least 2 NSAIDs, with current practice starting with TNFi [6,7,9]. Monitoring of disease activity, function, mobility, and radiographic progression is highly recommended to investigate whether treatments are leading to complete clinical remission or low disease activity [6,8]. 
The disease monitoring includes measuring disease activity by using composite indices for disease activity (Ankylosing Spondylitis Disease Activity Score [ASDAS] or Bath Ankylosing Spondylitis Disease Activity Index [BAS-DAI]) and laboratory tests (C-reactive protein [CRP]) and imaging, and patient-reported outcomes (PRO) capturing patient perspectives [6,7]. Over the last decade, the management of AS has changed dramatically. However, a few clinical trials also showed that not all patients could achieve complete clinical remission or adequate disease control [10][11][12]. Similarly, a few observational studies also showed high disease activity and low activity/suboptimal control following treatment in patients with AS especially in middleincome countries like Brazil [13,14]. However, limited data are available on disease activity among patients with AS in real-world settings. Therefore, this cross-sectional study, INVISIBLE-BRA-ZIL (Making the INVISIBLE visible), aimed to evaluate disease activity among AS patients treated with TNFi and/or NSAID for at least 12 weeks in Brazilian private health care institutes. Study design and participants The INVISIBLE-BRAZIL study was a multi-center, observational, cross-sectional, noninterventional study conducted among patients with AS treated with tumor necrosis factor inhibitors (TNFi) and/or NSAIDs. This study was conducted in 17 Brazilian private health care institutes from June 2019 to June 2020. Eligibility criteria included patients with a diagnosis of AS or axial radiographic spondyloarthritis (AxSpA) according to physician evaluation (modified New York criteria or ASAS classification criteria were not mandatory), aged ≥ 18 years old, treated with at least one TNFi and/or NSAID for at least 12 weeks in the last 26 weeks prior to study enrollment. Patients on interleukin-17, or those who had any severe concomitant disease that might influence rheumatic disease evaluation such as neoplasia, noncontrolled psychiatric disease were excluded. Additionally, patients who were not able to read, understand, and complete the questionnaires and/or who were participating in any other study including administration of drug or procedure were excluded. All patients (participants) and site investigator (physician) were asked to complete the PROs questionnaires. Patients were treated with standard of care according to physician's decision and the treatment was retrospectively assessed. Data source Data were collected by the physician, directly from the patients during the single study visit from interviewedbased and self-administered questionnaires, and from patients' medical records. Patients and their physicians answered the reported outcomes questionnaires to assess disease activity and their perceptions about the disease. Data unrelated to PROs were retrieved from patients' medical charts. All data were entered into an electronic case report form (eCRF), which constituted the database. Disease activity assessment Disease activity was assessed by two PROs that are commonly used and recommended by standard guidelines [6,7]: Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) [15], which was entirely self-reported, and ASDAS-CRP [16], which included an inflammatory marker in addition to the self-reported questions. All patients answered BASDAI, but ASDAS-CRP was only evaluated for those with CRP test results in the 30 days prior to the survey. 
The BASDAI Index [6,17] consists of the assessment of five AS symptoms (fatigue, back pain, peripheral joint involvement, enthesitis points, and stiffness) that are evaluated on a numeric rating scale (NRS) varying from 0 to 10 (one being no problem and 10 being the worst problem). The score is obtained by considering the 2 questions regarding stiffness as a single component (average scores of both), and then the average of the 5 partial scores is calculated. Cut-off used to classify disease activity as active was score ≥ 4, and inactive was < 4; an additional analysis was carried out to assess low activity/ suboptimal disease control, using the exploratory threshold value of score ≥ 2 and < 4 determined by the study of the disease and references [12,[18][19][20]. Furthermore, both patients and physicians' perceptions of disease control were assessed using an NRS (0-inactive to 10-very active disease): according to patients' perceptions, how active was their rheumatic disease during the last week and according to physicians' perceptions, how was the disease activity of the patient at the time of medical visit. Statistical analysis Descriptive analyses were used. Data were described as measures of central tendency (means, medians) and dispersion (standard deviation [SD], interquartile range [IQR]) for continuous variables, and absolute number and percentage for categorical variables. Any missing data was considered as missing information, and no data imputation method was performed. Chi-squared test was used to compare frequencies, and analysis of variance (ANOVA) was used to compare means of two or more independent groups when continuous variable followed normal distribution. Spearman (ρ) was used for evaluating the correlation between two continuous variables that had no normal data distribution. A Pearson correlation coefficient of 0.2 is considered small effect, 0.5 (medium) and a 0.8 or higher high correlation [22]. A P-value < 0.05 was considered statistically significant. Also, the ability of physicians or patients to predict real control of disease was analyzed graphically using a receiver operating characteristic curve. The area under the curve (AUC) was displayed for each analysis. The study sample size was based on statistical precision and allowance of an outcome to have sufficient generalizability. Our sample included 378 individuals, which was adequate to achieve a robust estimation of the population mean (95% confidence interval, level of significance 0.05). It was based on an acceptable error of 5%, a Brazilian population of 200.4 million inhabitants with an AS prevalence of 0.5% varying from 0.08 to 1.4% [23], and assuming that about 50% of AS patients in our study cohort would have had BASDAI < 4 based on literature data [12,18]. Analysis was done using SAS 9.4 (SAS Institute, Cary NC). Ethical committee approval The study was conducted in accordance with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines and with the ethical principles laid down in the Declaration of Helsinki and local ethical regulation. In addition, the study had ethics committee approval of all participating research centers. All patients provided written informed consent prior to participating in the study. 
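From the description of the BASDAI above, the composite score can be written out explicitly; this is the standard scoring rule implied by the text, with Q1 to Q6 denoting the six 0-10 NRS items and Q5, Q6 the two stiffness questions:

\[ \mathrm{BASDAI} = \frac{Q_1 + Q_2 + Q_3 + Q_4 + \tfrac{1}{2}(Q_5 + Q_6)}{5} \]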
Patients' characteristics
Overall, 386 patients were screened, of whom 378 (97.9%) were eligible and completed the study; 4 patients were not eligible (they had not been treated for at least 12 weeks) and data from 4 patients could not be evaluated during monitoring activities, so they were not included in the study cohort. Of the enrolled participants, 213 (56.3%) were male; the mean (SD) age was 46.4 (13.1) years, the majority of them were employed (58.5%) and overweight or obese (70%), and 294 (77.8%) had no smoking history. The mean age at symptom onset was around 32.6 (SD 13.5) years, and patients were diagnosed at a mean age of 39.2 (SD 13.8) years. The median time from diagnosis to enrollment was 5.4 years (IQR 2.7-10.5). In total, 277 (73.3%) patients were screened for the presence of the HLA-B27 antigen, of whom 181 (65.3%) were positive. Moreover, 36.2% (137) of the patients had undergone CRP evaluation in the 30 days prior to enrollment, and 67.9% of patients had CRP levels lower than 1 mg/dL. Similar characteristics were found across treatment groups, though the TNFi group had a higher proportion of male and employed patients and a longer time from AS diagnosis (Table 1). The most used TNFi was adalimumab (126, 45%), followed by etanercept (54, 19%) and infliximab (48, 17%). Table 1 and Supplementary Table 2 detail the treatment characteristics in terms of treatment duration and the commonly used dosage and frequency of TNFi and NSAIDs, respectively.

Table 4 shows the proportion of patients with active and inactive disease, classified according to BASDAI and/or ASDAS-CRP scores (as illustrated in Fig. 1), and the mean score per treatment type. About half of the AS patients had active disease at the time of study enrollment despite having been treated for at least 12 weeks; moreover, only 26-27% had inactive disease, while the others presented with low disease activity/suboptimal disease control. Only about one-third of these patients had inactive and well controlled disease, with those treated with TNFi alone (~34% with inactive disease) showing better disease control than those treated with TNFi in combination with NSAID (~7-9%) or NSAID alone (~5%); >73% of patients on TNFi in combination with NSAID or NSAID alone had active disease.

Perception of disease control
Patients perceived their disease to be more active than their physicians did. For both patients and physicians, the disease was moderately active (mean NRS scores ≥ 5) for patients treated with TNFi in combination with NSAID or NSAID alone. For patients under TNFi alone, both patients and physicians reported moderate-to-low disease activity, but the mean score for disease activity reported was 2.85 for physicians and 4.24 for patients (Table 3). Both patients' and physicians' perceptions were highly correlated with BASDAI and ASDAS-CRP scores (Table 4). Both patients and physicians were able to predict disease activity as active or inactive. All AUCs were higher than 0.8 for the BASDAI and ASDAS-CRP cut-offs for disease activity (Fig. 1).

Discussion
This noninterventional, observational study described the disease activity of patients with AS treated with TNFi and/or NSAID for at least 12 weeks in the past 26 weeks in Brazil. The demographics of the study sample were comparable to previous studies [14,[24][25][26], as most patients were male and typically had disease onset before the age of 40-45 years.
As expected, HLA-B27 presence among patients with AS was high in this study cohort, and these data are in line with previous studies in Latin America [2], although the HLA-B27 frequency was lower than that in studies outside Brazil, which showed a frequency of HLA-B27 as high as 91%, whereas it was 65% in the current study [24].

Both NSAID and TNFi are efficacious and prescribed for AS control, and the results are as expected from the phase III studies of the evaluated drugs. In the INVISIBLE study, despite having been treated with TNFi and/or NSAID for at least 12 weeks, half of the patients exhibited moderate to high disease activity, whereas ~23% showed low activity but still suboptimal control (not inactive disease). The majority of patients included in this study were using TNFi alone (mainly adalimumab) and, although showing better control of the disease than patients using NSAIDs alone or in combination with TNFi, more than half of the patients using TNFi alone had still not reached adequate disease control (36-38% with active disease plus 19-27% with low activity/suboptimal control). A few real-world studies have shown that disease activity scores and CRP decrease among patients with AS after initiating biological therapy [24,[27][28][29]. However, corroborating our findings, it has also been reported that 20-40% of AS patients on TNFi showed an inadequate response or became intolerant to the treatment over time [30]. Additionally, other studies showed that TNFi is efficacious in reducing disease activity but still might not lead to good or adequate disease control after a second or third anti-TNF treatment [10,31,32].

Overall, the groups treated with NSAIDs (alone or in combination with TNFi) had worse disease activity results than TNFi alone. Although NSAIDs have been proven to be efficacious in symptom reduction and in reducing inflammatory serum biomarkers, they do not always lead to adequate symptom control [33]. Moreover, a cross-sectional, single-center study (N = 152) [14] reported a significantly higher proportion of patients with AS with low disease activity and inactive disease among patients treated with TNFi than among those treated with NSAIDs. In the current study, most patients under NSAID presented inadequate disease control, worse than those using TNFi. This might suggest that many patients under NSAID might be eligible for switching therapy to biologics, as indicated in disease guidelines. Although NSAIDs are the first line of treatment in AS, the number of patients using NSAIDs was relatively low in this cohort. As this study did not require patients to be under first-line therapy, most patients might have switched from their NSAID to biologic therapy over time, which would explain the higher number of included cases under biologics. Besides, the NSAID group showed higher disease activity than the biologics group, which might reflect patients under their first line of therapy who are failing NSAID and are now eligible for initiating biologics. However, regardless of the treatment received, the proportion of AS patients with low disease activity or inactive disease in the current study is lower than in previous studies, which showed around 75% of AS patients [34] with low disease activity or inactive disease and around 50% of patients with restricted inactive disease after AS treatment [14].

This study included clinics with a high standard of care in Brazil (e.g., national guidelines are followed, detailed standard operating procedures are developed and followed, etc.), which cannot be generalized to the entire country; however, it is reasonable to believe that patients in different settings may have worse disease control. Even though the included patients were assumed to be under good care, many still presented with disease activity, drawing attention to the possibility that very complex mechanisms keep patients from achieving adequate treatment. Some possible explanations for treatment failure may be: [1] non-compliance and non-adherence by patients (studies have shown that lack of knowledge about the disease and the consequences of poor compliance could be the reason for this patient behavior); [2] sporadic rather than routine use of PROs by healthcare providers (it has been reported that time constraints, insufficient knowledge and lack of integration of PROs into the clinical system are some of the barriers to the implementation of PROs in clinical practice [35]); [3] lack of effective communication between healthcare providers and patients [36] (healthcare providers, including physicians, improving communication with patients can further improve overall management of the disease [37,38]); and [4] clinical inertia, the failure of physicians to initiate, change or intensify therapy when required, especially when there is evidence of disease activity in a chronic disease such as AS [39,40].

AS is a multidimensional inflammatory disease requiring overall management of the disease, including morbidities, complications, and disease progression. Adequate care is possible but requires a broad and multi-disciplinary effort. Healthcare providers must strive to reach the right diagnosis early, and also to treat the right patient, at the right time, with the right treatment, at the right dose. Patients should be encouraged toward self-management and self-advocacy through effective listening and empathy by healthcare providers. Physicians are essential for that, but pharmacists, physiotherapists, community or family healthcare providers, and many other professionals can help support and engage correct and adequate AS treatment, pharmacological and non-pharmacological. Suboptimal management of a chronic disease such as AS can further increase the risk of subsequent adverse health outcomes such as fatigue, pain, impaired function, and psychosocial problems; nevertheless, misdiagnosis and other factors that may lead to undesirable treatment outcomes may occur [41,42].

For a chronic pain disease such as AS, therapeutic decisions and assessment of disease activity rely on PROs in addition to physicians' clinical evaluation [43], as they are reliable and effective in reflecting changes in disease activity over time [44]. A qualitative study that assessed PROs in patients with AS indicated that PRO measures should be routinely used in outpatient settings to help improve shared decision-making discussions between patients and physicians [37]. The patient perspective is critical to making continuous improvement in the treatment of AS by encouraging appropriate treatment switching and escalation. In this study, there was a slight patient-physician discrepancy regarding the perception of AS disease activity. The patients perceived their disease to be more active than the physicians did; this is in keeping with data from a systematic review of the literature [38,44].
A plausible explanation could be patients' purely subjective perception of pain and discomfort: they tend to perceive more severe disease not only due to the disease status but also due to psychological distress and comorbidities [44].

This study has some limitations. Patients who regularly visited their physicians in the clinics were more likely to be approached and enrolled in this study. Investigator selection bias was minimized by enrolling consecutive patients who fulfilled the eligibility criteria. Patients presenting with symptoms were more likely to get a medical consultation and to be included in this study, as they were visiting the clinics. Few CRP results were available to allow ASDAS-CRP evaluation, leaving only BASDAI for disease activity determination; because BASDAI relies solely on the patient's perception, it could have inflated disease activity in this study. It is well established that psychological distress is commonly a trigger and an aggravating factor for nociplastic pain; therefore, stress or other problems in patient care caused by the COVID-19 pandemic may have influenced the results, including the perception of the disease (overall more active to the patient than to their physician) and the proportion of AS patients with inactive or low disease activity (lower in this study than in previous ones). Also, race/ethnicity data were not collected, which may limit the understanding of this population [45,46]. The treatment was retrospectively assessed to reduce physician bias with regard to treatment selection and indication; however, patient and disease characteristics are key points for the physician's choice of treatment, so comparisons among treatment groups within this study should be made with caution. The strength of this study is its real-world nature: physicians were not biased in assigning treatment, and patients' outcomes reflect what is happening in clinical practice. The INVISIBLE-BRAZIL study has highlighted the importance of seeking better disease control and improving monitoring and treatment selection. Moreover, controlling disease manifestations is important to maintain or improve patients' quality of life.

Conclusion
In this Brazilian real-world study, half of the patients with AS treated with TNFi and/or NSAID exhibited active disease or low activity/suboptimal disease control, despite being treated for at least 12 weeks. Results from this study raise the need for widespread use of disease monitoring; PROs can improve physicians' understanding of disease activity and aid treatment decision-making, which can further improve patient satisfaction and management of the disease. More studies are needed to understand the factors associated with inadequate disease control.

Study limitation
The results of inadequate disease control in the present study are probably impacted by the profile of patients included in the study, considering that it is known that individuals with pain sensitization and fibromyalgia who fulfill criteria for spondyloarthritis show better treatment outcomes than patients with symptoms without an inflammation biomarker. The treatment duration can also introduce an important bias in the assessment of disease activity, even though in this study the treatment with TNFi, NSAID or a combination of both was at least equal or superior to 6 months. Another limitation of this study is that pre-study treatment for AS was not assessed; the questionnaire collection process and timing are also potential biases.
Abbreviations: AS, ankylosing spondylitis; ASDAS-CRP, ankylosing spondylitis disease activity score with C-reactive protein; BASDAI, Bath ankylosing spondylitis disease activity index; CRP, C-reactive protein; IQR, interquartile range; NRS, numeric rating scale; NSAIDs, nonsteroidal anti-inflammatory drugs; PRO, patient-reported outcome; SD, standard deviation; TNFi, tumor necrosis factor inhibitors
2022-10-28T14:39:13.272Z
2022-10-28T00:00:00.000
{ "year": 2022, "sha1": "7dffa4b43e0ade70f0bdd397c81a3f816b80f29b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "7dffa4b43e0ade70f0bdd397c81a3f816b80f29b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
99281724
pes2o/s2orc
v3-fos-license
On The Time Resolution of the Atomic Emission Spectroelectrochemistry Method The time resolution of the atomic emission spectroelectrochemical (AESEC) flow cell has been investigated by numerical simulations. The results demonstrate that the time resolution of the AESEC electrochemical flow cell may be simulated numerically based on the consideration of electrolyte flow patterns and ion transport in the cell. The residence time distribution (RTD) closely approximates a log-normal distribution for both experiment and simulation. Time resolution may be improved by increasing the flow rate, however this also leads to marked heterogeneities in the flow field near the surface. An optimum flow rate of 3 cm3 min−1 was determined. The problem may be avoided somewhat by using a mask to cover all the surface except for a small portion near the center of the flow cell. © The Author(s) 2015. Published by ECS. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 License (CC BY, http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse of the work in any medium, provided the original work is properly cited. [DOI: 10.1149/2.0991602jes] All rights reserved. Corrosion, dissolution and passivation occur spontaneously on the surface of metals in the presence of aggressive electrolytes. In order to accurately predict the evolution of these systems it is necessary to have real-time kinetic data so that the rate laws of the different elementary reactions may be identified. Atomic emission spectroelectrochemistry (AESEC) is an analytical technique that allows monitoring the dissolution of a large number of elements, simultaneously, in real time, during the reaction of a material with an aggressive electrolyte. References [1][2][3][4][5][6][7][8][9][10][11] describe the history, the instrumentation, and typical applications of this technique. Briefly, an inductively coupled plasma atomic emission spectrometry (ICP-AES) is used to continuously monitor the concentration of dissolved elements downstream from an electrochemical flow cell. Within the time resolution of the system, the instantaneous concentration flowing out of the cell (C M (t)) and the instantaneous dissolution rate at the working electrode/electrolyte interface (v M (t)) are directly related to each other as follows: v M (t) = f e C M (t) [ 1 ] where f e is the flow rate through the cell. The use of the technique is therefore very similar to that of a rotating ring disk electrode (RRDE) in which the electrochemical detection at the ring is replaced by the ICP-AES down stream from the flow cell. The major advantage of this technique is that we may simultaneously follow any number of species dissolving from the working electrode and usually with simplified quantification and sample preparation. On the other hand, as compared to the RRDE, AESEC has significantly lower time resolution. Several other research groups have proposed similar couplings in recent years. A quasi-identical electrochemical flow cell was used by Mercier et al. to investigate the corrosion 12 and the anodization of Al 13,14 also using ICP-AES detection. Voith et al. 15 have proposed an electrochemical flow cell using atomic absorption spectroscopy. Homozava et al. [16][17][18] and Ott et al. 19 proposed a capillary flow cell coupled to inductively coupled plasma mass spectrometer (ICP-MS) as a means of following multielement corrosion and dissolution with improved spatial resolution. 
Along similar lines, the scanning droplet cell has been coupled with an ICP-MS [20][21][22][23] or with a UV-vis spectrometer 24,25 to perform time and potential resolved dissolution analysis of different materials. A numerical simulation of electrolyte flow in the scanning droplet flow cell has been attempted to optimize the geometry of the flow cell 26 or to separate the contributions of diffusion and kinetically controlled reactions. 27 The poor time resolution mentioned above may be a problem for all of these techniques, for example when one would like to quantitatively compare elemental dissolution with electrochemical current transients. The latter are essentially instantaneous on the time scale of these experiments while the concentration transients are broadened due to the hydrodynamics in the flow cell. This comparison allows the determination of the rate of oxide film formation when the cathodic current is negligible, [5][6][7][8] or the determination of the cathodic current when oxide formation is negligible. 10,11 The correlation of cathodic current and dissolution rates is also important for assessing the dissolution of oxides and/or conversion coatings induced by a cathodic current. 8,9 To this end, Ogle and Weber 2 proposed a numerical convolution method that puts the electrochemical data on the same time resolution as the spectroscopic data so that a point by point comparison could be made. The extensive diffusion layer and low convection rate result in a significant broadening of the concentration transients, C M (t), with respect to the instantaneous dissolution rate transient at the working electrode surface, leading to a convolution integral relationship between v M (t) and f e C M (t): where h(t) is the residence time distribution (RTD) 28,29 of the electrochemical flow cell (which is equivalent to the transfer function between the electrochemical and the spectroscopic measurement). 30 As the electrochemical current is essentially instantaneous on the time scale of these experiments, Eq. 2 describes the relationship between the measured dissolution rate and the electrical current when only an anodic dissolution is measured. Recently a deconvolution routine has been proposed. 11 Eq. 2 is completely general for all of the coupled techniques mentioned above. The RTD (h(t)) however is specific for each flow cell design. It may be defined experimentally as the normalized concentration transient at the outlet of the cell following a delta function of dissolution at the working electrode/electrolyte interface. The delta function of dissolution was simulated by applying a short anodic pulse to a suitable electrode material such as copper or stainless steel in H 2 SO 4 and HCl 2 and by Mercier et al. 14 for Al in H 2 SO 4 . In all cases, the experimental h(t) was closely approximated by a simplified version of the log-normal function with two adjustable parameters: In Eq. 4, τ is the time of the peak maximum and β is related to the peak width. These parameters are equivalent to the mean and standard deviation of an ordinary Gaussian distribution by simply substituting ln(t) for t. As the RTD is a key factor in the interpretation of AESEC data, it is of primary importance to understand the physical origin of this distribution. In previous publications, it was tacitly assumed that the broadening of the concentration transients was due entirely to the hydrodynamics of the flow cell. 
However, the electrochemical kinetics of anodic dissolution may also contribute to the experimental broadening as may the aspiration and nebulization system of the ICP spectrometer. An important application of AESEC is precisely to determine the contribution of electrochemical kinetics and this requires an independent knowledge of the contribution of hydrodynamics. The objective of this work is therefore to compare the experimentally determined RTD obtained with the Cu/CuCl 2 − system with that determined by numerical simulation assuming that mass transport in the flow cell is uniquely responsible for the broadening of h(t). These experiments and simulations were conducted as a function of flow rate so as to determine an optimum flow rate for an enhanced time resolution while maintaining the most uniform distribution of flow vectors at the surface of the working electrode to ensure the uniformity of diffusion controlled reactions. It will also be demonstrated that significant improvement in both factors may be obtained by using a mask to expose only a central area of the surface. Experimental Materials.-Spectropure copper (Johnson Mathhey, 99.9995%) was used as a working electrode. Prior to each experiment the surface of the sample was manually ground with 4000 grit SiC paper then rinsed with ethanol (analytical grade, VWR Prolabo) and purified water. The experiments were performed in deaerated 1 M HCl (analytical grade, VWR Prolabo) solutions in purified water (Millipore system, 18 cm 2 ). A flow of nitrogen was used to deaerate the solutions. The deaeration started 1 hour prior to each experiment and was continued until the end of the experiment. AESEC parameters.-The emission intensity for Cu was monitored at 324.754 nm. The detection limit defined as two times the standard deviation of the blank solution was 2.7± 0.2 μg L −1 under the conditions of these experiments. Fig. 1 illustrates the basic design of the electrochemical flow cell. The cell is divided into a working electrode and a counter electrode compartment, labeled WEC and CEC respectively in Fig. 1. The two compartments were separated by a permeable membrane which allowed ionic current flow but prevented bulk mixing of the electrolytes. The membrane also promoted a uniform current density on the working electrode. The surface area of the working electrode exposed to the electrolyte was determined by the geometry of the o-ring and was measured as 0.51 ± 0.01 cm 2 in the standard configuration. The volume, V, of the WE electrode compartment was determined experimentally to be 0.27 ± 0.02 cm 3 . In this work the flow rate was varied between f e = 1 to 5 ml min −1 controlled by a peristaltic pump upstream from the flow cell. The minimum renewal time of the electrolyte in the cell (= V/ f e ) varied between approximately 3 and 27 seconds. The electrolyte flow entered at the bottom of the cell and existed at the top so as to favor the removal of any gaseous species that may form during the experiment. In a second configuration, only the central part of the surface was exposed to the electrolyte by using a mask consisting of an insulating and impermeable electroplating tape (3M 470, 0.18 mm thick) that covered the periphery of the working electrode but left uncovered a circle with 1.3 ± 0.1 mm radius (or 0.053 ± 0.005 cm 2 ) of the Cu working electrode. 
The areas of the surfaces in the standard and the masked configuration were measured from observation of the reactive zone of the Cu electrode obtained by optical microscopy after the experiment. The flow rate of the electrolyte was 1.1 ± 0.1 cm 3 min −1 , 3.1 ± 0.1 cm 3 min −1 and 5.0 ± 0.1 cm 3 min −1 . The "standard" method of measuring h(t) for the AESEC flow cell was by approximating a delta function of Cu dissolution via a 1 s anodic, galvanostatic pulse of 1 mA to a pure Cu electrode and measuring the Cu concentration as a function of time downstream from the flow cell. (Note that the procedure was altered for Fig. 3, where an anodic potentiostatic pulse of 1.5 s was used. This should not change the measured distribution of h(t).) This experiment was performed under a variety of conditions including variable flow rate, with and without the use of a mask. Assuming a faradaic yield of 100%, this pulse would correspond to 10 −7 mol for Cu oxidation to CuCl 2 − . The n = 1 reaction has been well characterized in previous publications 2, 3 and was verified in this work in the next section (Fig. 3). The concentration vs. time profiles were then normalized (Eq. 3) so that the area under the transient was equal to unity. These experimental h(t) distributions were then fitted to an empirical log-normal distribution, defined by Eq. 4. The time offset due to transport through the capillaries has been removed for all data. The t = 0 for the concentration transients was defined as previously described, 2 as the data point immediately preceding the first point that rises above background. The application of the anodic pulses was performed with a Gamry Reference 600 potentiostat with Ag/AgCl saturated reference electrode and a Pt foil counter electrode. The analog potential and current signals were routed into the measuring circuit of the ICP-AES to guarantee the same time scale for the measurements of Cu current, potential and total current. The data acquisition rate of all signals from ICP-AES was 10 points per second for Fig. 3 and otherwise 1 point per second. Numerical Model For the simulation of the electrochemical process the experimental flow cell of Fig. 1 was modeled as shown in Fig. 2. The modeling approach combines numerical models for the electrolyte flow and ion transport. To describe the flow in the cell, the mass and momentum conservation of the electrolyte flow is modelled by the incompressible Navier-Stokes equations, solving for the fluid velocity u and the pressure p: where ρ is the electrolyte density and ν is the kinematic viscosity. The Reynolds number (Re) of the system defined as with L the characteristic length of the geometry. R e is less than 1 indicating a laminar flow defined by the small cell geometry (Fig. 2) and the small (less than 5 cm 3 min −1 ) rate of electrolyte flow. We assume that the concentration gradients of the dissolved species in the flow and their evolution in time, do not in any way influence the hydrodynamic flow in the cell, in other words we assume dilute solution theory. Therefore, Eqs. 5 and 6 are decoupled from the species transport equations, and are solved for a stationary state solution. The density and viscosity of the electrolyte were assumed to be constant ρ = 10 3 kg m −3 and ν = 10 −6 m 2 s −1 characteristic for water as a main electrolyte component. 
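Before turning to the species transport model, the forward relationship between an instantaneous dissolution pulse and the broadened downstream signal (Eqs. 1 and 2), with h(t) approximated by a two-parameter log-normal as in Eq. 4, can be illustrated with a short numerical sketch. This is not the paper's implementation: the normalization convention of the log-normal and the values of τ, β, and the sampling interval below are assumptions chosen only for illustration.

```python
import numpy as np

def lognormal_rtd(t, tau, beta):
    """Assumed two-parameter log-normal RTD, normalized to unit area.
    tau ~ time of the peak maximum, beta ~ width; the normalization
    convention of the paper's Eq. 4 may differ."""
    h = np.zeros_like(t)
    pos = t > 0
    h[pos] = np.exp(-(np.log(t[pos] / tau)) ** 2 / (2.0 * beta ** 2)) / (
        t[pos] * beta * np.sqrt(2.0 * np.pi)
    )
    return h

# Illustrative values only (not fitted to any experiment in the paper)
tau, beta = 10.0, 0.5        # s, dimensionless
dt = 0.1                     # s, assumed sampling interval
t = np.arange(0.0, 120.0, dt)
fe = 3.0 / 60.0              # cm^3 s^-1, i.e. 3 cm^3 min^-1

# Instantaneous dissolution rate: a 1 s pulse carrying 1e-7 mol of Cu in total
vM = np.where((t >= 1.0) & (t < 2.0), 1.0e-7, 0.0)   # mol s^-1

# Eq. 2: the measured quantity fe*CM(t) is vM(t) convolved with h(t)
h = lognormal_rtd(t, tau, beta)
feCM = np.convolve(vM, h)[: t.size] * dt             # mol s^-1
CM = feCM / fe                                        # mol cm^-3 downstream of the cell

print("injected amount :", vM.sum() * dt, "mol")
print("recovered amount:", feCM.sum() * dt, "mol")
```

The broadened transient CM(t) peaks near τ and decays with a width set by β, mimicking how a 1 s anodic pulse appears at the ICP-AES as a transient lasting tens of seconds.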
For the solution of the species concentration in the electrolyte, based on the multi-ion transport and reaction model (MITReM), 31,32 β ± τ ± τ we can state a balance equation for each species in the system: where C i is the concentration of species i. The source term on the right hand side of Eq. 8 comes from the homogeneous reactions, where v r is the rate of reaction r and s ir is the stoichiometric coefficient of species i in this reaction. The flux N i is given by convection, diffusion and migration as follows: where D i is the diffusion coefficient, z i is the charge, F is the Faraday's constant, R is the universal gas constant, T is the temperature and U is the potential. The flux perpendicular to an electrode is given by the heterogeneous reactions: The RTD experiment was simulated as a 1 s pulse of uniformly distributed species flux, injected perpendicular to the surface of the working electrode. Again, the 1 s duration is sufficiently short to be considered a Dirac delta function of dissolution. By injecting a mass pulse instead of applying a current pulse we avoid considering migration in Eq. 9. This pulse is integrated in time (time-accurate simulation) and over the surface of the electrode to obtain the total amount of the substance injected into the electrolyte. Following injection, the dissolved species undergo mass transport by diffusion when close to the electrode, and mainly by convection when further from the electrode until it is ultimately removed from the cell. For the diffusivity of the medium, D i is assigned to 10 −9 m 2 s −1 . 31,32 Two geometries of the working electrode were used: (i) when the whole surface is active as defined by the geometry of the flow cell (and the o-ring in the experimental set-up); and (ii) when only a circle of 1.3 mm radius in the center of the surface is exposed (Fig. 2). In the experimental set-up, this is performed by masking the surface with an impermeable eletroplating tape leaving uncovered the central surface. The simulated area includes the flow cell, its input and output channels. The species injection pulse occurred at 1 s -2 s ( t = 1s) with the computing time of 300 s. The simulated flow rates were 1.00, 1.25, 1.50, . . . , 5.00 cm 3 min −1 . For the experiments with the mask, the flow rate was 3 cm 3 min −1 . In all cases, the amount of substance injected into the cell, the mass pulse, was 10 −7 mole. The total quantity of Cu dissolved during the experiment was determined by integrating the ICP-AES transient prior to normalization using Eq. 1. This quantity was expressed as electrical charge using Faraday's law with the assumption of n = 1 for the anodic dissolution of Cu. Fig. 3D compares this value with the integral of the current transient. The results clearly demonstrate that Cu dissolution occurs via an n = 1 process consistent with thermodynamic calculations for this electrolyte. 3 Numerical simulation of the residence time distribution.-A schematic diagram of the electrochemical flow cell used for the numerical simulation is given in Fig. 2. This model is identical to the experimental flow cell in every respect except that the O-ring at the edge of the sample compartment is not taken into account. The working electrode (WE) is placed at the bottom of the cell with or without the mask. For the purpose of the numerical simulation, several different planes are defined in Fig. 2. The v and h1 planes cross the cell through the center of the feed capillary in the vertical and horizontal dimensions respectively. 
The h2 plane lies 0.2 mm above the WE. Experimental determination of the residence time distribution.- A series of numerical simulations of the velocity distributions in the cell was performed at various flow rates as shown in Fig. 4. The "steady state" velocity vector distributions ( f ) in the h1 and the v plane are shown to the left and right respectively. It is noted that a markedly increased flow rate is observed near the entrance to the cell which ultimately leads to the formation of a vortex at higher flow rates. The RTD (h(t)) of the flow cell was measured by the copper pulse dissolution experiment under galvanostatic conditions as described in the Experimental section, and compared with the simulated response to a pulse release from the working electrode, as shown in Fig. 5. The numerical simulation assumed the pulse to be perfectly distributed over the active surface area at t = 1 s. A full 3D video of the simulation of the injection and removal of ions in the cell for different flow rates is presented in the "supplementary materials" for this article. It is clear from Fig. 5 that the experimental results are in reasonably good agreement with the numerical simulation supporting the assumption that mass transfer in the flow cell does indeed control the residence time distribution. Note that Fig. 5 gives the residence time distribution on a log t axis to better demonstrate the symmetry of the log-normal distribution. The same data is given on a linear time axis and in an integral format in the "supplementary materials" for this article. Fig. 6 shows the simulated h(t) distribution in a wide range of flow rates in the feed capillary ( f e ). Each curve in Fig. 6a was fitted to the log-normal distribution (Eq. 4). The results are shown in Fig. 6b. It is clear that the increase of f e from f e = 1 cm 3 min −1 to f e = 3 cm 3 min −1 results in a dramatic decrease of the position of the peak maximum, τ (in seconds), and the half-width, which is inversely proportional to β (unitless). Further increase of f e up to 5 cm 3 min −1 shows only modest enhancement of the time resolution yet leads to an increased significance of the vortex at the entrance of the cell. Therefore, we conclude that f e = 3 cm 3 min −1 is an optimum value of time resolution versus flow uniformity. Optimization by use of mask.-To avoid problems associated with a possible vortex at the entrance of the cell or the stagnant zones at the edges of the working electrode, an attempt was made to limit the active surface area to a small surface in the center of the flow cell, where the flow distribution is relatively constant. The results for the residence time distribution with and without the mask are shown in Fig. 7 with the fitting parameters for each curve in Table I for f e = 3 cm 3 min −1 . The presence of the mask does not change the position of the peak maximum but reduces the half-width of the RTD thereby improving the time resolution of the experimental data. Obviously this does not completely eliminate the more stagnant areas in the flow cell, however it does create a situation where the reacting surface is not in contact with the stagnant electrolyte, which is probably of considerable benefit for the uniformity of the electrode reaction itself. It also prevents the dissolved species from diffusing into some of the stagnant areas at the entrance of the cell and along the edges, although clearly there is a stagnant zone at the outlet of the cell. 
It is of interest to observe the concentration distribution at the active surface for the different times: t 1 , t 2 and t 3 (3, 10 and 30 s), before, at, and after the maximum as indicated in Fig. 7. This is shown in Fig. 8 without the mask and in Fig. 9 with the mask. Marked heterogeneities are observed on the periphery of the flow cell without the mask indicated by the stagnation of the substance near the input of the flow cell. The problem with the stagnation of the substance in the confined zones at the edge of the cell was resolved using the mask. Discussion Considering the flow rate, f e , as a purely instrumental parameter, the sensitivity of the technique is inversely proportional to f e by Eq. 1, and in addition, the nebulization / aspiration system is more efficient at lower flow rates. However, the flow rate will also play a role in the reaction rate and mechanisms especially if certain electrode processes are diffusion limited. Therefore it is important to optimize sensitivity, time resolution and the uniformity of electrolyte flow on the surface. The increase of the flow rate, f e , increases the time resolution of the experiment by decreasing the time required to wash out dissolved species from the flow cell. However, it increases the heterogeneity in the velocity vector distribution visible by the propagation of the vortex like behavior of the flow at the input of the flow cell (Fig. 4). From the tested flow rates of 1.00, 1.25, . . . , 5 cm 3 min −1 , the optimal flow rate was found to be 3 cm 3 min −1 , balanced between the essential requirements of time resolution and homogeneity of the flow. Heterogeneities in the flow rate inside the cell may cause differences in the mass transport of species between the center and the periphery of the cell. This can result in composition gradients between the two zones of not only metal cation concentrations but other species involved in the electrochemical reactions such as pH, Cl − , O 2 , etc. This work has not dealt specifically with the optimization of the flow cell geometry which is difficult to vary due to the use of an oring to define the active surface area. Some obvious improvements are imaginable, for example by making the outlet flush with the working electrode surface. However this is not possible when using an o-ring to define the active surface. Therefore in this work, we have focused on optimizing the flow rate and limiting the surface with a mask. It would be possible to make modifications using a 3D printing technique such as proposed by Kollender et al. 26 In this work, the use of the mask improves both the homogeneity of the flow at the active surface and the time resolution, however it may significantly reduce the sensitivity due to a lower surface area of the anode and therefore a lower absolute quantity of material dissolved for a given dissolution rate. The decrease of the anode area from 0.51 cm 2 without the mask to 0.053 cm 2 would reduce the sensitivity by a factor of 9.6 and that can be critical for low dissolution rates. Therefore, the mask can be used for active systems and/or if a significant impact of flow heterogeneities on the electrochemical behavior of the studied system is suspected. Conclusions The results demonstrate that the time resolution of the AESEC electrochemical flow cell may be simulated numerically based on the consideration of the electrolyte flow patterns and ion transport in the cell. 
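As a complement to the discussion above, extracting fit parameters such as the τ and β values reported in Table I amounts to a least-squares fit of the assumed log-normal form to a normalized transient. A minimal sketch follows; the data are synthetic stand-ins, whereas in practice ht_measured would be the background-subtracted, area-normalized Cu concentration transient of Eq. 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_rtd(t, tau, beta):
    # Same assumed unit-area log-normal form as in the earlier sketch.
    return np.exp(-(np.log(t / tau)) ** 2 / (2.0 * beta ** 2)) / (
        t * beta * np.sqrt(2.0 * np.pi)
    )

# Synthetic normalized transient standing in for a measured h(t) (Eq. 3)
t = np.arange(0.5, 120.0, 1.0)              # s, avoid t = 0
rng = np.random.default_rng(0)
ht_measured = lognormal_rtd(t, 9.0, 0.45) + rng.normal(0.0, 1e-4, t.size)

popt, pcov = curve_fit(lognormal_rtd, t, ht_measured, p0=(10.0, 0.5))
tau_fit, beta_fit = popt
print(f"tau = {tau_fit:.2f} s, beta = {beta_fit:.3f}")
```

Fits of this kind make the comparison between flow rates, or between the standard and masked configurations, a matter of comparing two numbers per transient.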
The residence time distribution (h(t)) closely approximates a log-normal distribution for both the experiment and the simulation. Time resolution may be improved by increasing the flow rate; however, this also leads to marked heterogeneities in the flow field near the surface. Based on both the experiment and the simulation, the optimal flow rate was determined to be 3 cm3 min−1. The problem of flow heterogeneity may be mitigated by using a mask to cover all of the surface except for a small portion near the center of the flow cell.
2019-04-08T13:09:55.827Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "5efaee48d2cd6908a314793ce4260f0dc4e784ec", "oa_license": "CCBY", "oa_url": "http://jes.ecsdl.org/content/163/3/C37.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "c027be3dbc61fe6e6b87536e228ea4d4c5dbca9f", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
16832753
pes2o/s2orc
v3-fos-license
Protective effect of anti-oxidants on endothelial function in young Korean-Asians compared to Caucasians Summary Background Previous studies show that Asians have an impaired blood flow response (BFR) to occlusion after a single high fat (HF) meal. The mechanism is believed to be the presence and susceptibility to high free radicals in their blood. The free radical concentration after a HF meal has not been examined in Asians. Further the BFR to heat after a single HF meal in Koreans has not been measured. Material/Methods This study evaluated postprandial endothelial function by measuring the BFR to vascular occlusion and local heat before and after a HF meal and the interventional effects of anti-oxidant vitamins on improving endothelial function in young Korean-Asians (K) compared to Caucasians (C) with these assessments. Ten C and ten K participated in the study (mean age 25.3±3.6 years old). BFR to vascular occlusion and local heat and oxidative stress were assessed after a single low fat (LF) and HF meal at 2 hours compared to baseline. After administration of vitamins (1000 mg of vitamin C, 800 IU of vitamin E, and 300 mg of Coenzyme Q-10) for 14 days, the same measurements were made. Results This study showed that the skin BFR to vascular occlusion and local heat following a HF meal significantly decreased and free radicals significantly increased at 2 hours compared to baseline in K (p<.001), but not in C. When vitamins were given, the BFR to vascular occlusion and local heat before and after HF meal were not significantly different in K and C. Conclusions These findings suggest that even a single HF meal can reduce endothelial response to stress through an oxidative stress mechanism but can be blocked by antioxidants, probably through scavenging free radicals in K. Since endothelial function improved even before a HF meal in K, endothelial damage from an Americanized diet may be reduced in K by antioxidants. Background The vascular endothelium plays an integral role in vascular growth, vasoregulation, and vasoprotection. [1]. A balance between endothelial derived relaxing and contracting factors maintains vascular homeostasis [2]. Studies show that different ethnic populations have different genes that can alter endothelial function [3,4]. For example, people from Korea and other Asians have a "thrifty" gene that was developed in populations where food supply was limited due to famine, thus allowing fat to be stored easily when only limited foods are available [5,6]. Today, Koreans are eating more high fat (HF) foods than a decade ago and the mortality rate due to diabetes mellitus (DM) has doubled during the last decade, increasing from 17.2 per 100,000 persons in 1995 to 24.5 per 100,000 persons in 2005 [7]. "Thrifty" genes seem to heighten the susceptibility to endothelial dysfunction when eating HF foods or gaining body weight [3,4,6]. A standard test of endothelial function is the response to vascular occlusion for 4 minutes. A recent study in this lab showed that Koreans had a lower blood flow response (BFR) to 4 minutes of vascular occlusion and 6 minutes of local heating than Caucasians. This finding suggests that the Korean population has lower endothelial response to stress than Caucasians [8]. In another study, it was found that blood flow was reduced even further in Asians with a single HF meal, again implying reduced endothelial response to stressors due to fats in the diet [6]. 
The mechanisms of postprandial HTG-induced endothelial dysfunction have been suggested that HTG significantly stimulates leukocytes to produce free radicals increased oxidative stress [6,9]. The detrimental effect of free radicals inducing potential biological damage is termed oxidative stress [10]. High free radical concentration in the body increases cellular oxidation and can biodegrade nitric oxide (NO) and prostacyclin into inactive forms [11]. Both compounds are released by vascular endothelial cells to relax blood vessels. Free radicals bioconvert NO into peroxynitrite, thereby reducing the bioavailability of NO as a vasodilator [12]. This is believed to be one of the mechanisms associated with reduced circulation in older people and diabetics [11]. One purpose of the present investigation was to quantify the effect of a HF meal on the BFR to heat and occlusion in Koreans compared to Caucasians. Unlike other studies measuring flow mediated vasodilatation at rest and after a HF meal, we wanted to measure free radicals in the blood to see how this correlated to flow mediated vasodilatation and the BFR of the skin to heat. In addition, we wanted to assess the effects of vitamins on these vascular responses. Numerous studies have examined the benefits of vitamins in reducing free radicals in the body [11,13,14]. However, little is known about measuring the effects of a HF meal and vitamins on the level of free radicals and endothelial function. A standard method of assessing endothelial function is post ischemic reactive hyperemia by measuring the rise in blood flow after occlusion [15][16][17]. Another method of assessing endothelial function is the skin BFR to local heat mediated by the endothelial cells by the increase in skin blood flow [18,19]. High concentrations of free radicals neutralize nitric oxide and prostacyclin and should reduce blood flow response to vascular occlusion and local heat. Malondialdehyde (MDA) can be used to assess the degree of feeding-induced oxidative stress. MDA is a highly reactive three carbon chain aldehyde produced as a byproduct of the decomposition of a lipid hydroperoxide and is commonly used as an indicator of lipid peroxidation [20]. The purpose of this study was to evaluate postprandial endothelial function as related to enhanced oxidative stress in a single HF and LF meal and more importantly the interventional effects of vitamins on scavenging free radicals and improving endothelial function in Korean-Asians. These data were compared to Caucasians. This was done by measuring skin BFR to vascular occlusion and local heat and analyzing MDA levels in both groups after ingestion of a LF and HF meal before and after vitamin intervention. Subjects Healthy young subjects were recruited by word of mouth and assigned to one of two groups based on self-reported ethnicity. Subjects did not have diagnosed cardiovascular disease, hypertension (>140/90 mmHg) or diabetes, were non-smokers, were not taking any medications that would affect the cardiovascular system, and did not have any known peripheral circulatory diseases. The subjects included 10 Koreans and 10 Caucasians. They were in the age range of 20-35 years old and with a BMI of less than 40. The general characteristics of the subjects are shown in Table 1. All protocols and procedures were approved by the Institutional Review Board of Loma Linda University and all subjects signed a statement of informed consent. 30 minutes prior to measurements. 
The reported reliability from the manufacturer is ±5% from day to day. The Laser's calibration was checked just before, in the middle and at the end of the study; there was no calibration drift observed. Measurement of skin temperature Skin temperature was measured with a thermistor manufactured by BioPac Systems (BioPac Inc., Goleta, CA). The thermistor output was amplified with an SKT 100 Thermistor amplifier (BioPac Systems, Goleta, CA) and the output was then digitized at 1000 samples per second on a BioPac MP150 data collection system at 24 bits of resolution (BioPac Systems, Goleta, CA). Control of skin temperature Skin temperature was controlled with a thermode. The thermode consisted of a plastic box with a port on each end so that water could move through the box. The box was approximately 5×2.5×2.5 cm in size. Further details on the technique and the reliability and validity are published elsewhere [21,22]. Test meals Isocaloric LF and HF meals (726 kcal) were given to the subjects at the study site in the morning after an overnight fast. The nutritional composition of HF meal was 50.1 g total fat, 14 g saturated fat, 443 mg cholesterol, 22.3 g protein, 43.8 g carbohydrates. The nutritional composition of isocaloric LF meal was 5.1 g total fat, 1g saturated fat, 0 mg cholesterol, 31.3 g protein, 135.8 g carbohydrates. To assure consistency across subjects, meals were ordered from the same commercial restaurant. Measurement of Malondialdehyde Malondialdehyde (MDA) was measured using the 2-ThioBarbituric Acid Reactive Substances (TBARS) assay kit (Oxford Biomedical Research, Inc., Oxford, MI, USA). This assay is based on the reaction of a chromogenic reagent, 2-thiobarbituric acid, with MDA at 95°C. One molecule of MDA reacts with 2 molecules of 2-thiobarbituric acid via a Knoevenagel-type condensation to yield a chromophore with absorbance maximum at 532 nm. Procedures All subjects were asked to abstain from dietary supplements for 1 week prior to starting the study and avoid engaging in physical exercise 48 hours prior to the experimental meal. Subjects fasted 12 hours prior to participating in the study. The order of HF and LF breakfast meals was randomly assigned and separated with a one week wash out period. Subjects began their experiment between 8-10 am and entered a thermally neutral room (22°C) and rested comfortably for 20 minutes. In the first experiment, the test breakfast meal was given to subjects and endothelial function was assessed by two methods at baseline and after 2 hours following the meal. The first was occlusion of the circulation on upper arm for 4 minutes followed by release and measurement of skin blood flow for 2 minutes. The second was assessment of the skin blood flow response to local heat by applying a thermode of 42°C to the forearm. Each procedure was performed on different arms. Since the metabolic effect of a meal peaks at 2 hours on endothelial function, the measurements were repeated after 2 hours following the meal. A 10 mL blood sample was taken at baseline and after 2 hours following the meal for measuring oxidative stress. After a wash out period of 7 days, the same procedure was performed but the subjects were given the other test meal and data was collected at baseline and after 2 hours following the meal. A 10 mL blood sample was taken after 2 hours following the meal. 
Finally, to see if antioxidants can reduce the damage to endothelial cells from a HF meal, after a wash out period of 7 days from the last previous meal, a final series of experiments had the subjects ingest anti-oxidant supplements daily of 1000 mg of vitamin C, 800 IU of vitamin E, and 300 mg of Coenzyme Q-10 for 14 days. After 14 days following anti-oxidant supplements intake, on the eighth day, measurements were repeated in baseline and followed by ingestion of a HF meal. A 10 mL blood sample was taken at baseline and after 2 hours following a HF meal. The blood sample was not drawn more than 2 times per week and was assayed for oxidative stress. Statistical analysis Data was summarized as Means and standard deviations. Baseline characteristics of Caucasians and Koreans were compared using an independent t-test (Table 1). A mixed factorial ANOVA was conducted to compare the BFRse to 4 minutes of vascular occlusion and 6minutes of local heat before and after two different meals with or without vitamins in Koreans and Caucasians. Also, a paired t-test was conducted to compare the MDA concentration before versus after the meals with or without vitamins. The level of significance was set at p=0.05. Effect sizes were calculated to account for group variability. A post power analysis with an effect size of 1.6 according to the change in skin blood flow between Korean and Caucasian groups, an alpha level of.05, and a sample size of 20 indicated that the power was.91. results Data is presented as baseline measurements and the results after a LF and HF meal and pre and post vitamins intervention as given below. Baseline The results of skin blood flow during 4 minutes of vascular occlusion and then the first 2 minutes following occlusion at baseline are shown in Figure 1A. There was a significant difference in the skin BFR after 4 minutes between Koreans and Caucasians (p=0.016). The mean blood flow rapidly increased to a peak of 332. The results of BFR to local heat during 6 minutes at baseline are shown in Figure 1B. The skin blood flow was significantly lower in Koreans than in Caucasians after 60 seconds of heat exposure (p=0.003). The peak blood flow after heat exposure was 251±77.9 flux in Koreans and 413.7±132.1 flux in Caucasians at 240 seconds, respectively. To determine the level of lipid peroxidation and oxidative stress following meals, MDA concentration was measured using TBARS assay. When comparing the concentration of MDA at baseline, there were no significant differences between Koreans (4.6±1.9 µM) and Caucasians (4.6±1.4 µM) ( Figure 2). Low fat meal The results of skin blood flow during 4 minutes of vascular occlusion and then the first 2 minutes following occlusion at baseline and at 2 hours after the ingestion of the LF meal in both Koreans and Caucasians are shown in Figure 3A and B. When comparing blood flow at baseline and at 2 hours, the skin blood flow was not significantly different in both Koreans and Caucasians. The results of skin BFR to local heat during 6 minutes at baseline and at 2 hours after the ingestion of the LF meal in both Koreans and Caucasians are shown in Figure 3C and D. There were no significant differences in the skin blood flow in both Caucasians and Koreans. High fat meal When comparing BFR after 4 minutes of vascular occlusion at baseline and at 2 hours after the ingestion of the HF meal, the total skin blood flow significantly decreased at 2 hours compared to baseline in Koreans (p<0.001). 
The peak blood flow after occlusion was 332.4±75.8 flux at baseline and 266.2±61.7 flux at 2 hours at 260 seconds respectively ( Figure 4A). However, there were no significant differences in Caucasians after 2 hours compared to baseline ( Figure 4B). When applying 6 minutes of local heat at baseline and at 2 hours after the ingestion of the HF meal, the skin blood flow after 90 seconds of heat exposure significantly decreased at 2 hours compared to baseline in Koreans (p<0.001). The peak blood flow after heat exposure was 224.9±48.5 flux at baseline and 173.5±52.3 flux at 2 hours at 240 seconds respectively ( Figure 4C). However, there were no significant differences in Caucasians after 2 hours compared to baseline ( Figure 4D). In the MDA measurements, there was a significantly higher mean MDA concentration at 2 hours than at baseline Results after the intake of vitamins for two weeks In contrast, when comparing the BFR after 4 minutes of vascular occlusion at baseline and at 2 hours after the ingestion of the HF meal after the intake of vitamins for 14 days, there were no significant differences in both Koreans and Caucasians ( Figure 5A and B). Also, when comparing BFR to 6 minutes of local heat at baseline and at 2 hours after the ingestion of the HF meal after the intake of vitamins for 14 days, the skin blood flow was not significantly different in both Koreans and Caucasians ( Figure 5C and D). Comparison of baseline with vitamins and without vitamins When comparing the baseline BFR to 4 minutes of vascular occlusion between 14 days of vitamins and no vitamins in both ethnic groups, the baseline blood flow was significantly higher when vitamins were given compared to no vitamins in Koreans (p=0.024). The peak blood flow after occlusion was 414.7±92.9 flux with vitamins compared to 332.4±75.8 flux without vitamins at 260 seconds, respectively ( Figure 6A). However, no significant differences were observed in Caucasians ( Figure 6B). CR In regards to the baseline BFR to 6 minutes of local heat, the skin blood flow after 90 seconds of heat exposure significantly increased with vitamins compared to no vitamins in Koreans (p=0.029). The peak blood flow after heat exposure was 355.2±128.6 flux with vitamins versus 224.9±48.5 flux without vitamins at 240 seconds, respectively ( Figure 6C). However, there were no significant differences in the skin blood flow in Caucasians ( Figure 6D). Comparison of high fat meal and high fat meal with vitamins When comparing the BFR to 4 minutes of vascular occlusion at 2 hours after a HF meal with a previous intake of vitamins for 14 days to no vitamins in both ethnic groups, the skin blood flow was significantly higher when vitamins were given compared to no vitamins in Koreans (p=0.001). The peak blood flow after occlusion was 378.5±73.7 flux with vitamins and 266.2±61.7 flux without vitamins at 260 seconds, respectively ( Figure 7A). However, there were no significant differences in the skin blood flow in Caucasians ( Figure 7B). When comparing the BFR to 6 minutes of local heat at 2 hours after HF meal with a previous intake of vitamins Comparison of Koreans and Caucasians with high fat meal When comparing the BFR to 4 minutes of vascular occlusion at 2 hours after a HF meal, the skin blood flow was significantly lower in Koreans compared to Caucasians (p=0.001). The peak blood flow after occlusion was 266.2±61.6 flux in Koreans and 394.3±100.7 flux in Caucasians, respectively ( Figure 8A). 
However, there were no significant differences in the skin blood flow at 2 hours after a HF meal with a previous intake of vitamins for 14 days between Koreans and Caucasians ( Figure 8B). When comparing the BFR to 6 minutes of local heat at 2 hours after high fat meal, the skin blood flow significantly lower in Koreans compared to Caucasians (p=0.003). The peak blood flow after heat exposure was 173.5±52.3 flux in Koreans and 312.8±74.2 flux in Caucasians at 240 seconds, respectively ( Figure 8C). However, there were no significant differences in the skin blood flow at 2 hours after a HF meal with a previous intake of vitamins for 14 days between Koreans and Caucasians ( Figure 8D). For the MDA concentration, there were significant differences between Koreans and Caucasians (7.4±1.7 µM vs. 11.1±2.6 µM) at 2 hours after a HF meal (p=0.004). However, there were no significant differences between Koreans and Caucasians at 2 hours after a HF meal with a previous intake of vitamins for 14 days (Figure 2). discussion Hypertriglyceridemia (HTG) due to a HF diet is an independent risk factor of CVD and DM [23]. Furthermore, postprandial HTG induces endothelial dysfunction by enhanced oxidative stress [24]. Asians have a "thrifty" gene that heightens their susceptibility to a reduced endothelial response to stressors when eating HF foods or gaining body weight [3,4,6]. According to a recent study, even a single HF meal impaired endothelial response to stress in Asians but not in Caucasians [6]. A possible mechanism of endothelial dysfunction after the ingestion of a HF meal may be related to an increase in free radicals which affect NO production and reduce NO bioavailability [11,25]. In the present investigation, we have found that Koreans have a lower baseline BFR to vascular occlusion and local heat compared to Caucasians. We further showed that just one single HF meal decreased the BFR to vascular occlusion and local heat in Koreans, but not in Caucasians and also increased oxidative stress, as measured by MDA in both Koreans and Caucasians. In addition, pre-treatment with antioxidant vitamins (vitamin C, vitamin E, and Coenzyme Q-10) for 14 days improved baseline BFR to vascular occlusion and local heat significantly in Koreans but not in Caucasians. Also, the vitamins markedly eliminated the decrease in the BFR to vascular occlusion and local heat following the HF meal in Koreans, but did not increase vasodilation in Caucasians. There are several intriguing findings in these data. First, the fact that the vascular response to 2 stressors, heat and occlusion were increased before a HF meal simply by taking vitamins for 2 weeks (antioxidants) in Koreans but not Caucasians implies higher levels of harmful free radicals in the Korean group than the Caucasian group. This was confirmed in the MDA measurements. Since the Korean group had lower BMIs than the Caucasian group, it is not linked to obesity but must be linked to heredity of diet. The fact that endothelial function improved with antioxidants would seem to imply high free radicals due to their diet, which, as cited in the introduction, has been westernized in recent years. The higher concentrations of free radicals in the blood of the Koreans would not be surprising since, these Koreans had been in the United States for an average of 7 months and, unlike Caucasians, do not have the genetic makeup to allow for a westernized diet. 
With vitamin administration, free radicals fell and the BFR to the stressors examined here matched that of Caucasians. Thus there seems to be no permanent damage to endothelial cells that could be reversed by vitamins or perhaps diet. And yet these were young subjects. Older subjects, after years of HF meals may have permanent damage that cannot be reversed by vitamins. A second finding is that in Koreans, but not in Caucasians, a single HF meal can impair endothelial response to stressors and can be reversed by antioxidant vitamins. This finding in Koreans, but not in Caucasians, may also due to the influence of "thrifty" genes. This "thrifty" genotype is a genetic difference regulating lipid metabolism and fat storage, and differs depending on ethnicity [5,[26][27][28]. For example, one of the "thrifty" SNPs related to lipid metabolism, fatty acid-binding protein 2 (FABP2), has been associated with obesity because it enhances fat absorption. The allelic frequency of FABP2 is 55% in Asians and 27.1% in Caucasians. Thus, if Asians consume the same amount of fat, a higher body fat deposit at a lower or the same BMI will be observed in Asians [26,27]. South Korea is one of the countries where the socioeconomic environment has changed rapidly to reflect more westernization as well as its negative health consequences. For the adoption of a more westernized lifestyle, higher dietary fat consumption and less physical activity are common in Korea, and hence, these "thrifty" genes heighten the susceptibility to insulin insensitivity and CVD in relation to increased body fat and dyslipidemia in Koreans [6,27,28]. Increased body fat and dyslipidemia stimulate leukocytes to induce free radicals. The high concentration of free radicals neutralizes NO and prostacyclin, the 2 principal vasodilators, and reduces BFR to vascular occlusion and local heat [6,11]. These findings may explain why Koreans had a lower BFR to vascular occlusion and local heat and higher MDA concentration after ingestion of a single HF meal compared to Caucasians in this study. It should be pointed out that international studies among different Asian national populations in China, Korea, Philippines, Singapore, and Taiwan show increased risk of Type 2 DM and CVD at lower BMI than European populations [29]. Numerous studies have demonstrated that antioxidant vitamins play an important role in increasing endothelial function and decreasing oxidative stress [11,30,31]. To our knowledge, no study has examined the effects of antioxidant vitamins on endothelial function by ethnicity. In the present study, antioxidant vitamins of 1000 mg of vitamin C, 800 IU of vitamin E, and 300 mg of Coenzyme Q-10 were given to Korean and Caucasian subjects for 14 days. Most investigators use 400-800 IU of vitamin E, 500-1000 mg of vitamin C, and 60-300 mg of Coenzyme Q-10. This pre-treatment of vitamins restored decreased endothelial response to stressors following a HF meal in Koreans but didn't improve endothelial response to stressors in Caucasians. Moreover, vitamins restore impaired endothelial response to stressors by scavenging free radicals, but may not improve normal endothelial response to stressors [6,32]. Several studies showed that antioxidant vitamins improved endothelial function in people with DM, coronary artery disease, and smokers but not in healthy control groups [33,34]. In some studies antioxidant vitamins did not change oxidative stress status in healthy athletes [32]. 
However, in a study of young health men and women, the response to heat, unlike the response to occlusion, was increased by administration of large doses of vitamins in young, fit students at similar dosages to the present investigation and also for 2 weeks [11]. Previous studies have suggested that antioxidant vitamins C and E improve vascular defense against oxidative stress by reducing free radicals and protecting NO from inactivation, thereby exerting beneficial effects on vascular function and structure [11,31]. Especially, combined administration of vitamins C and E significantly increases endothelium-dependent vasodilation, while monotherapy with vitamin C alone is ineffective [30,31,35]. It is known that vitamins C and E have synergistic antioxidant actions, since vitamin E can have pro-oxidant properties and appropriate concentrations of vitamin C are necessary for the regeneration of vitamin E, thus increasing its antioxidant capacity [31]. These previous findings support the results from this study. Baseline BFR was significantly lower in Koreans than Caucasians by 17.54%. After stress with heat, the blood flow response was 44.54% less in Koreans and with occlusion it was 32.49% less than that seen for Caucasians. Certainly, some of the difference between the blood flow response to stress may be due to a lower baseline blood flow in Koreans. However, the difference with stress was proportionally much higher after heat and occlusion. Thus it would suggest that not only is the baseline BFR causing this difference but certainly some other factors are also involved to increase the difference with stress between Koreans and Caucasians. Perhaps the most telling evidence was the baseline blood flow after vitamin ingestion in the two groups of subjects. After taking the vitamins, there was no significant difference in baseline blood flow between the two groups. This shows that the difference in the blood flow response to stress is not only due to the difference in baseline blood flow but also due to higher oxidative level in Koreans, corrected by vitamin administration. In this study, ten young Korean subjects had been living in the United States, rather than in Korea. Although the mean period of stay in US was only 7.1±2.3 months in the Korean subjects, results may be different in Koreans who did not reside in the US. In addition, due to different diets especially HF diet and environment, Koreans who have been in the US for longer periods may have more reduced endothelial response to stressors than those with shorter residing periods. Therefore, further studies need to be conducted with Koreans for those who have been in the US for long periods and those who have not been to US. Moreover, in the present investigation, the reduced endothelial response to stressors was reversed by vitamins at baseline and even after a single HF meal in this young population. However, with longer periods of HF intake among young subjects or in the elderly, the recovery effects of vitamins on reduced endothelial response to stressors might be different. Prolonged high free radicals will probably cause permanent endothelial dysfunction. conclusions This study evaluated postprandial endothelial function by measuring the BFR to vascular occlusion and local heat before and after a HF meal and the interventional effects of anti-oxidant vitamins on improving endothelial function in young Korean-Asians compared to Caucasians. 
The skin BFR to vascular occlusion and local heat following a HF meal significantly decreased, and free radicals significantly increased, at 2 hours compared to baseline in Koreans, but not in Caucasians. When vitamins were given, the BFR to vascular occlusion and local heat before and after the HF meal was no longer significantly lower in Koreans than in Caucasians. These findings suggest that even a single HF meal can reduce the endothelial response to stress through an oxidative stress mechanism, and that this effect can be blocked by antioxidants in Koreans, probably through the scavenging of free radicals. Koreans and other Asians historically have a diet that is healthier than that of Caucasians because of its large proportion of vegetables and low fat intake. However, the present investigation shows that a westernized diet causes high oxidative levels in Koreans, impairing endothelial function. Since Asians carry "thrifty" genes, if they continue consuming a westernized diet they need to take vitamin supplements or return to their historic low-fat diets; the levels of vitamins used here cannot be achieved by diet alone.
2016-05-04T20:20:58.661Z
2012-08-01T00:00:00.000
{ "year": 2012, "sha1": "71e1b19ef4e8da148438357bd8365953d9d3c234", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc3560689?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "71e1b19ef4e8da148438357bd8365953d9d3c234", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55456468
pes2o/s2orc
v3-fos-license
An efficient method for computing the eigenfunctions of the dynamo equation We present an elegant method of determining the eigensolutions of the induction and the dynamo equation in a fluid embedded in a vacuum. The magnetic field is expanded in a complete set of functions. The new method is based on the biorthogonality of the adjoint electric current and the vector potential with an inner product defined by a volume integral over the fluid domain. The advantage of this method is that the velocity and the dynamo coefficients of the induction and the dynamo equation do not have to be differentiated and thus even numerically determined tabulated values of the coefficients produce reasonable results. We provide test calculations and compare with published results obtained by the classical treatment based on the biorthogonality of the magnetic field and its adjoint. We especially consider dynamos with mean-field coefficients determined from direct numerical simulations of the geodynamo and compare with initial value calculations and the full MHD simulations. Introduction The generation and evolution of magnetic fields in cosmic bodies like the planets and stars is generally thought to be governed by induction processes due to motions in their electrically conducting fluid interior. The magnetic field B is described by the induction equation where Here u represents the velocity and η the magnetic diffusivity. In the framework of mean-field theory (e.g. Moffatt 1978;Krause & Rädler 1980), u and B are considered as mean, e.g. ensemble averaged, quantities, whereas the action of the smallscale turbulent flow on the mean magnetic field is parametrised by the so-called dynamo coefficients, α and β. They are, in general, tensors of second and third rank, respectively. We use the following compact notation of the mean field coefficients, which include the so-called γ, δ, and κ-effects, see e.g. Rädler (1980). Then, the operator D reads instead of (2), and acts on the mean magnetic field. Except for the additional α and β terms in the D operator, the induction and the dynamo equation are formally equivalent. Thus, the new method presented here equally applies to both. The dynamo region is located in a flow domain V with exterior vacuum E. In this work we assume V to be either a sphere ⋆ Corresponding author, e-mail: schmitt@mps.mpg.de or a spherical shell. The magnetic field B is continuous through the boundary ∂V and potential in E. In kinematic dynamo theory, all coefficients (u, α, β, and η) are assumed given and independent of the magnetic field. Thus the dynamo equation is linear in the magnetic field and can be solved by considering an eigenvalue problem with eigenvalues λ describing the time evolution proportional to exp(λt) of the magnetic field B. Many studies have been made of the eigenvalues of the dynamo operator for various celestial bodies and with many forms of the dynamo coefficients (e.g. Bullard & Gellman 1954;Roberts 1960;Steenbeck & Krause 1969;Deinzer & Stix 1971;Roberts 1972;Roberts & Stix 1972;Gubbins 1973;Kumar & Roberts 1975;Schmitt & Schüssler 1989;Dudley & James 1989;Deinzer et al. 1993;Gubbins et al. 2000;Schubert & Zhang 2001;Livermore & Jackson 2004Jiang & Wang 2006. Often the coefficients are approximated by simple analytical functions of position, and their tensorial character is disregarded. Recently, the test-field method, developed by Schrinner et al. (2005Schrinner et al. ( , 2007) (see also Ossendrijver et al. 2001Ossendrijver et al. 
, 2002, allows one to determine all tensorial components of α and β directly from self-consistent numerical simulations (Brandenburg et al. 2008;Käpylä et al. 2009). These coefficients are sometimes strongly varying functions of position. This may introduce large errors because the dynamo operator ∇ × D involves differentiation of the dynamo coefficients, and these are only available as numerically determined tabulated values. In this paper we present a new method that does not require differentiation, so it is also applicable to numerically determined dynamo coefficients. The method is based on the biorthogonality of the electric current and the vector potential with an inner prod-uct defined by a volume integral over the fluid. This property has already been noted by Rädler & Bräuer (1987), Hoyng (1988), Fuchs et al. (1993), Hoyng & Schutgens (1995), and Rädler et al. (2002). The method is described in detail in Sect. 2. Extensive test calculations have been performed and compared with published results by other eigenvalue methods. Some of these tests are presented in Sect. 3. In Sect. 4 we apply the new method to eigenmodes of the dynamo operator with coefficients obtained from geodynamo models. Our conclusion are drawn in Sect. 5. Eigenvalue problem We expand the field B of the dynamo in a complete set of functions b i (r): Here and in the following we make use of the summation convention for two identical indices. The expansion functions are often eigenfunctions of some differential operator. Since this operator is, in general, not self-adjoint, the functions are not orthogonal. This problem is handled by using the adjoint setb k (r), with the following inner product The integration volume X can be either the whole space V + E or the fluid domain V alone. The base functionsb k (r) and b i (r) constitute a biorthogonal set. For a given set of functions b i , the adjoint setb k depends on the choice of the integration domain X, so in principle we have two different setsb k , one for X = V and one for X = V + E. Later we adopt the free magnetic decay modes, for which the base functions b i and their adjointsb k are known. But at this point there is no need to specify which set b i we actually use. Biorthogonal sets Starting from a setb k and b i that is biorthogonal on V + E, a very useful biorthogonal set on V is provided by the associated electric current = ∇×b and the vector potential a where ∇×a = b, with the inner product Here we have absorbed a factor of 4π/c in the definition of the current j. The relation (7) is derived with the help of the vector and a volume integration over V + E; a and b go fast enough to zero at infinity. Surface integrals vanish because the field and the vector potential are continuous through ∂V. Volume integrals containing currents are restricted to V since j = 0 in E. The inner product (7) is invariant under a gauge transformation a → a+∇ψ because V · ∇ψ d 3 r = V ∇ · (ψ) d 3 r = ∂V ψ · d 2 σ = 0, as currents and their adjoints run parallel to the boundary. Electric currents and vector potentials thus form a biorthogonal set on V. This is essential for the new eigenvalue method presented in Sect. 2.3. Classical eigenvalue method Inserting the expansion (5) in the dynamo eigenvalue equation (4) yields Subsequently, we take the inner product (6) based on V with the adjoint magnetic field. 
This leads to A partial integration to shift the curl from the second to the first term, as done in (7) above and used in the new method below, is not possible because the surface term ∂V (Db i ×b k ) · d 2 σ need not vanish here. We mention as an aside that the magnetic field is often decomposed in its poloidal and toroidal components (see Appendix A) after which the dynamo equation is formulated in terms of the defining scalars P and T . If the dynamo coefficients possess certain symmetry properties, the solutions can be split into two independent subsets, describing magnetic fields symmetric and antisymmetric with respect to the equator. New eigenvalue method We start again with (8), which we uncurl to obtain Taking now the inner product (7) with the adjoint current results in The gradient term drops out as discussed in Sect. 2.1 above. The corresponding adjoint functionsb k here are different from those in Sect. 2.2 as they pertain to a different inner product. The matrices M ki and N ki have the same eigenvalues λ. The advantage of the new method using N ki in (11) instead of M ki in (9) is that no differentiation of the operator D is required, so even numerically computed or tabulated values of u, α, and β produce accurate results. Choice of b i and numerical handling of (11) For the set of base functions, we adopt the free magnetic decay modes whose magnetic fields b i are known analytically in V + E in terms of the defining scalars P and T as described in Appendix A. The decay modes are continuous through ∂V and potential in E, so they satisfy the boundary conditions. They are characterised by three numbers, the radial order n, the latitudinal degree l, and the azimuthal order m. Another advantage of the decay modes is that they are selfadjoint on V + E so that the adjoint functions are the complex The computation of the matrix elements N ki is now straightforward. Once we know the matrix elements, the eigenvalue problem (11) is solved numerically using LAPACK routines (http://www.netlib.org/lapack), and we obtain the eigenvalues λ k and eigenvectors {e ki }, such that is eigenfunction of ∇ × D with eigenvalue λ k . Each mode k contains, in general, a mixture of n, l, and m values. In the following we consider only velocities and dynamo coefficients that are independent of azimuth ϕ, but this is not a necessary constraint. Thus each value of m can be treated separately. Although we present only results for m = 0 here, we have tested and applied other values of m as well. We employ the robust Gauss-Legendre quadrature in r and cos ϑ to compute the matrix elements since the basis functions are heavily oscillatory in r for high values of n and in θ for high degree l. For the Gauss-Legendre integration we used 66 quadrature points here in the radial and 80 in the latitudinal direction, respectively. In general, this depends of course on the required resolution. Adjoint eigenfunctions We now show how one may construct the adjoint set of eigen-functionsB p of a set of eigenfunctions B i of the dynamo operator ∇×D. Although these adjoints are not needed in the present paper, they appear in applications. For example, let B be the actual magnetic field of the dynamo, then it is often advantageous to expand B in dynamo eigenfunctions, i.e., B(r, t) = i c i (t)B i (r). To find the coefficients c i (t) we use the adjoint set, to find This illustrates that we need the adjointsB p , and these may be constructed as follows. 
Let B k = e ki b i be the representation of B k in terms of the self-adjoint magnetic decay modes as above. Therefore f † = e −1 in matrix notation, and we find a unique biorthogonal set. Here, † indicates the Hermitean adjoint, f † = (f T ) * , where T indicates the transposed and * complex conjugation. Three important messages follow from this construction: (i) the adjoint of ∇ × B is ∇ ×B, that is, the adjoint operation commutes with ∇; (ii) to obtain the adjoint eigenfunctions, it is not necessary to know the explicit form of the adjoint dynamo operator ∇×D; and (iii) the eigenfunctions and their adjoints have the same boundary conditions because they are a linear combination of the decay modes and their complex conjugates, respectively. The eigenvalues are independent of azimuth m, and the eigenfunctions decouple in latitudinal quantum number l. For R α 0, the modes couple in radial number n, as they do between the poloidal and toroidal components. For R α = 4.493409458, 1 the first mode is a stationary dipole, while the overtones decay with the rates given by Hoyng & van Geffen (1993) (HvG93). We successfully reproduced the fundamental mode and the overtones. In Table 1 we consider the convergence in the eigenvalues as a function of the maximum radial number n max for some dipolar (l = 1) modes. Higher l modes behave similarly. We also reproduced the eigenfunction plots as provided by Krause & Rädler (1980). Spherical flows As a next test, we apply the spherical stationary t 1 s 2 (MDJ) flow of Livermore & Jackson (2004) which is given by with K −1 = √ 9009/572 and ǫ = 0.5 √ 143/1008 such that the rms poloidal to toroidal energy ratio is 0.5, and the flow has an rms value of u 0 . P m l andr are defined in Appendix A. Like Livermore & Jackson (2005) we consider the axisymmetric (m = 0) and equatorially antisymmetric magnetic field solution for a unit sphere (r 0 = 1) embedded in a vacuum. Table 2 shows the convergence in the eigenvalue with the largest real part as a function of truncation n max and l max for two magnetic Reynolds numbers R m = u 0 r 0 /η = 10 and R m = 100, together with the converged values given by Livermore & Jackson (2005) (LJ05). There is a difference of about one permille between their value and ours for R m = 10. α 2 and α 2 Ω-dynamos We reproduced the critical dynamo numbers R α further for the dipolar (l = 1) mode of an isotropic α 2 -dynamo with α rr = α ϑϑ = α ϕϕ = R α sin Nπ(r − r i ) and N = 1, 2 in a spherical shell of inner and outer radius r i and r 0 with r 0 − r i = 1 and r i /r 0 = 0.35 and 0.8 surrounded by a vacuum and either an insulating or a conducting inner core, as reported in Table 2 of Schubert & Zhang (2001). With this test we treated in particular two different aspect ratios of a spherical shell (a thick and a thin one) and two different molecular diffusivities (insulating or conducting) of the inner core. Finally we applied our method to an α 2 Ω-dynamo of Jiang & Wang (2006) who employ the classical eigenvalue treatment for the poloidal and toroidal scalars P and T expanded in spherical harmonics in the angular coordinates and in Chebychev polynomials in r-direction. We set u r = u ϑ = 0, u ϕ = R Ω r 3 sin 3 ϑ, α i j = β i jk = 0, except α rr = α ϑϑ = α ϕϕ = R α sin 2π(r−r i )/(r 0 −r i ) cos ϑ with r i = 0.5, r 0 = 1, embedded in a vacuum inside and outside, and R α = α 0 r 0 /η = 10 and R Ω = u 0 r 0 /η = 1000. Some results obtained by the new and the classical methods are compiled in Table 3. Numbers in parentheses (. . . , . . .) 
are the real and imaginary parts of complex eigenvalues. The real part denotes the growth rate, the imaginary part the frequency of the mode in units of η/r 2 0 . The modes are axisymmetric (m = 0), the fundamental mode is monotonously growing and symmetric (indicated by S ) with respect to the equator, and the fourth overtone is damped, oscillatory and antisymmetric (indicated by A). Modes with higher m are more strongly damped. n max refers to the maximum radial number of the decay modes (spherical Bessel functions) for the new method and to the maximum degree of the Chebychev polynomials for the code of Jiang & Wang (2006), respectively. Since the modes have smaller length scales in latitudinal than in radial direction, higher values of l than of n are required for convergence. We find remarkably similar convergence of the eigenvalues for both methods. This also applies to modes with higher m. Of course we have also verified that the eigenfunctions obtained with the two methods are identical. Geodynamo models Having proven that the new method works correctly and efficiently, we now apply it to determine the eigensolutions of the dynamo operator with mean-field coefficients obtained from self-consistent numerical simulations of the geodynamo. For a recent review of numerical geodynamo simulations, see Christensen & Wicht (2007). Schrinner et al. (2007) developed an efficient method of calculating all tensorial mean-field coefficients α and β and compared the results of mean-field and direct numerical simulations of the geodynamo. We plan to use the eigenmodes of the dynamo equation to decompose the magnetic field of the numerical simulations and to determine the statistical properties of the mode coefficients (Hoyng 2009) to analyse the working of the geodynamo. Benchmark dynamo We examine a quasi-steady geodynamo model which has been used before as a numerical benchmark dynamo (Christensen et al. 2001, case 1). The governing parameters are Ekman number E = 10 −3 , Rayleigh number Ra = 100, Prandtl number Pr = 1, and magnetic Prandtl number Pm = 5. The convection pattern is columnar with a natural 4-fold azimuthal symmetry and is stationary except for an azimuthal drift. The intensity of the fluid motion is characterised by a magnetic Reynolds number of R m ≃ 40, defined with a characteristic flow velocity, the thickness of the convecting shell, and the molecular magnetic diffusivity. The magnetic energy density exceeds the kinetic one by a factor of 20. In Schrinner et al. (2007), the mean-field coefficients are derived from the numerical simulation. We solved the dynamo equation with these mean-field coefficients by the new method and obtained the eigenvalues and eigenfunctions. Since the coefficients are spatially variable to a considerable degree, converged solutions require high truncation levels in n and l. The eigenvalues of the first two modes are shown in Table 4. Beyond n max ≃ 20 and l max ≃ 20, the eigensolution of the first mode does not change significantly and is displayed in Fig. 1. The convergence of the second mode requires a larger l max of about 32. The results for high values of n max may be affected by the spatial variation of the mean-field coefficients and would require more than 66 radial quadrature points to compute the matrix elements. A comparison of Fig. 1 with its counterpart Fig. 10 of Schrinner et al. 
(2007) shows that the field of the antisymmetric fundamental mode resembles the field of an initial-value meanfield dynamo calculation remarkably well as it does the axisymmetric component of the direct numerical simulation. The mode here grows slightly with a rate around λ 0 ≃ 4.2η/L 2 , the field of the initial value calculation decays slightly with a rate of approximately −0.25η/L 2 , while the solution of the direct numerical simulation is stationary 2 . Here L = r 0 −r i = 1 is the thickness of the spherical shell. The difference in these rates between the eigenvalue and initial value calculation comes from the higher numerical diffusivity of the latter at the chosen resolution of 33 radial and 80 latitudinal grid points. The difference is actually small, much less than one effective decay rate, because the relevant turbulent diffusivity, described by the β coefficient with values up to 33η, is much higher than the molecular one. Besides the true physical eigenmodes, we find growing unphysical spurious eigenmodes. Their eigenvalues depend strongly on the resolution, and their eigenfunctions are highly structured. We attribute their appearance to a locally confined inappropriate parametrisation of the mean electromotive force by Notes. The symmetry with respect to the equator is marked A for antisymmetric and S for symmetric. The eigensolution marked bold is displayed in Fig. 1. Fig. 1. Magnetic field structure of the fundamental antisymmetric eigenmode for the benchmark dynamo. Compare with Fig. 10 of Schrinner et al. (2007). For each plot the grey scale is separately adjusted to its maximum modulus with white as negative and black as positive. The contour lines correspond to ±0.1, ±0.3, ±0.5, ±0.7, and ±0.9 of the maximum modulus. the mean-field coefficients α and β (Schrinner et al. 2007). The spurious modes are present neither in the initial value calculation nor in the following example of a time-dependent dynamo, because of a higher numerical and molecular diffusivity, respectively. A time-dependent dynamo in the columnar regime The next example has stronger forcing with parameters E = 10 −4 , Ra = 334, Pr = 1, and Pm = 2. The numerical simulation by Olson et al. (1999, case 2) shows a highly time-dependent, but still dominantly columnar convection characterised by a magnetic Reynolds number of R m ≃ 88. The magnetic energy exceeds the kinetic energy by a factor of three. The magnetic field has a strong axial dipole contribution. Although chaotically time-dependent, the velocity field is symmetric and the magnetic field antisymmetric with respect to the equatorial plane. The mean-field coefficients are obtained as before by the test-field method of Schrinner et al. (2007). The coefficients are now of course also highly time-dependent. A time average yields coefficients that roughly resemble those for the benchmark dynamo, although there are differences in some profiles and amplitudes. Notes. The eigensolutions marked bold are displayed in Fig. 2. For the time-averaged dynamo operator the eigenvalues of the first two antisymmetric eigenmodes for various values of n max and l max are shown in Table 5. It seems that a value of n max ≃ 16 is sufficient for convergence, while l max ≃ 32 is needed. Figure 2 shows the eigenfunctions of these modes. The eigensolutions for λ 1 = (−6.298 , 0.0) and λ 2 = (−28.712 , ±5.364), values for n max = 16 and l max = 32, are symmetric with respect to the equator. 
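To make the numerical procedure of Sect. 2.4 more concrete (Gauss-Legendre quadrature for the matrix elements, followed by a dense LAPACK eigensolve, with convergence monitored against the truncation n_max and l_max as in Tables 4 and 5), the following Python sketch outlines how the truncated matrix N_ki of Eq. (11) might be assembled and diagonalised. It is only an illustration under stated assumptions: the callables `basis`, `adjoint_current`, and `apply_D` are hypothetical placeholders for the decay modes, their adjoint currents, and the action of the operator D sampled on the quadrature grid; they are not part of the code used in the paper.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.linalg import eig  # dense, LAPACK-backed eigensolver

def assemble_and_solve(basis, adjoint_current, apply_D,
                       r_in=0.0, r_out=1.0, n_r=66, n_mu=80):
    """Sketch of N[k, i] = (j_k, D b_i)_V evaluated with a tensor-product
    Gauss-Legendre rule in r and mu = cos(theta), for axisymmetric (m = 0) modes.
    basis[i], adjoint_current[k] and apply_D are assumed to return vector fields
    sampled on the (r, mu) quadrature grid, with shape (n_r, n_mu, 3)."""
    xr, wr = leggauss(n_r)                       # nodes/weights on [-1, 1]
    r = 0.5 * (r_out - r_in) * (xr + 1.0) + r_in
    wr = 0.5 * (r_out - r_in) * wr
    mu, wmu = leggauss(n_mu)
    n = len(basis)
    N = np.zeros((n, n), dtype=complex)
    for i in range(n):
        Db = apply_D(basis[i], r, mu)            # D acting on the i-th decay mode
        for k in range(n):
            jk = adjoint_current[k](r, mu)       # adjoint current of the k-th mode
            integrand = np.sum(np.conj(jk) * Db, axis=-1) * r[:, None] ** 2
            # volume element r^2 dr dmu dphi; the phi integral gives 2*pi for m = 0
            N[k, i] = 2.0 * np.pi * np.einsum("i,j,ij->", wr, wmu, integrand)
    lam, e = eig(N)                              # eigenvalues lam_k, eigenvectors e_ki
    order = np.argsort(-lam.real)                # sort by growth rate
    return lam[order], e[:, order]
```

In practice the assembly would simply be repeated for an increasing number of included decay modes until the leading eigenvalues stop changing, which is exactly the convergence behaviour reported in the tables above.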
An initial-value, mean-field dynamo calculation with the same mean velocity and dynamo coefficients shows a slighly decaying solution with a decay rate of approximately −5.9η/L 2 which is to be compared with the eigenvalue λ 0 ≃ −3.87η/L 2 of the fundamental mode. Again, the turbulent diffusivity exceeds the molecular one by a factor of up to 23 in this case. The difference in the decay rates is therefore much less than one effective decay rate. As for the benchmark dynamo, the profile of the antisymmetric fundamental mode is again remarkably similar to the solution of the initial value calculation and to the axisymmetric component of the direct numerical simulation. A decomposition of the actual magnetic field of the simulation by Olson et al. (1999, case 2) in eigenfunctions of the timeaveraged dynamo operator, i.e., B(r, t) = i c i (t)B i (r), shows that the antisymmetric fundamental mode contributes to about 75 percent and, together with the first antisymmetric overtone (see Table 5 and Fig. 2), to about 85 percent of the total magnetic energy. The variability in time of the magnetic field of the direct numerical simulation is reflected in the variability of the expansion coefficients. More details are presented in Schrinner et al. (2009). Conclusions and outlook We presented a new method for computing the eigenvalues and eigenfunctions of the induction and the dynamo equation. The method is based on the biorthogonality of the adjoint electric current and the vector potential with an inner product defined by a volume integral over the fluid domain. The advantage of the method is that the velocity and dynamo coefficients do not have to be differentiated. The method is therefore well-suited for spatially strongly variable dynamo coefficients. We tested the new method against the classical treatment and proved that it works correctly and efficiently. We applied it to two cases with dynamo coefficients derived from direct numerical simulations of the geodynamo. The obtained dynamo eigenmodes are promising candidates for decomposing the magnetic field of the numerical simulations and for analysing the statistical properties of the mode coefficients as proposed by Hoyng (2009). with defining scalars P(r, ϑ, ϕ) and T (r, ϑ, ϕ) and unit vector in radial directionr = (1, 0, 0) in spherical coordinates (r, ϑ, ϕ). The equation of free magnetic decay with constant magnetic diffusivity η then reads where ∆ H is the horizontal Laplacian The solutions are the free magnetic decay modes P lmn = exp λ P ln t x f l (p ln x) Y m l (ϑ, ϕ) , (A.5) T lmn = exp λ T ln t xg l (t ln x) Y m l (ϑ, ϕ) (A.6) with x = r/r 0 where r 0 is the radius of the sphere. The growth rates are given by λ P ln = −ηp 2 ln /r 2 0 and λ T ln = −ηt 2 ln /r 2 0 (A.7) and are independent of the azimuthal degree m. The constants p ln and t ln are p ln = j l−1,n and t ln = j l,n (A.8) where j l,n is the n-th zero of j l . The Y m l are the spherical harmonics and normalised to unity by taking using Ferrer's definition of the Legendre functions of first kind P m l with degree l and order m. For a sphere embedded in vacuum the radial functions are given by f l (p ln x) = a ln j l (p ln x) 0 ≤ x ≤ 1 a ln j l (p ln )x −l x ≥ 1 (A.10) with the spherical Bessel functions of first kind j l . This ensures regularity in the origin of the sphere, vanishing toroidal component at its outer boundary and smooth transition of the poloidal component to a potential field in the vacuum outside. 
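As a small numerical check of the full-sphere decay rates quoted above (λ_P = -η p_ln²/r0² with p_ln = j_{l-1,n}, and λ_T = -η t_ln²/r0² with t_ln = j_{l,n}), the required zeros of the spherical Bessel functions can be located directly. The helper below is an illustrative SciPy sketch, not code from the paper; for the dipole (l = 1) the poloidal constants are the zeros of j_0, i.e. nπ, so the slowest poloidal mode decays at -π²η/r0².

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def spherical_bessel_zeros(l, n_zeros, x_max=100.0, samples=5000):
    """First n_zeros positive roots of the spherical Bessel function j_l,
    found by bracketing sign changes on a grid and refining with Brent's method."""
    x = np.linspace(1e-6, x_max, samples)
    y = spherical_jn(l, x)
    roots = []
    for a, b, ya, yb in zip(x[:-1], x[1:], y[:-1], y[1:]):
        if ya * yb < 0:
            roots.append(brentq(lambda t: spherical_jn(l, t), a, b))
            if len(roots) == n_zeros:
                break
    return np.array(roots)

# Decay rates for a full sphere of radius r0 (cf. A.7, A.8):
#   lambda_P(l, n) = -eta * p_ln^2 / r0^2,  p_ln = n-th zero of j_{l-1}
#   lambda_T(l, n) = -eta * t_ln^2 / r0^2,  t_ln = n-th zero of j_l
eta, r0, l = 1.0, 1.0, 1
p_ln = spherical_bessel_zeros(l - 1, 3)   # zeros of j_0 are n*pi
t_ln = spherical_bessel_zeros(l, 3)
print(-eta * p_ln**2 / r0**2)             # slowest dipole mode: -pi^2 * eta / r0^2
print(-eta * t_ln**2 / r0**2)
```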
For a spherical shell with inner radius r i (x i = r i /r 0 ) and outer radius r 0 (x 0 = 1) embedded in vacuum the radial functions inside the shell are given by f l (p ln x) = j l (p ln x) − y l (p ln x) j l+1 (p ln x i )/y l+1 (p ln x i ) (A.12) and g l (t ln x) = j l (t ln x) − y l (t ln x) j l (t ln x i )/y l (t ln x i ) , (A.13) and the constants in the arguments are the roots of j l+1 (p ln x i )y l−1 (p ln ) − j l−1 (p ln )y l+1 (p ln x i ) = 0 (A.14) for p ln and of j l (t ln )y l (t ln x i ) − j l (t ln x i )y l (t ln ) = 0 (A.15) for t ln . Here y l are the spherical Bessel functions of second kind. The magnetic field of the decay modes B i is obtained by inserting the spatial parts of the defining scalars P lmn and T lmn , respectively, into (A.1). Here we have comprised the three indices into one. The decay modes are self-adjoint on V + E, so that the adjoint functions are obtained simply by complex conjugation: B k = B * k and likewiseĴ k = J * k . Normalisation on V + E, i.e., (B * k , B i ) V+E = (J * k , A i ) V = δ ki , is thus straightforward. For a unit sphere the radial functions are normalised to unity by scaling the f l with a ln = 1 2r 0 l(l + 1) j 2 l−1,n j 2 l ( j l−1,n ) For a spherical shell the normalisation constants are more lengthy expressions, which we suppress here. The free magnetic decay modes form a complete and orthogonal set of functions, and they obey the boundary conditions of the magnetic field between the dynamo volume V and the exterior vacuum E. We mention for completeness that the poloidal decay modes are not self-adjoint on V, i.e., (B * k , B i ) V δ ki . If we like to work with an inner product defined on V, the adjoint functionsB k can be constructed by requiring (B k , B i ) V = δ ki , similar to the one described in Sect. 2.5.
2010-06-03T14:58:40.000Z
2009-11-17T00:00:00.000
{ "year": 2010, "sha1": "1697129aeb41c8bd1c374b9acbdc48c3991b843b", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2010/11/aa13702-09.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "1697129aeb41c8bd1c374b9acbdc48c3991b843b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
11475860
pes2o/s2orc
v3-fos-license
Compressed Sensing Recovery via Nonconvex Shrinkage Penalties The $\ell^0$ minimization of compressed sensing is often relaxed to $\ell^1$, which yields easy computation using the shrinkage mapping known as soft thresholding, and can be shown to recover the original solution under certain hypotheses. Recent work has derived a general class of shrinkages and associated nonconvex penalties that better approximate the original $\ell^0$ penalty and empirically can recover the original solution from fewer measurements. We specifically examine p-shrinkage and firm thresholding. In this work, we prove that given data and a measurement matrix from a broad class of matrices, one can choose parameters for these classes of shrinkages to guarantee exact recovery of the sparsest solution. We further prove convergence of the algorithm iterative p-shrinkage (IPS) for solving one such relaxed problem. ese classes of s rinkages to guarantee exact recovery of the sparsest solution.We further prove convergence of the algorithm iterative p-shrinkage (IPS) for solving one such relaxed problem. of this relaxation have been well-studied [3]- [11], as well as some alternative relaxations, such as the p quasinorm [5], [10]- [20].The nonconvex p quasinorm approaches present a tradeoff: closer approximation of sparsity for harder analysis and computation.Recent work has introduced generalized nonconvex penalties [21]- [27] that have thus far demonstrated strong empirical performance [21], [23], [25], [28].In this paper, we prove conditions that guarantee good performance of these generalized penalties. A. Compressed Sensing Compressed sensing seeks to represent a signal from a small number of linear measurements. We let the vector x ∈ R n represent the original signal.The linear measurements are e result of an applica ion of the short and fat measurement matrix A ∈ R m×n , with m n.One is given the measureme ts b := Ax and wants to recover x.Of course m n implies that Ax = b is an underdetermined linear system in x, so additional assumptions must be made about x.Thus one assumes that x is sparse, meaning that it has few nonzero entries.By considering the standard definition of p norms for vectors, w p p := i |w i | p ,(1) and taking the limit as p approaches 0 from above, we get the 0 penalty, w 0 , which counts the number of nonzero entries of w.One would like to find the sparsest vector w ∈ R n who ich suggests the following optimization problem: min w w 0 subject to Aw = b.(2) Unfortunately, this problem is known to be NP-hard (Non-deterministic Polynomial-time hard) in general [29, Sec.9.2.2].In other words, without making further assumptions on A s problem would be computationally intractable.For this reason, one relaxes the problem, rep acing the 0 penalty with other penalties. B. 
1 relaxation The 1 relaxed version of the compressed sensing problem is as follows: min w w 1 subject to Aw = b.(3) In contrast to the combinatorial 0 problem, this problem minimizes a convex energy subj t to linear cons raints, and can be recast as a linear program.Extensive theory has been ies of solutions to convex problems [30].Further, a subproblem related to the 1 relaxation of compressed sensing has a closed-form solution, given by an application of a shrinkage operator: Definition I.1.Soft thresholding is given by the following formula: S λ,1 (x) i = s λ,1 (|x i |) sign(x i ) = max{|x i | − λ, 0} sign(x i ).(4) The role soft thresholding plays is as the proximal mapping of th 1 norm: S λ,1 (x) = prox λ • 1 (x) := arg min w λ w 1 + 1 2 w − x 2 2 pping, such as iterative soft thresholding [31], alternating direction meth hm [36].The explicit formula for (5) makes the use of 1 regularization particularly convenient. All of this suggests why the 1 relaxation of compressed sensing is nice to solve, but does not motivate it as the right problem to solve.In particular, one is interested in conditions under which the solution to he 1 relaxation (3) of compressed sensing equals or approximately equals the solution of the original 0 compressed sensing problem (2).The papers [1], [2] developed theory for the recovery of the 0 solution by the 1 problem.In the years the followed, getting looser conditions for exact 1 recovery received continuing interest [3]- [11], [16].One type of condition for recovery of the 0 solution from the 1 problem relies on the restricted isometry constants associated with the measurement matrix A. The restricted isometry constant of order k associated with the matrix A ∈ R m×n is the smallest δ k ≥ 0 such that the following holds for all x ∈ R n with x 0 ≤ k [37]: (1 − δ k ) x 2 2 ≤ Ax 2 2 ≤ (1 + δ k ) x 2 2 .(6) Note that when δ k > 1 the lower bound becomes trivial and the upper bound can be improved by rescaling A. Thus any measurement matrix, with appropriate rescaling, can k ∈ [0, 1).One of the best current 1 recovery results states that for sufficiently large n, a sparse vector x ∈ R n with x 0 = k can be recovered by 1 minimization as long as k < m/2 and the restricted isometry constant of order 2k associated with A satisfies δ 2k ≤ 1/2 [4]. C. p relaxations (0 < p < 1) A similar relaxation of the 0 problem that achieves recovery results in broader cases is p minimization for 0 < p < 1.In contrast to the 1 norm, the p quasinorms for 0 < p < 1 are not convex.Hence much of the longer applies, making solution uniqueness and convergence results more complicated.However, the loss of convexity comes with the benefit that p is better able to approximate the original 0 than 1 can.As a result, one can show that for any given measurement matrix with restricted isometry constant δ 2k < 1, there exists some p ∈ (0, 1) that will guarantee exact recovery of signals with support smaller than k < m/2 by the p minimization problem [13].It has also been demonstrated empirically that p minimization gives better sparse recovery results than 1 minimization [38]- [40], with improved robustness [14], [18], [19]. Consider the proximal mapping of the p quasinorm (to the p th power, for simplicity), that is, prox λ • p p (x) := arg min w λ w p p + 1 2 w − x 2 2 .(7) Unfortunately, ( 7) is a discontinuous mapping [41], and there is no closed-form express on for (7) for general p. 
(The expression given in [42] is incorrect.For the special cases of p ms of the solution of a cubic or quartic equation, explicitly but cumbersomely.)This prevents several efficient algorithms from being generalized from 1 to p minimization. D. Generalized shrinkage The need for an explicit proximal mapping motivates the approach of specifying a shrinkage mapping, and minimizing an implicitly-defined penalty function whose proximal mapping is the specified shrinkage [21]- [23], 7].In this work, we exten theoretical results for recovery of sparse signals to the case of penalty functions induced by two families of shrinkages, pshrinkage and firm thresholding (see Defs.II.1, II.2 below).In Section II we describe these shrinkage mappings, and how they are the proximal mappings of nonconvex penalty functions. In Section III we prove conditions for the exact recovery of sparse signals via minimizing such nonconvex penalty functions.In Section IV we demonstrate the stability of signal recovery to noisy measurements and approximately sparse signals, and n Section V we show the algorithmic convergence of iterative p-shrinkage (IPS). II. GENERALIZED SHRINKAGE PENALTIES As described above, nonconvex penalty functions have been shown both theoretically and empirically to give better results for compressed sensing than the 1 norm.In order to make use of any of several efficient algorithms, we wish consider penalty functions with exp icit proximal mappings.In this section, we consider two such families of functions. A. p-shrinkage and firm thresholding First we consider a shrinkage mapping, a version of which first appeared in [21], that has some qualitative resemblance to the p proximal mapping, while being continuous and explicit: Definition II.1.For λ > 0, the p-shrinkage map ng S p = S λ,p for p ∈ R is defined b S p (x) i = s p (|x i |) sign(x i ), where the shrinkage function s p = s λ,p is defined by s p (t) = max{t − λ 2−p t p−1 , 0}.(8) See Fig. 1 for example plots.When p = 1, p-shrinkage and soft thresholding coincide.The smaller the value of p, the less p-shrinkage shrinks p-shrinkage tends pointwise to hard thresholding: Defi lding mapping H λ is defined by H λ (x) i =      0 if |x i | ≤ λ, x i if |x i | > λ. (9) Hard thresholding is related to the proximal mapping of the 0 penalty function: H √ 2λ ∈ prox λ • 0 ,(10) the right side of (10) being tw -valued in components satisfying x 2 i = 2λ.Hard thresholding imposes no bia le when used with ADMM [43].Another shrinkage mapping we consider is firm thresh ewise-linear approximation of hard thresholding.Firm thresholding was first introduced in [44] in connection with the WaveShrink procedure for denoising and non-parametric regression.It was not known at the time to be the proximal operator of a given penalty function. Definition II.3.For λ > 0 and µ > λ, the firm thresholding mapping S firm = S λ,µ,firm is defined by S firm (x) i = s firm (|x i |), where s firm = s λ,µ,firm is defined by s firm (t) =            0 if t ≤ λ, µ µ−λ (t − λ) if λ ≤ t ≤ µ, t if t ≥ µ.(11) Note that S λ,λ,firm H λ , and lim µ→∞ S λ,µ,firm (x) = S λ,1 (x) pointwise.Thus both p-shrinkage and firm thresholding can be seen as generalizing both soft and hard thresholding. B. Shrinkag is to have them as closedform proximal mappings.This requires that the shrinkages actually be the proximal mappings of penalty functions.The following theorem guarantees this.It is pro d in [23,Thm. 1], and strengthens the e rlier result of Antoniadis [27,Prop. 3.2]. 
Theorem II.4.Suppose s : [0, ∞) → R is continuous, satisfies x ≤ λ ⇒ s(x) = 0 for some λ ≥ 0, is strictly increasing on [λ, ∞), and s(x) ≤ x.Define S(x) i = s(|x i |) sign(x i ), for each i.Then S is the proximal mapping of a penalty function G(w) = i g(w i ) where g is even, strictly increasing and continuous on [0, ∞), differentiable on (0, ∞), and nondifferentiable at 0 iff λ > 0 (in which case ∂g(0) = [−1, 1]). If also x − s(x) is nonincreasing on [λ, ∞), then g is concave on [0, ∞) and G satisfies the triangle inequality. Both p-shrinkage and firm thresholding satisfy all hypotheses of the theorem for all parameter values.The proof of the theorem constructs g using the Legendre-Fenche sform, this often does not produce a closed-form expression for g.We consider this as an accepta roximal mapping, which is much more useful for most of today's state-ofthe-art algorithms for compressed alty function G p induced by p-shrinkage, we can compute g p (w) numerically, and example plots are in Fig. 2. In addition to the properties guaranteed by Thm.II.4, it can be shown that lim w→∞ g p (w) − w p /p − C p = 0 for p = 0 and constant C p depending only on p. This includes p < 0, in which case it follows that g p (w) is bounded above.For p = 0, we have lim w→∞ g 0 (w) − lo nalty function G firm induced by firm thresholding, g firm does have a closed form: g firm (w) =     |w| − w 2 /(2µ) if |w| ≤ µ, µ/2 if |w| ≥ µ.(12) Note that g firm (w) is independent of λ, except that µ ≥ λ is required by the definition of g firm . Although the statement of Thm.II.4 excludes hard thresholding (being discontinuous), the construction in the proof does produce a penalty function G hard .It coincides with G firm for µ = λ.The part of the conclusion of the theorem that doesn't hold is that prox λ G hard (λ) is the entire interval [0, λ], while H λ (λ) is generally defined to take on a single value fr efinition ( 9)). C. Example To motivate the consideration of p-shrinkage and firm thresholding, we consider a generalization of an example appearing in the first compressed sensing paper [ ].We seek to reconstruct the 256 × 256 Shepp-Logan phantom image from samples of its 2-D discrete Fourier transform (DFT), taken along radial lines, thereby simulating both MRI and X-ray CT dat (the latter by way of the Fourier slice theorem).See Fig. 3. Since the phantom has a sparse gradient, we seek to solve the following optimization problem: min x G(∇x) subject to Fx = b,(13) where G is one of the penalty functions being compared, ∇ is a discrete gradient using forward differences and periodic boundary conditions, F is the 2-D DFT, and b contains the sample data. We solve (13) with ADMM, where the shrinkage mapping is p-shrinkage with p ≤ 1 or firm thresholding.See [25] for details, being also a straightforward generalization of the algorithm of [34]. 
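For reference, the three shrinkage mappings compared in this example follow directly from Definitions I.1, II.1 and II.3. The NumPy transcription below is a minimal, vectorised sketch; the function names are ours, not an established API. Used inside ADMM, one of these functions simply replaces the soft-thresholding step applied elementwise to the split variable.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft thresholding (Def. I.1): the prox of lam * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def p_shrinkage(x, lam, p):
    """p-shrinkage (Def. II.1): s_p(t) = max(t - lam**(2-p) * t**(p-1), 0).
    Reduces to soft thresholding for p = 1 and tends to hard thresholding
    pointwise as p decreases."""
    t = np.abs(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = np.where(t > 0.0,
                     np.maximum(t - lam ** (2.0 - p) * t ** (p - 1.0), 0.0),
                     0.0)
    return np.sign(x) * s

def firm_threshold(x, lam, mu):
    """Firm thresholding (Def. II.3): zero below lam, identity above mu,
    linear interpolation in between (requires mu > lam)."""
    t = np.abs(x)
    s = np.where(t <= lam, 0.0,
                 np.where(t >= mu, t, mu / (mu - lam) * (t - lam)))
    return np.sign(x) * s
```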
With G = G 1 = • 1 , 18 lines are required for exact reconstruction, while using G = G −1/2 , 9 lines suffice, as shown in [21] the latter being the fewest that had been demonstrated at that time.In [22] (see also [23]), 6 lines were shown to suffice using the G induced by a shrinkage mapping that is a C ∞ approximation of hard thresholding.This is the fewest possible, since with 5 lines, there are fewer measurements than nonzero gradient pixels, so that the pha om will not even be local minimizer of the problem with G = • 0 .However, here we report that using G = G firm (with λ = 0.1 and µ = 2.5), 6 lines also suffice, and many fewer ADMM iterations are this example is an ideal case, using a very sparse image and noisefree measurements, this does demonstrate that p-shrinkage and firm thresholding induce penalty functions that can be useful for recovering sparse signals.Now we turn to a theoretical analysis of the sparse recovery performance of minimizing these penalty functions. III. EXACT RECOVERY In this section, w establish sufficient conditions for exact recovery of sparse signals from noisefree measurements by solving a minimization problem with penalty function G: min w G(w) subject to Aw = b. (14) Our objective is to determine sufficient conditions in the case where G is a penalty function induced by a shrinkage mapping; however, we will esta lish conditions for a somewhat more general class of penalty functions G.We shall assume that the measurement matrix A ∈ R m×n has the Unique Representation Property (URP), i.e., any m co umns of A are linearly independent. This implies that any vector in ker(A) has at least m+1 nonzero entries.The URP can be regarded as a generic property of matrices; for example, a matrix whose entries are independently and identically distributed samples drawn from any absolutely continuous probability distribution will have URP with probability 1. Remark III.1.The URP implies that the m rows of A are linearly independent.Thus an orthonormal basis for the span of the rows can be formulated as linear combinations of the rows of A. So if we multiply A by a product of elementary matrices, , corresponding to the necessary elementary row operations, the resulting product will have orthonormal rows.Since elementary matrices are invertible, Aw = b is equivalent to EAw = Eb.Also, since each elementary matrix is invertible, A T being full rank for | | = m implies EA T is full rank as well, and so A satisfying the URP implies EA satisfies the URP.Thus we can always transform the prob em so that the rows of A are orthonormal, i.e., AA T = I, and so without loss of generality, we a sume that the A given satisfies AA T = I. We shall also assume that G(w) = i g(w i ) with I) g(0 and g even on R; and II) g is continuous on R, and either strictly increasing and strictly concave on R, or strictly increasing and strictly concave on (0, γ] and constant on [γ, ∞) for some γ > 0. These conditions imply that g is nondecreasing and concave on [0, ∞), is everywhere nonnegative, and satisfie 2.The penalty functions G firm and G p (for −∞ < p < 1) satisfy the above conditions. Proof: It is clear from the expression (12) for g firm that G firm satisfies the condi et condition I, and that g p is differentiable on (0, ∞) with g p > 0. 
It suffices to prove that g p is on of g p , from [23].We have g p (w) = (f * p (w) − w 2 /2)/λ,(15) where f p = s p and f * p is the Legendre-Fenchel transform of f p .Since s p is con x ∈ ∂f * p (w) ⇔ w = f p (x) = s p (x).(16) Fix w > 0, and let x be such that w = s p (x).From (8), we must have x > λ, so w = x − λ 2−p x p−1 . If we define F (x, w) = x − λ 2−p x p−1 − w, we have that F (•, w) is C ∞ on (0, ∞), and ∂ k F ∂x k (x, w) = 0 for x ∈ (λ, ∞). Thus by the implicit function theorem, f * p is C ∞ on (0, ∞), hence g p is as well by (15). Returning to w = x − λ 2−p x p−1 , by ( 15), (16), and the differentiabi (λ/x) 1−p . (17) Thus g p (w) is decreasing in x on (λ, ∞), and since x is a strictly increasing function of w on (0, ∞), g p (w) < 0 on (0, ∞).Next we introduce the G Nullspace Property, a eneralization of the 1 Nullspace Property introduced in [46] for norms and implicitly in [11] for penalty functions belonging to a particular class.We denote {1, 2, . . ., n} = [n], and T c denotes the complement of T in [n]. Definition III.5.The G Nullspace Property (or G SP) of order k for the matrix A is satisfied when for all h ∈ ker(A)\{0} and T ⊂ [n] with |T | ≤ k , one has G(h T ) < G(h T c ). Proposition III.6.For a penalty function G satisfy ng the triangle inequality, the G NSP implies exact recovery. Proof: We simply observe that the proof of [11] wor s assuming only that the penalty function satisfies the triangle inequality. Definition III.7.Let the matrix A ∈ R m×n and the v ctor b ∈ R m be given.Let x be the sparsest solution to Aw = b, k = x 0 with 2k ≤ m, and T = supp(x).We say the G Restricted Nullspace Property (or G RNSP) of order k is satisfied if whenever w satisfies Aw = b and w 0 ≤ m, then for h = x − w, we have either h = 0 or G( P of order k for A. However, examining the proof of Proposition III.6 from [11] and applying Lemma III.3 shows that in fact G RNSP suffices for exact recovery.We assume 2k ≤ m to guarantee that the sparsest solution of Aw = b is unique, as URP ensures that a second solution must have more than m − k nonzero components. Proposition III.8.For penalty function G satisfying the ors with m or fewer nonzero components as in Lemma III.4.If 2k ≤ m and kg(2β) < ( + 1 − k)g(α) then x * = x. Proof: Let h = x * − x. Since x is supported on T , h T c = x * T c , and so for all t ∈ T c , |h(t)| is either zero or at least α.Also, since h ∈ ker(A), if h = 0, then h T c 0 ≥ m + 1 − k (otherwise we would have h 0 ≤ m, vio o, G(h T ) ≤ i T g(|x * i | + |x i |) ≤ kg(2β) < (m + 1 − k)g(α)(18) by assumption.Thus either h = 0 or G(h T ) < G(h T c ), so G RNSP is satisfied. Corollary III.10 (G firm exact recovery).Assume A ∈ R m×n satisfies URP and G = G firm , the penalty corresponding to firm thresholding.For given b, let x * be the global minimizer of ( 14) and x the sparsest feasible vector.Let k = x 0 .If 2k ≤ m and µ < min α m + 1 − k k 1 + 1 − k m + 1 − k , 2β ,(19) then x * = x. ], we have that either {x n } converges or its limit points form a continuum.(A continuum is a compact, connected set; here we also exclude the degenerate case of a singleton.)Since we already know that any limit point o {x n } will be a stationary point of F p , we complete the proof by showing that the stationary points of F p cannot form a continuum. Let F p , and suppose E is a continuum.Fix x ∈ E. 
For any > 0, it cannot be that N (x; ∩ E = {x}, otherwise x} would be both open and closed in E, contrary to E being connected.Thus there is a sequence of stati nary points x + v n with v n = 0, v n → 0. Since {v n / v n } is a sequence ch that {v n j / v n } does not tend to zero, though of course v n j → 0. First suppose that xj = 0.By considering a tail of v n j , we can assume that xj + v n j = 0 for all n.Then g p is differentiable at xj and xj + v n j , and since x and x + v n are fixed points, (a i ) denote the columns of A, if i = j, we have ∂ϕ/∂x i (x) = a i , a j , while ∂ϕ/∂x j (x) = λg (x j ) + a j 2 .Also, ϕ(x) = 0 and each ϕ(x + v n ) = 0.By differentiability of ϕ, we have ϕ(x + v n ) − ϕ(x) ∇ϕ(x) • v n v n → 0.(71) Since the first two terms of (71) are zero, ∇ϕ(x) • v n = o( v n ) as well.By continuity of ∇ϕ at x, it is straightforward to show that ∇ϕ(x + v n ) • v n = o( v n ) also.Now we consider second derivatives.∂ 2 ϕ/∂x i ∂x k (x) = 0, unless i = k = j, while ∂ 2 ϕ/∂x 2 j (x) = λg p (x j ).Now by the differentiability of ∇ϕ, ∇ϕ(x + v n ) − ∇ϕ(x) − ∇ 2 ϕ(x) v n = o( v n ),(72) so ∇ϕ(x + v n ) • v n − ∇ϕ(x) • v n − v n • ∇ 2 ϕ(x) v n = o( v n 2 ) (73) But from the above we have that the first two terms are o( v n 2 ), so p (x j )(v n j ) 2 ; since (v n j ) 2 / v n 2 d ontradiction. Thus we must have xj = 0.By choice of j, infinitely many v n j = 0, al p , we can then repeat the above argument using a smooth extension of g p to R. Since neither g p (0+) nor g p (0−) are zero, we will obtain the same contradiction.Therefore E cannot be a continuum, and the sequence {x n } defined by ( 59) is convergent to a stationary point of F p . VI. CONCLUSION We have shown that for given signals with reasonable sparsity assumptions and a broad class of measurement matrices, the families of penalti that is able to exactly recover the given data with the given measurement matrix.Further we have shown that these penalties behave well with respect to the addition of noise in the measurements, or only approximately sparse signals (as is often the case in practical settings).Finally, we have shown t generalized shrinkage penalties can be n advantageous alternative to standard 1 compressed sensing, or p compressed sensing. Further work could benefit from exploring in what generality these type of results hold.The theory of generalized shrinkage allows for an endles nal oof may apply to compressed sensing relaxations that arise in other ways.Generally spea results can be extended to handle nonconvex functionals may continue to be a fruitful area of research.Lastly, we make no claims that the approximations made in hese proofs give the tightest results possible, so further refinement of these results may be possible and interesting. Fig. 1 . 1 Fig. 1.Plot of several shrinkage functions, all with λ = 1.The smaller the value of p, the smaller the bias applied to large inputs.Firm thresholding removes the bias completely for large enough inputs, without the discontinuity of hard thresholding. Fig. 2 . 2 Fig. 2. Plot of penalty component function g induced by several shrinkage mappings, all with λ = 1.The smaller the value of p, the slower the growth of the p-shrinkage penalty function, being bounded above when p < 0. Both firm and hard thresholding have penalty functions that are quadratic near the origin, then constant. Fig. 3.The Shepp-Logan phantom, and the number of radial lines of Fourier samples needed to reconstruct the phantom perfectly using different p nalty functions. Lemma III. 
3 .= 1 . 31 Assume A ∈ R m×n satisfies the URP and G satisfies (I,II) above.Then the global minimizer of (14) has m or fewer nonzero entries.Proof: Consider w such that Aw = b and w 0 > m.Define the matrix M to have the columns −w i e i .The set of vectors M v with supp(v) ⊂ supp(w) span a subspace of dimension greater than m.Since dim ∞ F = {i : v i = 0 and |w i | < γ} (taking γ = +∞ if the first case of assumption II holds).First suppose T = Ø.Then by assumption II, the function t → G(w + tM v) is strictly concave on an interval [−δ, δ], with δ > 0 chosen small enough that every (w + tM v) i has the same sign as w i for + δM v)}, and w is not a global minimizer.Otherwise, we havev i = 0 ⇒ |w i | ≥ γ.Let t 0 = sup{t : ∀i min{|(w−tM v) i |, |(w+tM v) i |} ≥γ}.Then taking t 1 = t 0 + δ with δ > 0 again small enough that every (w ± t 1 M v) i has the same sign as w i , then one of |(w ± t 1 M v) i | is less than γ for at least one i, giving a co 4.Assume A ∈ R m×n satisfies the URP.Then the magnitudes of nonzero entries of vectors y satisfying Aw = b with m or fewer nonzero entries are uniformly bounded below by some positive constant α and uniformly above by some positive constant β.Proof: By the URP every m columns of A can admit no more than one solution.Thus there are no more than n m vectors w satisfying Aw = b with m or fewer nonzero entries.Thus the set of nonzero entries of these vectors is finite and bounded below and above by α, β respectively.Neither constant depends on G in any way.Note that Lemma III.3 and Lemma III.4 imply that the global minimizer of the equalityconstrained G minimization problem has nonzero entries with magnitude bounded below by α and ab e by β. the original signal with Ax − b ≤ , and let T be the support of the k-sparse approximation of x.Let α S , α, and β be as in Lemma IV.1.Define α := α − x T c ∞ − 2 and β := β + .If A satisfies the URP, AA T = I, min S α S > x T c ∞ (requiring that x be nearly k sparse), and ) Theorem IV.5 (G stability).Assume A ∈ R m×n satisfies the URP, AA T = I, G satisfies (I,II) above, andG(v) ≤ C √ n v 2 for some constant C > 0.For given b, let x be the original signal with Ax−b 2 ≤ , let T be the support of its k-sparse approximation, and suppose min S {α S } >x T c ∞ .Let x * be the global minimizer of (37), where < min S {(α S − x T c ∞ )/(2 + A −1S)} λ 2 − 2 p g p (x j + v n j ) + A T (A(x + v n ) − b) j = 0(69)andλ 2−p g p (x j ) + A T (Ax − b) j = 0.(70)Define ϕ(x) = λg p (x j ) + A T (Ax − b) j .All derivatives of ϕ exist at every x = 0. Letting April 14 2015 DRAFT ACKNOWLEDGMENTThe authors would like to acknowledge the support of the UC Lab Fees Research grant 12-LR-236660 in conducting this research.The first author also acknowledges the support of NSF grant no.DGE-1144087, and would like to thank his graduate advisor, Professor Andrea L.Bertozzi, and his other LANL mentor, Brendt Wohlberg, for their guidance.The second author also acknowledges the support of the U.S. Department of Energy through the LANL/LDRD Program.S)}.Note that this is stronger than the bound on from Lemma IV.1, and it implies 2 + x T c ∞ < α.We see this from the following inequalities:We shall use this below to guarantee α > 0.Note
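Section V analyses the convergence of iterative p-shrinkage (IPS) to a stationary point of an objective that combines the penalty G_p with the data-fidelity term ½‖Ax − b‖² (compare the stationarity conditions (69)-(70)). As a rough illustration of what such an iteration looks like in code, here is a proximal-gradient style sketch built on the `p_shrinkage` helper given in Sect. II above. It assumes AA^T = I, as adopted in Remark III.1, so that a unit step size is admissible; it is an illustrative sketch, not the exact scheme whose convergence is proved in the paper.

```python
import numpy as np

def ips_like(A, b, lam, p, n_iters=500, tol=1e-8):
    """Proximal-gradient sketch of an IPS-style iteration: a gradient step on
    0.5*||Ax - b||^2 followed by p-shrinkage (the prox of the induced penalty).
    Assumes the rows of A are orthonormal (AA^T = I)."""
    x = A.T @ b                      # a simple starting point
    for _ in range(n_iters):
        x_new = p_shrinkage(x - A.T @ (A @ x - b), lam, p)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```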
2018-05-08T18:26:45.213Z
0001-01-01T00:00:00.000
{ "year": 2015, "sha1": "49e0e948aa12f3785cbe160414d75fae70e0afa0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1504.02923", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "49e0e948aa12f3785cbe160414d75fae70e0afa0", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics", "Computer Science" ] }
202538891
pes2o/s2orc
v3-fos-license
Agent-based model for tumour-analysis using Python+Mesa The potential and possibilities offered by computational graphs have steered most of the available modeling techniques toward re-implementing, exploiting, and incorporating the complex nature of Systems Biology (SB). To model the dynamics of cellular populations, we need to study a plethora of scenarios, ranging from cell differentiation to tumor growth. Testing and verifying a model in research means running it many times with different (or in some cases identical) parameters, to see how the model behaves and whether some of its outputs change with different parameter values. In this paper, we describe the development and implementation of a new agent-based model using Python. The model can be executed in a development environment (based on Mesa, and greatly simplified for convenience) with different parameters. The result is the collection of large data sets, which allow an in-depth analysis of the tumor microenvironment by means of network analysis. INTRODUCTION At the cellular level, Systems Biology (SB) deals with complex natural phenomena that have become important for research in areas such as drug development and biotechnological production and applications. Mathematical models at the cellular level are built in repeated cycles, each taking a portion of time to arrive at a decision. The primary idea behind these iterative cycles is to develop systems by allowing software developers to redesign and translate the cellular metabolism. The objective is to bring the desired decision or result closer to discovery in each iteration; these experiments, carried out over multi-scale iterative cycles of computational modeling together with experimental validation and data analysis, produce incremental samples for high-throughput technologies. The behavior of the cell also emerges at the network level and requires extensive integrative analysis. Moreover, due to the size and complexity of intercellular biological networks, a computational model should be a considered part of the production or application. Given all of these simulation challenges, SB cannot do without integrated frameworks for analysis and data management.
Alongside the intercellular level, many researches in SB also address the issue of cellular population.In evolutionary versions of these scenarios, modeling the interactions of these type of models across large multiscale would need agent-based modeling.Each agent has a Boolean network for its own expression, much like a gene.So a proper production or application for SB must support not only a compatible simulation method but also suitable methods for model parameter estimations which represent the experimental data (D.2011;An2008).Therefore, ABMs start with rules and mechanisms to reconstruct through the mathematical or computational form and observe the pattern of data.Processing the heterogeneous behavior of individual agent within a dynamic population of agents -which cannot be controlled by an overall controller-needs a higher-level system parallelism which ABMs supports.Biological systems include random behaviors and ABMs accommodate this via generation of population agents in the agent's rules.ABMs have a level of abstraction to create new cellular states or environmental variables without changing core aspects of the simulation.To aggregate the paradoxical nature of emergent behavior which could be observed from any agent in contrast to a conceptual rules of the model, ABMs reproduce emergent behavior.Emergent behavior has a range of stochasticity similar to real world Systems Biology (Bonabeau2002).Finding software platforms for scientific agent-based models require comparing certain software design parameters such as emulating parallelism, and developing schedulers for multiple iterations, which manage ABM run.Many references reviewed and compared different agentbased modeling toolkits.However, from the perspective of biotechnological application and biotechnologists, most of them share a key weakness.It is using complex languages which are not Python.Perhaps reimplementing ABMs in Python would be a wiser technical strategy since it is becoming the language of scientific computing, facilitating the web servers for direct visualization of every model step, debugging and developing an intuition of the dynamics that emerge from the model, also allowing users to create agent-based models using built-in core components such as agent schedulers and spatial grids (Villadsen and Jensen 2013). BACKGROUND Spontaneous tumor, which progresses from the initial lesion to highly metastatic forms are generally profiled by molecular parameters such as prognosis response, morphology and pathohistological characteristics.Tumors can induce angiogenesis and lymphangiogenesis, which plays an important role in promoting cancer spread.Previous studies have shown that the cancer stem cell (CSC) theory could become a hypothesis for tumor development and progression.These CSCs have the capability of both self-renewal and differentiation into diverse cancer cells, so one small subset of cancer cells has characteristics of stem cells as their parents.Hereditary characteristics play a certain role in malignant proliferation, invasion, metastasis, and tumor recurrence.In recent researches the possible relationship between cancer stem cells, angiogenesis, lymph-angiogenesis, and tumor metastasis is becoming a challenge.Due to many evidences and reviews such as (Li 2014;Weis and Cheresh 2011;Carmeliet, P., & Jain, R. K. 
2000), metastasis is defined as the spread of cancer cells from the site of an original malignant primary tumor to one or more other places in the body.More than 90% of cancer sufferings and death is associated with metastatic spread.In 1971, Folkman proposed that tumor growth and metastasis are angiogenesis-dependent, and hence, blocking angiogenesis could be a strategy to intercept tumor growth.His hypothesis later confirmed by genetic approaches.Angiogenesis occurs by migration and proliferation of endothelial cells from original blood vessels (Weis and Cheresh 2011).Accordingly, translational cancer research has contributed to the understanding of the molecular and cellular mechanisms occurring in the tumor and in its microenvironment which causes metastasis and this could present a model relatively similar to physiology of human or at least has the capability of going through genetic manipulations that bring them closer to humans.Hence tumor modeling with a high spatiotemporal resolution combined with parametric opportunities has been rapidly applied in technology (Granger and Senchenkova 2010). Agent-based Models in Systems Biology A primary tumor model addressing the avascular growth state depends on differential equations.They are classified as "lumped models" to predict the temporal evolution of overall tumor size.Since lumped models just provide a quantitative prediction of tumor size over time with only a few parameters and very low computational results, they would not be enough for an explicit investigation of many other events such as spatiotemporal dynamics of oxygen and nutrients or cell to cell interactions; also, stromal cells which play a major role in cancer growth and progression in the interaction with tumor cells.The result is disregarding the mutations in the tumor microenvironment and metastasis.These shortcomings lead us to In Silico models of tumor microenvironment.In Silico (Edelman, Eddy & Price 2010) refers to computational models of biology and it has many applications.It is an expression performed on a computer simulation.In Silico models are divided to three main categories (Thorne, Bailey, & Peirce 2007;Soleimani et al., 2018) Complex Networks in Biological Models Another interesting approach is the network models.Networks follow patterns and rules and have a specific topology, which allows scientists to go through with a deeper investigation towards biology information extraction. Within the fields of biology and medicine, Proteinprotein interaction (PPI) networks, biochemical networks, transcriptional regulation networks, signal transduction or metabolic networks are the highlighted network categories in systems biology which could detect early diagnosis. All these networks need compatible data to be produced experimentally or retrieved from various databases for each type of network; but besides analyzing data structures for computational analysis, several topological models have been built to describe the global structure of a network (Girvan, Newman 2002;2004). 
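As an illustration of the kind of topological analysis referred to here, the short NetworkX sketch below applies the Girvan-Newman community-detection algorithm cited above and reports two common global descriptors. The graph is a standard toy network standing in for a cell-cell or protein-protein interaction network; it is not data from this study.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Toy interaction network used only for illustration.
G = nx.karate_club_graph()

# Girvan-Newman repeatedly removes the edge with highest betweenness;
# take the first split into communities.
communities = next(girvan_newman(G))
print([sorted(c) for c in communities])

# Simple topological descriptors often used to characterise biological networks.
print(nx.degree_histogram(G))      # degree distribution
print(nx.average_clustering(G))    # global clustering coefficient
```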
MODELING AND SIMULATION

Our approach to generating an ABM considered different temporal and spatial scales, focusing first on mitosis as the main axis of tumor growth. This model and its analysis were developed in NetLogo (Section 3.1). A large number of environment limitations were then detected with a high number of agents, together with difficulties in analyzing the microenvironment of emergent growth. The second approximation presented in this paper (Section 3.2) uses dynamic networks based on a large number of interactive agents. This model makes it possible to help researchers carry out more detailed research on intercellular network interactions and metastasis in a multiple-scale model (Grimm 2005).

Netlogo Model and Experimental Results

Our NetLogo ABM is designed as a self-organized model that illustrates the growth of a tumor and how it resists chemical treatment. This NetLogo model (based on Wilensky's tumor model (Wilensky 1999)) permits us to change the parameters that affect tumor progression, immune system response, and vascularization. Outputs include the number of living tumor cells and the strength of the immune system that controls the cells. In this model the tumor has two kinds of cells: stem cells and transitory cells. Tumor cells are allowed to breed, move, or die. The simulation represented the control of the cells under different, constant immune responses by killing transitory cells and moving stem cells and original cells.

Figure 1 shows the steady state of a tumor metastasis visualization with 6 stem cells and grow-factor = 1.75, replication-factor = high, and apoptosis = low. As can be seen, the growth of metastasis is more aggressive, and by reducing apoptosis a greater number of cells do not die, amounting to nearly 200,000 cells (agents) (Tashakor, Luque & Suppi, 2017).

The main problem with this implementation was the limitations of the execution environment (Java memory limitations) and the loss of performance with a high number of metastasis cells. In addition, this model did not allow capturing in detail the interactions between the different parameters at the microenvironment level.

Figure 1: Stem cell evolution and metastasis visualization with grow-factor = 1.75, apoptosis = low and replication-factor = high (near 200,000 cells in steady state).

Python Model and Experimental Results

To solve the aforementioned problems, we started a new development using Python + Mesa. Mesa is an agent-based modeling project in Python that started recently and has rapidly found its place among researchers. Mesa is modular, and its modeling, analysis, and visualization components are integrated and simple to use. This feature convinced us to re-implement our tumor model with the Mesa network structure. Furthermore, Mesa supports multi-agent and multi-scale simulations, which is suitable for creating dynamic agent-based models. This paper documents our work with Python + Mesa to design an agent model based on the tumor model scenario. At the beginning, Mesa was just a library for agent-based modeling in Python, but it is now an open-source, Apache 2.0 licensed package meant to fill the gap in modeling complex dynamical systems. In Mesa, the Model class handles the environment of an agent-based model and defines the space where the agents evolve. The environment defines a scheduler which manages the agents at every step.
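As a minimal sketch of this structure (not the authors' code), the following uses the classic Mesa API, i.e., Mesa versions before 3.0 with the Agent, Model and RandomActivation classes; the tumor-specific rule and parameter names such as growth_factor are illustrative assumptions only.

    from mesa import Agent, Model
    from mesa.time import RandomActivation

    class TumorCell(Agent):
        """One tumor cell; stem cells may spawn transitory cells."""
        def __init__(self, unique_id, model, stem=False):
            super().__init__(unique_id, model)
            self.stem = stem
            self.state = "normal"   # e.g., normal / quiescent / dead

        def step(self):
            # Illustrative rule: stem cells divide with a model-level probability.
            if self.stem and self.model.random.random() < self.model.growth_factor:
                child = TumorCell(self.model.next_id(), self.model, stem=False)
                self.model.schedule.add(child)

    class TumorModel(Model):
        """The Model class owns the environment and the scheduler."""
        def __init__(self, n_stem=6, growth_factor=0.175):
            super().__init__()
            self.growth_factor = growth_factor
            self.schedule = RandomActivation(self)  # activates every agent each step
            for _ in range(n_stem):
                self.schedule.add(TumorCell(self.next_id(), self, stem=True))

        def step(self):
            self.schedule.step()

    model = TumorModel()
    for _ in range(10):
        model.step()
    print("living agents:", model.schedule.get_agent_count())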
To study the behavior of the model under different conditions, we needed to collect the relevant data of the model while it was running; the DataCollector class is provided for this task. The adaptable modules of Mesa allow us to make changes to existing ABMs in order to conform with the requirements of our future framework. Also, monitoring data management when processing actions happen in parallel is straightforward in Mesa, since each module has a Python part which runs on the server and turns a model state into JSON data (Masad & Kazil, 2015). A further advantage of Mesa is its browser-based visualization, which allows the user to watch the model while it runs in the browser.

Tumor progression is a complex multistage process, and the tumor cells have to acquire several distinct properties either sequentially or in parallel. The first problem using the NetLogo framework for simulation was its BehaviorSpace tool, which supports only multithreading; in this way, performance is limited by the number of cores of the local infrastructure. To solve the local processing problem, we ran the parametric simulations on an HPC cluster in order to reduce the time necessary to explore a given model data space. Since we needed to distribute our model over the distributed architecture to explore the model states at the microenvironment scale, it became apparent that NetLogo also has scalability issues in designing graph networks. Although extensions are available for scaling the model up in graph form, performance still depends on the size of the tumor, and the process can be very slow, taking days to finish an experiment once checkpoints and performance bottlenecks on an HPC cluster are taken into account. We had to translate and move our model to a new framework, adopting the style and structure of Mesa to facilitate our distributed executions. The newest changes with respect to the old scenario are an increased population of agents, a graph-based representation of the biological network, and the consideration of multi-state and multi-scale components (An 2008).

The initial design of the multi-scale architecture of our tumor growth model in Python is derived from acute inflammation and is based on key factors such as angiogenesis. Tumor angiogenesis is critical for tumor growth and maintenance (Kaur et al., 2005). Our new model strategy begins with the identification of a minor population of cells with the characteristics of "tumor-initiating" cancer stem cells; in the assumed tumor, these cancer stem cells reside in close proximity to the blood vessels. Therefore, we chose an angiogenic switch for our model, which has to balance this dynamic between angiogenic and anti-angiogenic factors. In the anti-angiogenic case, there is a probability of a quiescent tumor. In this agent-based model, tumor cells are affected, become inflamed, and turn quiescent. Table 1 shows the range of parameters we implemented to control these factors.
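Continuing the sketch above, run-time data collection and the control factors of Table 1 could be wired up with Mesa's DataCollector as follows; the factor names and values are placeholders standing in for the ranges of Table 1, which is not reproduced here.

    from mesa.datacollection import DataCollector

    # Illustrative control factors (stand-ins for the ranges in Table 1).
    controls = {
        "angiogenic_factor": 0.5,       # varied from 0.1 to 0.9 in the experiments
        "anti_angiogenic_factor": 0.2,  # raises the chance of a quiescent tumor
    }

    datacollector = DataCollector(
        model_reporters={
            "living_cells": lambda m: m.schedule.get_agent_count(),
            "quiescent": lambda m: sum(1 for a in m.schedule.agents
                                       if a.state == "quiescent"),
        }
    )

    # Inside TumorModel.step() one would call self.datacollector.collect(self),
    # and afterwards retrieve a pandas DataFrame with
    # model.datacollector.get_model_vars_dataframe().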
Based on these factors, we can simulate the evolution of metastasis and measure the tumor volume ratio along with the dynamics of the tumor under the influence of the factors. To select a network topology for the tumor model, we selected a random graph. The Barabási-Albert model is a scale-free network and one of the most basic models, since it describes most biological networks, especially in evolutionary models; but since there is a need to manage the cell interactions and the behavior of stromal cells within the tumor microenvironment, and due to the time-dependency of the connections, we developed our Python agent-based model on an Erdős-Rényi topology.

The visualization of the model is a network of nodes (using the Mesa architecture) that shows the distribution of agents and their links. In the Mesa architecture, the creation of agents occurs through the assignment of nodes to a graph. A scheduler (the time module) activates the agents, stores their locations, and updates the network. The total operation time is directly related to the number of steps necessary to deploy all the agents. Since each agent changes between three states, the process goes on until the tumor agents' volume appears as metastasis. The interactive visualization in Mesa helps us to identify insights and generate value from connected data. In data analysis for the life sciences, such as tumor analysis, which is mostly about connections and dependencies, the large amount of data makes it difficult for researchers to identify insights. Graph visualization makes large amounts of data more accessible and easier to read, as illustrated in Figure 2.

According to (Peskin 2009), we need a statistical method that leads to volume ratio measurements of the tumor, as can be seen in Figure 3. To calculate tumor volume following (Monga 2000), we needed the tumor width (W) and tumor length (L), as presented in Formula 1. This calculation, which works quite well for clinical purposes, was made under the assumption that solid tumors are more or less spherical, like the version we had before in Wilensky's NetLogo tumor model, but it is not appropriate for metastatic disease, which, due to the phase transition and spreading dynamics, will be described by graph representations in the future. According to Rai (2017), a model that has the same size and number of connections as a given network can maintain the degree sequence of that network. By generating a random network with a given average degree (K) and initial size (N), we can construct the degree sequence (m), as presented in Formula 2. For the volume calculation, we start by assuming that every cell inside the tumor has three states (normal, dead, and metastatic) and is connected to the tumor by edges. With the given degree K from three to eight and N initial nodes, we constructed an ER (Erdős-Rényi) network, whose degree sequence (m) leads us to the tumor volume ratio.

The pseudo-code that uses the dynamics of the degree sequence, under the influence of the above factors, to produce the tumor volume in the graph network is described in Algorithm 2. In Figure 4, we executed the model with different angiogenesis control factors (0.1 to 0.9) for 60, 360, 650, 100 and 1200 cancer stem cells (CSCs) to produce the tumor volume analysis based on the density distribution of the graph.
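A sketch of this graph-based bookkeeping is given below: generate an Erdős-Rényi network for N initial cells with average degree K, track the three node states, and derive a volume ratio from the counts. Formulas 1 and 2 and Algorithm 2 are not reproduced in the text, so both the caliper-style ellipsoid approximation V = L * W^2 / 2 and the state-update rule used here are assumptions rather than the paper's definitions.

    import random
    import networkx as nx

    def ellipsoid_volume(length_mm, width_mm):
        # Common caliper approximation V = L * W^2 / 2 (assumed stand-in for Formula 1).
        return length_mm * width_mm ** 2 / 2.0

    def build_tumor_graph(n_cells=360, avg_degree=4, seed=0):
        # Erdos-Renyi graph with edge probability p = K / (N - 1), so the
        # expected node degree is close to K (stand-in for Formula 2).
        p = avg_degree / (n_cells - 1)
        g = nx.erdos_renyi_graph(n_cells, p, seed=seed)
        for node in g.nodes:
            g.nodes[node]["state"] = "normal"
        return g

    def step(g, angiogenic_factor=0.5, rng=random.Random(0)):
        # Illustrative update: angiogenesis pushes high-degree cells toward
        # metastasis, otherwise a small fraction of cells dies.
        for node, _degree in sorted(g.degree, key=lambda kv: -kv[1]):
            if rng.random() < angiogenic_factor:
                g.nodes[node]["state"] = "metastatic"
            elif rng.random() < 0.05:
                g.nodes[node]["state"] = "dead"

    def volume_ratio(g):
        states = [g.nodes[n]["state"] for n in g.nodes]
        return states.count("metastatic") / len(states)

    g = build_tumor_graph()
    for _ in range(20):
        step(g)
    print("largest degrees:", sorted((d for _, d in g.degree), reverse=True)[:5])
    print("metastatic volume ratio:", volume_ratio(g))
    print("ellipsoid volume for L=10 mm, W=6 mm:", ellipsoid_volume(10, 6))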
CONCLUSION

In summary, agent-based modeling in Python with the Mesa project represents a valuable and time-saving simulation technique, which allowed us to re-implement our tumor model over a network-based space. Mesa enables a modeler to quickly write the code and quickly explore the results. Mesa also collects the results easily and helps us with data integration, which is very important in the case of phase transitions and pattern-oriented modeling. We migrated successfully from NetLogo to Python-Mesa, which will allow us to re-integrate our system more quickly and repeat the simulations performed on the previous system with more data in considerably less time.

In the future, we will extend the model to run on cloud infrastructure in parallel. This could be an impressive achievement for fast analysis purposes in clinics, both on the predictive diagnostic and the therapeutic side. As advantages of this contribution, we have a scalable model that, together with Python, can be distributed over an HPC architecture, eliminating the limitations of other environments (e.g., NetLogo, due to the memory limitations of the JVM). When Python + Mesa is deployed over an HPC architecture, there will be a notable increase in scalability and performance since, in this type of environment, the simulation will not be limited by memory. One of the most important disadvantages is the model visualization and animation for oncologists. NetLogo has a cell visualization and animation that can be useful to medical environments. Mesa only has the visualization of the network of agents and their interconnections, so graphic functions must be developed to see the evolution of the cells, the degree of metastasis/apoptosis, and other parameters relevant to the medical analyst. Another important aspect that must be addressed is the functional validation of the whole model, since at this moment only a verification and a partial validation of the behavior of the affected cells has been carried out.

Figure 2: Graph visualization for the three states (normal, dead and metastatic) shown in three colors.
2019-09-04T15:33:09.000Z
2019-09-04T00:00:00.000
{ "year": 2019, "sha1": "91955a1699e672fde15c8ea27d588726b8f88edd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "91955a1699e672fde15c8ea27d588726b8f88edd", "s2fieldsofstudy": [ "Computer Science", "Medicine", "Biology" ], "extfieldsofstudy": [ "Computer Science", "Biology" ] }
22214220
pes2o/s2orc
v3-fos-license
Shape-parameterized diffuse optical tomography holds promise for sensitivity enhancement of fluorescence molecular tomography : A fundamental approach to enhancing the sensitivity of the fluorescence molecular tomography (FMT) is to incorporate diffuse optical tomography (DOT) to modify the light propagation modeling. However, the traditional voxel-based DOT has been involving a severely ill-posed inverse problem and cannot retrieve the optical property distributions with the acceptable quantitative accuracy and spatial resolution. Although, with the aid of an anatomical imaging modality, the structural-prior-based DOT method with either the hard- or soft-prior scheme holds promise for in vivo acquiring the optical background of tissues, the low robustness of the hard-prior scheme to the segmentation error and inferior performance of the soft-prior one in the quantitative accuracy limit its further application. We propose in this paper a shape-parameterized DOT method for not only effectively determining the regional optical properties but potentially achieving reasonable structural amelioration, lending itself to FMT for comparably improved recovery of fluorescence distribution. Introduction Fluorescence molecular tomography (FMT) in the near-infrared (NIR) range is becoming a powerful modality for mapping the three-dimensional (3-D) distribution of fluorochromes in live small animals [1,2]. Quantum yield and fluorescing lifetime reconstructed by FMT quantitatively reveal conditions of diseased tissues and provide information useful for diagnoses [3]. In FMT, the accuracy of reconstructed fluorescence distributions highly depends on the knowledge of the tissue optical heterogeneities for correct modeling of the light propagation [4,5]. The common approach is to assume a homogeneous optical background, which definitely exerts adverse effects on the reconstruction sensitivity and accuracy. Abascal et al. demonstrated that the normalized Born approximation approach tolerates uncertainty in the absorption heterogeneity to some extent, but fails to compensate for the effect of the unknown scattering heterogeneity in the model [6,7]. Another strategy is to fuse the structural information from anatomical imaging modalities such as X-ray CT and MRI to the modeling process [8][9][10], and to assign the experimentally measured or literaturepublished optical properties to different tissue regions [11]. This approach, despite considerably improving the images, usually leads to noticeable quantification errors because of the individual variations. The advanced method uses diffuse optical tomography (DOT) to estimate the optical property distributions within tissues in vivo, which are then incorporated into the FMT inversion to improving the quality of reconstructed images [12][13][14]. Tan et al. directly reconstructed optical heterogeneities using DOT and applied them to the FMT [13]. To improve accuracy, Correia et al. proposed a method that reconstructed the diffusion coefficient using continuous-wave (CW) DOT with the assist of a prior estimation of the homogeneous background absorption [14]. Although, it has been demonstrated that the simultaneous unique recovery of the scattering and absorption coefficients might not be feasible using a CW-DOT measurement, it is still an open problem whether this nonuniqueness might be effectively suppressed in a reasonably-regularized or well-posed inversion process, where a solution with acceptable accuracy is uniquely generated [15]. 
Furthermore, due to the strong scattering nature of most tissue types and limited availability of the data, the traditional voxel-based DOT involves a severely ill-posed inverse problem, and in general suffers from low resolution and accuracy [15,16]. To improve the inferior fidelity intrinsic to the conventional DOT, a regularization strategy is generally incorporated into its inversion formulation, which enhances the spatial resolution and the quantitative accuracy of DOT reconstruction by constraining the relevant optimization issue in terms of structural or anatomical a priori information [17][18][19][20][21]. Spatially varying Tikhonov regularization was used to reduce high frequency noises in the reconstructed images by Pogue et al [22]. Boverman et al. evaluated the effect of prior segmentation of breast into glandular and adipose tissues [23]. Yalavarthy et al. regularized the inverse solution by use of MRI-derived breast geometry [17]. In our previous work a structural-prior-based method was proposed for in vivo obtaining the background optical property sets of the tissue regions [24]. This method uses prior structural information from Xray CT and/or MRI modalities to impose either a hard or a soft constraint on the reconstruction process, referred to as hard-and soft-prior schemes, respectively. The major advantage of using a hard-prior scheme is that the total number of unknowns is dramatically reduced, making the inversion better-posed and thereby significantly enhancing the DOT reconstruction quality. However, its stability is critically dependent on the accuracy of the structural priors, and the performance degraded when incomplete or distorted structural priors are employed. In contrast, the soft-prior scheme is robust and unbiased in the presence of uncertainty in structural priors, but exhibits an inferior performance in the quantitative accuracy to the hard-prior one as the confidence in the prior structural information is high. In multi-modality imaging applications such as XCT-DOT or MRI-DOT, it is possible to segment the small animal into a small number of sub-domains with constant piecewise optical properties [25,26]. The whole domain to be imaged can be fairly parameterized by decomposing the surfaces that bound those sub-domains over a spatial basis and therefore describing them in a finite number of shape coefficients. With this strategy the DOT issue reduces to a "coarse-grain" version that jointly recovers the shape-coefficient and opticalproperty sets associated with the sub-domains, referred to as shape-parameterized DOT. Since the number of unknowns is greatly reduced, which in turn alleviates the ill-posedness of the inverse issue, an acquisition of the domain optical structure can be achieved with improved accuracy. In a broad sense, the shape-parameterized inversion methodology in tomography regime provides an effective way of combating the ill-posedness of the voxel-based scheme through parametric decomposition of the sub-domain surfaces and homogeneity assumption on the sub-domain physical properties, and has been successfully applied to FMT [27], bioluminescence tomography [28], electrical impedance tomography [29], and DOT [30,31]. 
An alternative to the above shape-parameterized DOT is the parametric level-set method, which implicitly defines the object shape as the zero level set of a parameterized Lipschitz-continuous object function and estimates both the piecewise constant optical properties in the anomaly and background and the weight coefficients in the object function expansion [32,33]. This method has been adopted to counter the ill-posedness of DOT inversion [33,34], and was recently used to estimate 3-D complex tubular structures in breast tissue [35]. Nevertheless, applying the method to capture the highly heterogeneous optical background of a realistic domain (such as a whole-body small animal) involves a wholly blind, multi-level and multi-geometry problem, and the technical aspects, in particular the choice of the object function and its expansion basis, need to be carefully tackled.

In this paper, we consider employing a shape-parameterized DOT scheme for effectively obtaining the background optical structure (i.e., the geometrical and optical priors) to improve the sensitivity of FMT. To enhance the robustness of the reconstruction, an iterative two-step scheme is proposed for joint estimation of the optical properties and shape coefficients, where, alternately, a hard-prior regularized optical-reconstruction is used in the first step for acquiring the optical properties of the sub-domains with the aid of some structural priors, and a shape-reconstruction follows in the second step for achieving reasonable structural amelioration based on the spherical harmonics parameterization of the interior sub-domains [27,30,36]. Since the introduction of some imperfect structural priors into the optical-reconstruction is inevitable in practice, due to uncertainties in the image segmentation or performance limitations of the anatomical imaging modality, a geometrical adjustment of the sub-domains is crucial to improving the accuracy of the optical-reconstruction; accordingly, the global convergence of the shape-reconstruction in the second step can also be accelerated with the improved optical-reconstruction. Both steps promote each other in such a robust way as to approach the true optical structure of the imaging domain. For a methodological validation, simulations and experiments are conducted in which the background optical structure obtained with the shape-parameterized DOT is put to use for sensitivity enhancement of FMT. The comparative investigations demonstrate the notable advantage of the proposed method over the previously developed hard- or soft-prior DOT schemes.

Parametric description of domain shapes

It is feasible to fairly parameterize a closed domain by expanding its boundary over the spherical harmonics basis of different degrees, depending on the regularity and smoothness of the surface [27,30,36]. Each closed sub-domain surface is then described by 3(W+1)^2 expansion coefficients for a spherical harmonics expansion of up to degree W. In addition to the sub-domain optical properties, a total of 3L(W+1)^2 shape coefficients can therefore model the L disjoint sub-domains, denoted by γ. Although a spherical harmonics expansion with a higher degree describes more complicated geometry and therefore yields a better shape description, it accordingly brings a greater computational burden and potentially aggravates the ill-posedness of the reconstruction due to the rapid increase in the number of expansion coefficients. As a demonstration of the methodology, we simply use a 2-degree (W = 2) spherical harmonics expansion throughout the study.
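Written out, a standard real spherical-harmonics surface parameterization consistent with these counts (27 coefficients per sub-domain for W = 2 and 48 for W = 3, as used later) is, for sub-domain l,

\[
\mathbf{r}_l(\theta,\varphi)=\sum_{w=0}^{W}\sum_{m=-w}^{w}\mathbf{c}_{l,w,m}\,Y_w^{m}(\theta,\varphi),
\qquad
\mathbf{c}_{l,w,m}=\bigl(c^{x}_{l,w,m},\,c^{y}_{l,w,m},\,c^{z}_{l,w,m}\bigr),
\]

so that each closed surface needs 3(W+1)^2 coefficients and the L sub-domains together need 3L(W+1)^2; the exact normalization and real/complex convention of the basis is not specified in the text and is assumed here.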
Forward calculation

Since the CW method is technically the simplest scheme and potentially offers greater parallelism at lower cost, we choose the CW mode in this work. In the CW-FMT regime, under the assumption of multi-domain geometries and piecewise constant optical properties in each sub-domain, the photon density Φ_υ(r) at the excitation or emission wavelength (υ = x, m) is governed in each sub-domain by the steady-state diffusion equation

-∇·[κ(r)∇Φ_υ(r)] + μ_a(r)Φ_υ(r) = q_υ(r),

where q_υ(r) is the source term and κ(r) = 1/[3(μ_a(r) + μ_s′(r))] is the diffusion coefficient. The boundary mesh of each sub-domain as required by the BEM can be readily generated by mapping a mesh on a spherical surface onto the sub-domain surface according to its spherical harmonics coefficients. We finally obtain a set of linear equations with regard to the photon density Φ_υ on the boundary meshes, where the system matrix takes the form of densely populated asymmetric blocks and depends nonlinearly on the shape coefficients γ and the sub-domain optical properties μ; q_υ represents the source term. According to the boundary integral equation, the photon density for an arbitrary interior point r ∈ Ω can, by definition, be calculated directly through the boundary integral of the photon density over ∂Ω [37], from which the measurable flux at the boundary sites can be obtained by applying Fick's law.

Inverse problems

In the most general case, an adjunct DOT procedure is needed to acquire the background optical properties of the imaging domain prior to the FMT reconstruction [4]. This optimizes the light propagation model and enables high-fidelity reconstruction of the fluorescence target. Under the aforementioned regional homogeneity assumption on the background optical properties and the spherical harmonics parameterization of the sub-domains, both the DOT and FMT inversions can be derived within the BEM framework.

DOT reconstruction

The use of the hard- or soft-prior DOT scheme has been investigated for obtaining the background optical properties and shows a strong dependency on the confidence of the anatomical priors. To suppress this adversity, we present a novel shape-parameterized DOT scheme for effectively acquiring the background optical structure. The three schemes can be expressed as follows, where I_x is the column vector enumerating the M excitation measurements on the boundary ∂Ω, F_x is the forward operator describing the photon migration, γ^(0) is the initial expansion coefficient set of the sub-domain geometries, estimated from the structural a priori information, and μ^(0) denotes the initial background optical properties of the imaging domain, which are commonly assumed to be homogeneous and set to those of muscle. The two former schemes are essentially derived from a voxel-based DOT, with the soft-prior one modifying the minimization functional to include a penalty term for the structural priors, tolerating imperfect priors to some extent at the cost of low quantitativeness, and the hard-prior one utilizing the regional homogeneity assumption of the optical properties to reduce the voxel-oriented reconstruction to a domain-oriented inversion, being unbiased only for accurate priors [38]. In the soft-prior scheme, Eq. (4), each voxel is regarded as an element with its position indicated by r, while in the hard-prior scheme, Eq. (5), the reconstruction is based on the organ-relevant sub-domains, which are instead indexed by R. The details of the hard- and soft-prior schemes can be found in Refs. [24,38]. We herein focus on the shape-parameterized case. For the shape-parameterized DOT reconstruction, the nonlinear inverse problem in Eq.
(2) (with υ = x) is linearized to construct the following Newton-Raphson iterative procedure, in which (μ, γ) denotes the combined set of optical properties and expansion coefficients, (Δμ, Δγ) being the optical and shape perturbations, respectively; λ is the relaxation factor in the range [0,1]; and J_μ and J_γ are the Jacobian sub-matrices with respect to the optical properties and the shape coefficients, respectively. Although it is in principle feasible to simultaneously reconstruct the optical property and shape coefficient sets with a single iterative formulation of the inversion, a satisfactory solution might be difficult to find without a proper weight-scaling measure, because the measurements are sensitive to the two parameter sets with different orders of magnitude. To avoid this difficulty, we split the reconstruction procedure into two successive steps: an optical-reconstruction part and a shape-reconstruction part. This eventually leads to an iterative alternating scheme for the joint estimation of the optical properties and shape coefficients, as shown in Fig. 1. The two-step reconstruction procedure is repeated until the relative change in the forward calculation F_x between two successive iterations drops below a threshold ε, i.e., ||F_x^(k+1) - F_x^(k)|| / ||F_x^(k)|| < ε.

At the k-th iteration of the two-step reconstruction procedure, the optical-reconstruction is performed with the hard-prior DOT scheme to obtain the optical properties for the (k+1)-th iteration, μ^(k+1), on the basis of the structural a priori information of the k-th iteration, γ^(k); the shape-reconstruction step is then conducted with the parameterization to obtain the spherical harmonics coefficients for the (k+1)-th iteration, γ^(k+1), based on the updated optical properties μ^(k+1). Figure 1 summarizes the procedure as follows. Initialization: (1) initialization of the optical properties: set them homogeneously to the background values; (2) initialization of the shape coefficients: set the centers to the center coordinates of the sub-domains and determine the details using a least-squares fitting between the boundaries of the sub-domains and the spherical harmonics approximation. Optical-reconstruction: obtain the optical properties using the hard-prior DOT with the structural a priori by solving the relevant linear inversion. Shape-reconstruction: obtain the shape coefficients using the shape-reconstruction with the optical properties a priori by solving the relevant linear inversion.

Optical-reconstruction

For the optical property reconstruction, Eq. (7) for the voxel-based DOT involves column vectors of length N denoting the optical properties (μ_a and μ_s′) and their perturbations, respectively, at the N discretizing voxels; λ_μ is the relaxation factor; the Jacobian consists of sub-matrices with respect to the absorption and reduced scattering coefficients, respectively, which can be conventionally calculated from the photon density and the Green's function values at the N voxels obtained with the BEM through the boundary integral equation [37]. With the regional optical homogeneity assumption of the hard-prior scheme, Eq. (9) is reduced to a region-based form with the aid of the structural a priori information γ^(k), in which the reduced Jacobian matrix is obtained by multiplication with R, an N × (L+1) voxel-to-region mapping matrix whose entry is given as [19]: R(n, l) = 1 if voxel n belongs to region l, and 0 otherwise.

Shape-reconstruction

With the optical properties a priori, μ^(k+1), reconstructed in the previous optical-reconstruction step, the shape-reconstruction at the k-th iteration updates the spherical harmonics coefficients γ^(k+1). The inverse problem in Eq.
(7) is solved within the framework of a nonlinear cost-minimization Levenberg-Marquardt method [17,18,39], where λ_γ is the relaxation factor; the Jacobian matrix J_γ = ∂F_x/∂γ is constructed using a perturbation method based on the BEM forward calculation; D is a regularization matrix of diagonal form whose diagonal elements are the norms of the columns of J_γ; and α is the regularization parameter. Since the different types of shape coefficients contribute differently to the measurement data, we adopt a "center-priority" strategy for the shape coefficient reconstruction, i.e., the low-degree coefficients, such as those representing the center positions of the sub-domains, are reconstructed in the first iterations to stabilize the algorithm, and the high-degree coefficients describing the finer details of the sub-domain shapes are then appended to the reconstruction process in the later iterations. This technique can effectively enhance the robustness of the reconstruction procedure against the initial conditions, and will be further discussed in the discussion section.

Initialization

It is vital for the Newton-Raphson iterative procedure to be properly initialized. In the scheme, the initial optical properties μ^(0) are set homogeneously to the background (muscle) values, and the initial shape coefficients γ^(0) are estimated from the structural a priori information by a least-squares fitting between the sub-domain boundaries and their spherical harmonics approximations.

FMT reconstruction

In the FMT reconstruction, the fluorescence yield map η(r) is recovered on the basis of the obtained optical structure {μ, γ}, using a linear system [6,41] that relates the emission measurements linearly to η(r) through the excitation photon density and the Green's function at the emission wavelength.

Numerical model

Simulation validations are performed on a cylindrical domain of turbid medium with 17 mm radius and 35 mm height, as shown in Fig. 2. The domain is divided into five sub-domains with different optical properties to emulate the heart, liver, lungs, and muscle of a mouse torso embedded in a cylindrical imaging chamber full of matching fluid. The geometries (shapes and positions) of the organs are extracted from the Digimouse, a 3-D digital mouse model that was developed from CT images and cryosection data and provides a simplified atlas of mouse anatomy [42]. These organ geometries are approximated with 2-degree spherical harmonics, respectively, leading to 27 shape coefficients to be reconstructed for each organ. A cylindrical fluorescent target with 3.5 mm radius and 6 mm height is placed in the liver with its center at (x = -4 mm, y = 0 mm, z = 11 mm) and its optical properties being the same as those of the liver. For the performance assessment of the shape-parameterized DOT for improving FMT sensitivity, two fluorescent target-to-background contrasts (TBCs) of 5:1 and 3:1 are considered, respectively, with the background fluorescent yield fixed at 0.001 mm−1. 32 coaxial source-detector optodes are evenly distributed around the chamber in 4 imaging planes (8 optodes per plane) at z = 4, 13, 22, and 31 mm, respectively, collecting a total of M = 1024 measurements. Assuming that a high-sensitivity photon-counting system is used for small animal imaging, Gaussian noise with levels of 1% and 3% is added to the excitation and emission measurements, respectively, to mimic the shot-noise process of the received photons in experimental scenarios.
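Before turning to the results, a schematic sketch of the alternating two-step reconstruction described above (not the authors' solver) is given below; the forward model, the Jacobian functions, and the stopping threshold are placeholders for the BEM quantities defined in the previous sections, and the damped updates are only one plausible realization of the hard-prior and Levenberg-Marquardt steps.

    import numpy as np

    def two_step_reconstruction(I_x, forward, jac_mu, jac_gamma,
                                mu0, gamma0, lam_mu=0.75, lam_gamma=0.8,
                                alpha=1e-2, eps=1e-3, max_iter=50):
        """Alternate regional optical updates and spherical-harmonics shape
        updates until the forward prediction stops changing (schematic only)."""
        mu, gamma = mu0.copy(), gamma0.copy()
        pred_old = forward(mu, gamma)
        for _ in range(max_iter):
            # Step 1: optical-reconstruction (regional properties, shapes fixed).
            J = jac_mu(mu, gamma)             # reduced Jacobian w.r.t. regional properties
            r = I_x - forward(mu, gamma)
            mu = mu + lam_mu * np.linalg.lstsq(J, r, rcond=None)[0]

            # Step 2: shape-reconstruction (LM-style damped update, mu fixed).
            J = jac_gamma(mu, gamma)          # Jacobian w.r.t. shape coefficients
            r = I_x - forward(mu, gamma)
            D = np.diag(np.linalg.norm(J, axis=0))   # column-norm regularizer
            delta = np.linalg.solve(J.T @ J + alpha * D, J.T @ r)
            gamma = gamma + lam_gamma * delta

            pred = forward(mu, gamma)
            if np.linalg.norm(pred - pred_old) / np.linalg.norm(pred_old) < eps:
                break
            pred_old = pred
        return mu, gamma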
DOT for background optical structures

To better demonstrate the superiority of the proposed shape-parameterized DOT method over the previous structural-prior-based DOT with either the hard- or soft-prior scheme for acquiring the background optical structure to improve FMT, the optical properties of the sub-domains are obtained using the three schemes, with the initial optical properties homogeneously set to the typical values of living muscle: μ_a^(0) = 0.0040 mm−1 and μ_s′^(0) = 0.80 mm−1. To mimic a scenario with imperfect structural priors in practice, we distort all the initial shape coefficients of the four sub-domains (heart, liver, left lung, and right lung), extracted with the aforementioned least-squares fitting, by randomly adding errors of ±15%, which is within the typical deviation range of a successful organ segmentation according to our practice. These initial shape coefficients are also used to construct the structural a priori information for the hard- and soft-prior regularizations. Here the relaxation factors λ_μ and λ_γ are chosen to be 0.75 and 0.8, respectively.

Table 1 lists the reconstructed sub-domain optical properties in contrast to the true ones [43][44][45] (mean values of μ_a and μ_s′ in each sub-domain for the soft-prior scheme). Figure 3 shows the relative errors of the reconstructed absorption and reduced scattering coefficients for the five different sub-domains in Table 1. From the results it is clearly observed that the proposed shape-parameterized DOT method has a notable advantage over the previous hard- and soft-prior schemes for obtaining the sub-domain optical properties; nevertheless, its performance is still not perfect. As can be noticed in Fig. 3(a), the reconstructed absorption coefficients of the liver and heart exhibit large deviations from their true values, and the reduced scattering coefficient of the heart shows a somewhat large disparity with its true value, as shown in Fig. 3(b).

The iterative process of estimating the shape coefficients of the four sub-domains (heart, liver, left lung, and right lung) is shown in Fig. 4, where the red meshes denote the reconstructed sub-domain geometries. As shown in the figure, the result is very promising, with good convergence of the associated shape coefficients. Of course, because of the presence of the measurement noise and the inaccuracy of the reconstructed optical properties (Table 1 and Fig. 3), a fully accurate recovery of the true geometries is impossible to achieve. To assess the effectiveness of the DOT reconstruction, two metrics are defined: one calculating the residue between the simulated data and the forward model for the overall performance, referred to as the residue-metric ε_I (Eq. (15)), and the other calculating the distance between the reconstructed and true shape coefficients for the shape-reconstruction, referred to as the shape-metric ε_γ (Eq. (16)), where the reference γ denotes the set of true shape coefficients in the simulation model. Figures 5(a) and 5(b) show the residue-metric ε_I and the shape-metric ε_γ versus the iteration index k, respectively. It is seen that both metrics, ε_I and ε_γ, decrease rapidly in the first iterations, since the measurements are especially sensitive to the low-degree coefficients of the sub-domains, and then reach their respective minima, thanks to the successful recovery of the finer details.
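A per-iteration evaluation of the two metrics could look as follows; since the exact normalizations of Eqs. (15) and (16) are not reproduced in the text, the relative-norm forms below are assumptions rather than the paper's definitions.

    import numpy as np

    def residue_metric(I_x, predicted):
        # Relative mismatch between the measured excitation data and the forward model.
        return np.linalg.norm(I_x - predicted) / np.linalg.norm(I_x)

    def shape_metric(gamma_k, gamma_true):
        # Distance between the reconstructed and the true shape coefficient sets.
        return np.linalg.norm(np.asarray(gamma_k) - np.asarray(gamma_true))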
The results demonstrate the applicability of the shape-parameterized DOT method in acquiring the optical structure and also its performance superiority over the previous hard- and soft-prior schemes.

FMT reconstruction

The obtained optical structures for the simulation model are then used directly in the BEM forward calculation of the FMT reconstruction. Figure 6 illustrates the sliced yield-images at z = 11 mm obtained using the five kinds of optical structures (true, initial, hard-prior, soft-prior, and shape-parameterized, respectively) for the two fluorescent TBCs. To facilitate the comparison of the reconstructed images with the true ones, the cut-lines along the X axis in the reconstructed images, i.e., the X-profiles, are extracted, as shown in Fig. 6(f). From Fig. 6(b), it is obvious that the image obtained using the initial homogeneous optical background has a large deviation in the target location for the contrast of 3:1. Because the hard-prior scheme produces significant errors in the optical property reconstruction with the imperfect initial structural a priori information, and the soft-prior one yields underestimated optical property values for almost all the sub-domains (Table 1 and Fig. 3), the fluorescence reconstructions with these defective optical structures result in inferior performances in terms of the location and especially the quantitative accuracy of the target, as shown in Figs. 6(c) and 6(d), respectively. Comparatively, the images obtained with the optical structure reconstructed by the shape-parameterized DOT reasonably disclose the target for both TBCs, as shown in Fig. 6(e), and clearly perform better than the other techniques presented.

Instrumentation and phantom

A phantom experiment is performed using a CW-FMT system of photon-counting mode, as shown in Fig. 7(a). The system uses a 660 nm diode laser (LTC100/LPS-660-FC, Thorlabs), specifically for the Cy5.5 dye with its peak excitation and emission wavelengths at 670 and 710 nm, respectively. The excitation light from the laser, with its intensity adjusted appropriately by a variable attenuator (FVA-3100, EXFO, Canada), is coupled into a source fiber with 62.5 μm core diameter and 0.22 numerical aperture (NA). The transmitted light is collected by 8 detection fibers of 500 μm core diameter and NA = 0.37, evenly distributed on the surface of the phantom from 101.25° to 258.75° opposite to the incidence position (0°) with their tips 1 mm apart from the phantom surface, i.e., in a non-contact configuration, as shown in Fig. 7(b), and coupled into an 8 × 1 fiber-optic switch with its output collimated for normal incidence onto a successive motorized filter wheel housing a bandpass interference filter (Cy5.5-A Emitter, Semrock). The filtered light finally enters a PMT photon-counting head (H7155-01, Hamamatsu, Japan) coupled with a counting unit for the photon-counting detection. By rotating the phantom at an angular interval and translating it by a vertical displacement, a 3-D spatial sampling process can be achieved with a programmed pattern. The whole experimental setup is placed in a dark environment to shield against stray light. A cylindrical solid phantom with 15 mm radius and 80 mm height is fabricated from polyformaldehyde; its background optical properties are those of the bulk material [Fig. 7(b)].
Three cylindrical holes, referred to as sub-domains #2, #3, and #4 (the solid phantom itself is referred to as sub-domain #1), with radii of 5, 4, and 3 mm and heights of 40, 30, and 20 mm, respectively, are drilled 6 mm away from the cylinder central axis, as shown in Fig. 8. In our experiments, 3-D data sets are acquired at 5 imaging planes of z = 20, 30, 40, 50, and 60 mm, respectively, with an angular interval of 22.5°, which provides 16 equally spaced projection angles for each scanning plane. As a result, two sets of M = 640 measurements are acquired at the excitation and emission wavelengths using integration times of 100 and 500 ms, respectively. Under the above experimental setup, the minimal photon counting numbers in the excitation and emission measurements are 1.75 × 10^4 and 2.32 × 10^3, respectively. This means that the used system can achieve <1% and <3% noise levels (inversely proportional to the square root of the photon count) for the excitation and emission measurements, respectively, in agreement with the simulation case.

DOT reconstruction

To construct an optically heterogeneous background, the phantom sub-domains #2 and #3 are filled with mixtures of India ink and Intralipid solution with absorption/reduced scattering coefficients of 0.008/1.6 mm−1 and 0.012/2.4 mm−1, respectively. A fluorescent target is formed by filling sub-domain #4 with a mixture of Intralipid solution, India ink, and Cy5.5 dye of ~2 μM and ~0.5 μM concentrations, respectively, such that the optical properties of sub-domain #4 are approximately the same as those of sub-domain #1. The reconstructions are achieved using a difference imaging scheme with a reference phantom that is geometrically and optically equivalent to the background one. The initial shape coefficients of the two sub-domains (#2 and #3) are also extracted with the aforementioned least-squares fitting and then deviated by randomly adding errors of ±15%, similar to the simulation case. The choice of the relaxation factors is consistent with the simulations.

Table 2 lists the reconstructed sub-domain optical properties of the phantom using the hard-prior, soft-prior, and shape-parameterized DOT methods, respectively. The relative errors of the reconstructed absorption and reduced scattering coefficients for the three sub-domains in Table 2 are shown in Fig. 9. The experimental reconstruction similarly shows the superiority of the proposed shape-parameterized DOT method over the previous hard- and soft-prior schemes in the optical structure acquisition of the imaging domain, while the same defects as in the simulations are still observed: the reconstructed absorption coefficients of sub-domains #2 and #3 exhibit a certain amount of error, as shown in Fig. 9(a), and the reconstructed reduced scattering coefficient of sub-domain #3 has a somewhat large deviation compared with its corresponding true value, as shown in Fig. 9(b). Figure 10 shows the iterative process of estimating the shape coefficients of sub-domains #2 and #3. The residue-metric and the shape-metric versus the iteration index k are presented in Figs. 11(a) and 11(b), respectively, to demonstrate the reasonability and convergence of the reconstruction. Figure 12 shows the obtained yield-images at z = 48 mm using the five optical structural backgrounds (true, initial, hard-prior, soft-prior, and shape-parameterized) for the two Cy5.5 concentrations (~2 μM and ~0.5 μM), as well as their X-profiles, respectively.
The results are consistent with the previous observations in the simulations: Fig. 12(b) shows that the approach of assuming a homogeneous optical background is unable to correctly recover the size and location of the target for the low concentration. Although, the shape-parameterized DOT method with a low degree spherical harmonics might still performs imperfectly in obtaining the phantom optical structure (Table 2 and Fig. 10), the FMT reconstruction with the aid of this DOT scheme generates consistently acceptable results, as shown in Fig. 12(e), which can be potentially optimized with higher degree approximations. Nevertheless, the hard-and soft-prior schemes, despite of their comparatively inferior performances in obtaining the background optical properties (Table 2 and Fig. 9), also provide an effective means of enhancing the FMT reconstructions, as shown in Figs. 12(c) and 12(d), and might even be better quantitative for the optical-reconstruction, especially in the case that the highly confident structural a priori is available. Discussions In practice, a successful application of the proposed shape-parameterized DOT method is dependent on two crucial factors: one is the availability of the reasonably accurate structural a priori; the other is completeness of the shape parameterization. The former normally requires a support from the anatomical imaging modalities with high soft tissue contrast, such as MRI or phase-contrast X-ray CT [8,46], and also can be potentially obtained by registry between the conventional micro-CT images and a standard digital mouse atlas. In both the methods some errors are inevitably introduced. The latter requires a high degree representation of the organ domains that means a significant increase in the shape coefficient number and therefore a degradation in the inversion condition. In order to assess the performance of the proposed shape-parameterized DOT scheme with increase in the spherical harmonics degree while without changing the measurement number, herein, we compare the DOT reconstructions with 2-and 3-degree spherical harmonics approximation for the surface descriptions, respectively. This time, the chambered Digimouse model is simplified to a medium with two sub-domains containing liver and muscle, as shown in Fig. 2. Figure 13 shows the surface of liver domain approximated with 2-and 3-degree spherical harmonics, leading to 27 and 48 shape coefficients, respectively. Table 3 lists the reconstructed optical properties of the two sub-domains, and the evolving process of the liver reconstruction with the two kinds of degrees of the spherical harmonics approximation are shown in Figs. 14 and 15, respectively. It is seen from the figures that, the results are satisfactory for both the cases and although the involvement of the low-degree approximation provides more overall accurate optical-reconstruction and faster shape coefficients convergence (Fig. 14), the high-degree approximation increases the fidelity of describing the liver geometry (Fig. 15). The FMT reconstructions with the obtained optical structures are compared for the same fluorescent target as in the simulations, as shown in Fig. 16, where the yield-images with the true optical structure, as shown in Fig. 16(a), are used as a goldstandard. It is clearly observed that the proposed scheme with the 3-degree spherical harmonics approximation has a better performance for improving the sensitivity of FMT than that with the 2-degree description. 
Although the number of unknowns in the shape-parameterized DOT reconstruction is greatly reduced, attention should be paid to further optimization of the shape-reconstruction to balance the different contributions of the shape coefficients to the measurement data; if the coefficients are simply recovered simultaneously without being properly scaled, the reconstruction will probably fail. To do so, the aforementioned "center-priority" strategy for the shape coefficient reconstruction is adopted. An intuitive explanation for the use of this strategy is that, for a sub-domain, deviations in its center position and in its expansion coefficients influence the boundary measurements in different ways: a deviation of the center position alters the distance from the sub-domain to the domain boundary and thus re-distributes the photon density over the whole domain; in contrast, a deviation of an expansion coefficient changes the sub-domain shape and mainly changes the fine pattern of the boundary flux. This significant difference in the contributions requires the center position and the expansion coefficients to be handled differently. In particular, the center position needs to be estimated first, to grasp the coarse configuration of the boundary measurements and to stabilize the later reconstruction process for the expansion coefficients.

The proposed shape-parameterized DOT guided FMT method is especially suitable for imaging the chest and abdomen regions. In these regions, however, there are also some other organs, mainly the bones, which might not be easily modeled by a spherical harmonics approximation. Nevertheless, due to the significantly higher density of the bones compared with the surrounding tissue, nearly accurate structural a priori information of these high-contrast regions can be obtained from the anatomical images. As a result, to include the bones, only the optical-reconstruction is required in the whole procedure, based on the accurate structural a priori information.

Conclusions

In this paper we proposed a shape-parameterized DOT method for obtaining the background optical structure to improve the sensitivity of FMT. Both the numerical simulations and the phantom experiments reveal the superiority of the scheme over the previously developed hard- and soft-prior ones in the case of initial structural a priori information with low confidence. Furthermore, a comparative investigation is performed for a single liver sub-domain described with 2- and 3-degree spherical harmonics and demonstrates the performance improvement of the proposed scheme with an increase in the degree of the geometry approximation. Although higher-degree spherical harmonics describe more complicated geometry and therefore yield a better shape description, they accordingly bring a greater computational burden and potentially aggravate the ill-posedness of the inverse problem; therefore, a balance between the conditioning of the inverse problem and the complexity of the shape approximation should be struck in applications. Future work will focus on applying the proposed approach to in vivo small animal imaging, with the support of anatomical imaging modalities; more complex examples, such as the reconstruction of multiple domains with higher-degree (≥3-degree) spherical harmonics approximations, are also of interest.
Finally, it is worth pointing out that, although the approach used for the shape coefficient initialization is easy and automatic to apply without requiring any challenging or trivial reformulation of the problem, other approaches, such as initializing the shapes of the sub-domains by computing the soft-prior reconstruction, thresholding the reconstruction, and then fitting a low-order spherical harmonic expansion to the resulting characteristic function, would be worth exploring in further work.
2018-04-03T04:17:20.914Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "bf93b70e58641c8e9bb3d1e0b910b2f7cd64462c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/boe.5.003640", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0d4fd5a18fe9eb16eb94e1966984b8157a411085", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
238582978
pes2o/s2orc
v3-fos-license
Precision and Fitness in Object-Centric Process Mining Traditional process mining considers only one single case notion and discovers and analyzes models based on this. However, a single case notion is often not a realistic assumption in practice. Multiple case notions might interact and influence each other in a process. Object-centric process mining introduces the techniques and concepts to handle multiple case notions. So far, such event logs have been standardized and novel process model discovery techniques were proposed. However, notions for evaluating the quality of a model are missing. These are necessary to enable future research on improving object-centric discovery and providing an objective evaluation of model quality. In this paper, we introduce a notion for the precision and fitness of an object-centric Petri net with respect to an object-centric event log. We give a formal definition and accompany this with an example. Furthermore, we provide an algorithm to calculate these quality measures. We discuss our precision and fitness notion based on an event log with different models. Our precision and fitness notions are an appropriate way to generalize quality measures to the object-centric setting since we are able to consider multiple case notions, their dependencies and their interactions. I. INTRODUCTION In the past years, process mining [1] has introduced a datadriven way to discover and analyze processes. The introduced techniques and algorithms enable organizations to discover the true underlying process from the data of its execution rather than manually designing what is assumed to be the process. These techniques work on an event log which records the executed activities for different cases. Each event has an activity and is attributable to exactly one case, e.g., the customer, order, patient, etc.. The attribute used to assign an event to a case is called the case notion. The event log, therefore, describes in which order the activities can be executed for such cases. However, this already shows the central limitation of traditional process mining: the single case notion. In practice, the assumption that every event is only related to exactly one case is unrealistic. One such example could be an event describing activity Load cargo for objects plane1 and bag1 at 2021-10-02 17:23. If one considers this event and the corresponding event log, what would be the case notion? The plane to be loaded with cargo or the bag to be loaded into We thank the Alexander von Humboldt (AvH) Stiftung for supporting our research. the plane? By focusing on only one case notion one would omit the behaviour of the other case notions, e.g., focusing on only the plane would omit the process of checking in and transporting baggage. Uniting the different case notions into one single, comprehensive process model allows us to consider dependencies between the executions that are not available otherwise. Object-centric process mining [2] addresses these problems and aims to define techniques and standards for retrieving and analyzing comprehensive, easy-to-understand models of processes with multiple case notions. An event log format [3] and a first discovery technique [4] have already been introduced. This discovery technique yields an object-centric Petri net. 
However, the missing key to enable further research on object-centric process discovery on the one hand and objective model evaluation on the other hand are quality criteria that link object-centric model and log and specify the conformance of the model. In traditional process mining quality criteria like precision, fitness, simplicity and generality are used to express the quality of a model with respect to a log [1]. These can be used to compare different models of the same event log, evaluate the results of different discovery algorithms, specify the conformance of a handmade process model and more. So far, there are no quality measures to evaluate an objectcentric Petri net. To facilitate further research we need a way to describe the quality of an object-centric Petri net, e.g., how well it fits the data or how precisely it describes the data. Due to multiple case notions and many-to-many relationships fitness and precision notions from traditional process mining can not trivially be adapted to the object-centric setting. In this paper, we introduce a fitness and precision notion for object-centric Petri nets with respect to object-centric event logs. Our fitness notion can be seen as an object-centric adaption to replaying traces [5], the precision notion as a generalization of the escaping edges precision [6]. The paper is structured as follows. In Section II, we discuss other approaches of handling multiple case notions. In Section III, we introduce object-centric event logs and Petri nets and illustrate them on the basis of a running example. In Section IV, we formally introduce our fitness and precision notion. This is further accompanied by our running example. In Section V, we present an algorithm to calculate both precision and fitness. We use three different models with respect to an event log in Section VI to evaluate our introduced notions. We conclude this paper in Section VII. II. RELATED WORK In this section, we introduce the related work on handling multiple case notions and the techniques to determine fitness and precision in traditional process mining. The different approaches to handle multiple case notions can be grouped into three categories: interacting processes, colored Petri net approaches, and novel modeling techniques. We discuss the corresponding approaches, their disadvantages and how object-centric Petri nets relate to these. Several approaches to handle multiple case notions use individual processes with some notion for interaction between them. The first method that was introduced is proclets [7], describing interacting workflow processes. Over time, this concept was extended to cover many-to-many relationships [8] and different analysis techniques [9]. Another modeling technique is artifact-centric modeling [10], [11]. Process discovery techniques for artifact-centric modeling have been developed [12], as well as conformance checking methods [13], [14]. To deal with the complexity of artifact-centric modeling, [15] introduced a restricted artifact-centric process model definition where no concurrency is allowed within one artifact. The main disadvantage of these approaches is the absence of a single comprehensive model of the process as they show interacting individual processes. Furthermore, models tend to get too complex and hard to understand. The second approach to handle multiple case notions are colored Petri nets. DB-nets [16] introduce a modeling technique to include a data-perspective into a process model using a colored Petri net. 
However, the modeling is hard to understand for a user and it targeted to a modeling, not a mining setting. Furthermore, there is a plethora of approaches that uses colored Petri nets with some restrictions, e.g., no concurrency within one case notion and discarding variable one-to-many relationships, which restrictions render them infeasible in many settings. The discussion of these approaches is outside the scope of this paper. The third group includes newly proposed modeling techniques to condense a process with multiple case notions to one model. Object-Centric Behavioral Constraints (OCBC) [17] are a recently introduced approach to show behavior and relationships of different objects in one model. Discovery algorithms as well as quality measures have been proposed for this discovery technique [18]. However, since OCBC builds on top of a data format that records the whole evolution of a database the models tend to get complex and the proposed techniques are not scalable. Multiple View Point models [19] introduce less complex models where the discovery is more scalable. A process model can be constructed by correlating events via objects. [4] extends this approach to discover objectcentric Petri nets, Petri nets with places of different colors and arcs that can consume a variable amount of tokens. Object-centric Petri nets can model a process with multiple case notions in one single model, consider one-to-many and many-to-many relations, concurrency within case notions and a scalable discovery algorithm is available. Therefore, objectcentric Petri nets alleviate most of the drawbacks from other approaches in the related work with respect to multiple case notions. Since quality metrics play an important role in traditional process mining several techniques to determine fitness and precision have been proposed. Techniques for determining fitness include causal footprints, token-based replay, alignments, behavioral recall and eigenvalue recall [20]. In this paper, we use an adaptation of a replaying fitness, i.e., whether the preoccurring activities of an event can be replayed on the object-centric Petri net. Techniques for determining precision include escaping edges precision, negative event precision and projected conformance checking [21]. The adaptation of these techniques to object-centric event logs and Petri nets each pose their own challenges. i.e., dealing with multiple case notions and variable arcs in the Petri net. In this paper, we propose a generalization of the escaping edges precision [6] which links an event to a state in the process model and determines the behavior allowed by model and log and, subsequently, the precision. This can also be seen as an object-centric adaptation of replaying history on the process model to determine fitness and precision [22]. III. OBJECT-CENTRIC PROCESS MINING Object-centric process mining moves away from the single case notion of traditional process mining, i.e., every event is related to exactly one case, and opens up the possibility for one event being related to multiple, different case notions. The foundations were defined in [4]. We introduce these key concepts of object-centric process mining in this section. These are accompanied by a running example of a flight process that considers planes and how the associated baggage is handled. It describes the operations of the plane, i.e., fueling, cleaning and lift off, and the handling of baggage, i.e., check-in, loading and pick up. 
We first introduce some basic definitions and notations. Definition 1 (Power Set, Multiset, Sequence). Let X denote a set. P(X) denotes the power set of X, the set of all possible subsets. A multiset B:X→N assigns a frequency to each element of the set and is denoted by [x m ] for x ∈ X and frequency m ∈ N. A sequence of length n assigns positions to elements x∈X of a set, σ:{1, . . . , n}→X. It is denoted by σ= x 1 , . . . , x n . The concatenation of two sequences is denoted with σ 1 ·σ 2 . Concatenation with an empty sequence does not alter a sequence, σ· =σ. The number of elements in a sequence σ is given by len(σ). Objects are the central element of object-centric process mining. Each object is of exactly one object type. Definition 2 (Object and Object Types). Let A be the universe of activities. Let U o be the universe of objects and U ot be the universe of object types. The function π otyp : U o → U ot maps each object to its type. The two universes, of objects and of object types, contain all possible objects and all possible object types. Every object o ∈ U o has a type π otyp (o). In our example we are considering object types OT ={planes, baggage}. We have, in total, six objects: O={p1, p2, b1, b2, b3, b4} of which two are planes and four are baggage, indicated by the first letter of the object, i.e., π otyp (p1)=π otyp (p2)=plane, π otyp (b1)= · · · =π otyp (b4)=baggage. In general, we are interested in recording the behaviour of objects over time, i.e., at which point in time activities were executed that concern objects. This is done using an object-centric event log. The OCEL format [3] records the corresponding activity, timestamp and objects of each object type for an event. Additional attributes are stored for each object. We use a reduced definition of the object-centric event log to focus on the aspects relevant for this paper. Definition 3 (Object-Centric Event Log). Let U E be the universe of event identifiers. An object-centric event log is a tuple L=(E, OT, O, π act , π omap , ≤) of event identifiers E⊆U E , object types OT ⊆U ot , objects O⊆U o , a mapping function from event to activity π act :E→A, a mapping function from event to related objects π omap :E→P(O) and a total order ≤. This definition gives us event identifiers of which each is related to an activity and to some objects. These event identifiers are subject to a total order, i.e., by the time of their occurrence. An example of an object-centric event log is depicted in Table I.
TABLE I: The running example event log L1 (event: activity; related objects).
e1: Fuel plane; p1
e2: Check-in; b1
e3: Check-in; b2
e4: Load cargo; p1, b1, b2
e5: Lift off; p1
e6: Unload; p1, b1, b2
e7: Pick up @ dest; b1
e8: Pick up @ dest; b2
e9: Clean; p1
e10: Fuel plane; p2
e11: Check-in; b3
e12: Check-in; b4
e13: Load cargo; p2, b3, b4
e14: Lift off; p2
e15: Unload; p2, b3, b4
e16: Clean; p2
e17: Pick up @ dest; b3
e18: Pick up @ dest; b4
It consists of the events with the identifiers E={e 1 , . . . , e 18 }. The event identifier is unique and provides the order of events. The activity of an event, e.g., π act (e 4 )=Load cargo, is given as well as the related objects, e.g., π omap (e 4 )={p1,b1,b2}. Discovering process models from event data plays a central role in process mining as it is a data-driven way to uncover the true nature of a process. One way to represent process models is Petri nets [1]. Van der Aalst and Berti [4] introduce the concept of object-centric Petri nets and provide an algorithm to discover an object-centric Petri net from an object-centric event log.
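To make Definition 3 and Table I concrete, the following sketch shows one possible in-memory representation of the running-example log; the layout and the names EVENTS, pi_act, pi_omap and OBJECT_TYPES are our own illustrative choices and not the OCEL standard serialization.

```python
# One possible in-memory representation of the running-example log L1 (Table I).
# The layout and names are illustrative and not the OCEL standard serialization.
OBJECT_TYPES = {"p1": "plane", "p2": "plane",
                "b1": "baggage", "b2": "baggage", "b3": "baggage", "b4": "baggage"}

# events in their total order: (event identifier, activity, related objects)
EVENTS = [
    ("e1", "Fuel plane", {"p1"}),
    ("e2", "Check-in", {"b1"}),
    ("e3", "Check-in", {"b2"}),
    ("e4", "Load cargo", {"p1", "b1", "b2"}),
    ("e5", "Lift off", {"p1"}),
    ("e6", "Unload", {"p1", "b1", "b2"}),
    ("e7", "Pick up @ dest", {"b1"}),
    ("e8", "Pick up @ dest", {"b2"}),
    ("e9", "Clean", {"p1"}),
    ("e10", "Fuel plane", {"p2"}),
    ("e11", "Check-in", {"b3"}),
    ("e12", "Check-in", {"b4"}),
    ("e13", "Load cargo", {"p2", "b3", "b4"}),
    ("e14", "Lift off", {"p2"}),
    ("e15", "Unload", {"p2", "b3", "b4"}),
    ("e16", "Clean", {"p2"}),
    ("e17", "Pick up @ dest", {"b3"}),
    ("e18", "Pick up @ dest", {"b4"}),
]

pi_act = {e: act for e, act, _ in EVENTS}      # event -> activity
pi_omap = {e: objs for e, _, objs in EVENTS}   # event -> related objects
assert pi_act["e4"] == "Load cargo" and pi_omap["e4"] == {"p1", "b1", "b2"}
```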
An object-centric Petri net is a colored Petri net where the places are colored as one of the object types and some arcs can consume a variable amount of tokens. Definition 4 (Object-Centric Petri Net). Let N =(P, T, F, l) be a Petri net consisting of places P , transitions T , flow relations F ⊆(T ×P )∪(P ×T ) and a labeling function l:T A. An object-centric Petri net is a tuple OCPN=(N, pt, F var ) of Petri net N , a mapping of places to object types pt:P →OT and a set of variable arcs F var ⊆F . We introduce the notations: the non-variable object types associated to transition t. An example of an object-centric Petri net for the flight process is depicted in Figure 1. Places are indicated by circles and transitions are indicated by rectangles. Arcs are indicated by arrows while variable arcs are indicated by double arrows. The object type of a place is shown by its color, blue being type plane and red being type baggage. The label of each transition is depicted inside the transition itself. The transition with no label is a so called silent transition. We refer to this transition as τ . This definition of the Petri net itself is not sufficient to describe a process and the behavior it allows. We need to introduce further concepts. We, first, introduce the concept of a marking. Places of object-centric Petri nets can hold tokens of the same object type. The marking of an objectcentric Petri net can be expressed as objects being associated to places of the corresponding object type. Definition 5 (Marking of an Object-Centric Petri Net). Let OCP N =(N, pt, F var ) be an object-centric Petri net with N =(P, T, F, l). The set of possible tokens is described by A marking describes the current state of the Petri net. It states which objects are contained in which places. We can move between markings by firing transitions. The concept of a binding is used to describe this. A binding is a tuple of a transition and the involved objects of every object type. The objects of the corresponding color in the binding are consumed in the input places and produced in the output places. A binding is only enabled at a given marking if each input place of the corresponding transition holds at least the tokens of this object type contained in the binding. Definition 6 (Binding Execution). Let OCPN=(N, pt, F var ) be an object-centric Petri net with N =(P, T, F, l). We can take a binding at the Load cargo transition of the Petri net in Figure 1 as an example. We assume a marking where the input places of Load cargo contain plane p1 and baggages b1, b2 according to their color [(pl3, p1), (pl4, b1), (pl4, b2)]. In this state of the Petri net the binding of Load cargo and p1, b1, b2 is enabled since the input places of the transition contain enough tokens of objects contained in the binding. The binding can be executed and leads to a new marking where the tokens of p1, b1, b2 moved from the input to the output places of Load cargo. We can construct a sequence of such enabled bindings that describes the execution of several bindings after each other. In this way, we can express that one marking is reachable from another marking in the Petri net. We define initial and final markings. Together with the objectcentric Petri net these form an accepting object centric Petri net. All binding sequences starting from an initial marking and ending in a final marking form the accepted behavior of that accepting object-centric Petri net. 
σ − →M f inal } describes all the binding sequences, i.e., the accepted behavior of the Petri net. In our example, an accepted behavior is a binding sequence that starts in a marking with only tokens in the places pl1 and pl2 and ends in a marking with only tokens in the places pl10 and pl11. The algorithm of [4] can be used to discover such accepting object-centric Petri nets. These concepts are sufficient to describe the behavior allowed by an object-centric Petri net. Starting in an initial marking all binding sequences are allowed that lead to a final marking. However, if we consider our event log in Table I and our model in Figure 1 the question arises of how well the process model describes the object-centric event log. So far research on object-centric process mining has focused on specifying standardized event log formats and discovering models. The missing key to enable further research on improving object-centric process discovery are quality criteria for a discovered object-centric model. IV. FITNESS AND PRECISION FOR OBJECT-CENTRIC PETRI NETS Generally speaking, fitness and precision compare the behaviour seen in a data set to the behaviour possible in a model. The more behavior of the data set the model allows the fitter it is. The more behaviour the model allows that is not covered by the data set the more imprecise it is. This general notion of fitness and precision can be adapted to event logs and process models by comparing the recorded activity sequences in the event log to the possible activity sequences in the process model [22] in traditional process mining. However, this can not easily be adapted to object-centric event logs and Petri nets. The main problem is multiple case notions of an event. There is no single sequence of previously occurring activities for an event anymore. Multiple involved objects are associated to different activity sequences that were previously executed for them. Furthermore, even if there is only one object involved in an event, an activity sequence of one object might be dependent on the occurrence of an activity sequence of another object. Compare the exemplary event log in Table I. Before event e 5 , Lift off of plane p1, happens, the baggages b1 and b2 have to be loaded into the plane, making this event also dependent on these two objects even though they are not included in the event itself. We define a context notion that describes the previously executed activity sequences of the objects on which an event depends. The dependent objects and their relevant events are extracted by constructing the eventobject graph. Definition 8 (Event-Object Graph and Context of an Event). Let L=(E, OT, O, π act , π omap , ≤) be an object-centric event log, let ot∈OT be an object type and let o∈O be an object. We introduce the following notations: • G L =(V, K) is the event-object graph of the event log L with V =E and K={(e , e) ∈ E×E | e <e ∧ π omap (e) ∩ π omap (e ) = ∅}. Analogously to the single case notion, each single object has a simple sequence of its previously occurring activities. We use a multiset of sequences to cover the possibility of an event being related to multiple objects of the same object type with the same sequence. We use the event-object graph to grasp every object on which the execution of an event depends and up to which event it depends on this object. The event-object graph introduces a directed edge between events if they share an object. 
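A small sketch of how the event-object graph and the derived context could be computed, assuming an edge (e', e) is drawn whenever e' precedes e and the two events share at least one object (a non-empty intersection of their related objects); it builds on the EVENTS and OBJECT_TYPES structures sketched above and the helper names are ours.

```python
from collections import Counter

# Sketch of the event-object graph of Definition 8, assuming an edge (e', e) whenever
# e' precedes e and the two events share at least one object.
def event_object_graph(events):
    """For each event, the set of earlier events that share an object with it."""
    preds = {e: set() for e, _, _ in events}
    for i, (e, _, objs) in enumerate(events):
        for e_prev, _, objs_prev in events[:i]:
            if objs & objs_prev:          # shared object => directed edge (e_prev, e)
                preds[e].add(e_prev)
    return preds

def event_preset(e, preds):
    """All events with a directed path to e (transitive predecessors)."""
    seen, stack = set(), list(preds[e])
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(preds[x])
    return seen

def context(e, events, preds):
    """Multiset of per-object activity prefixes in the event preset, grouped by object type."""
    pre = event_preset(e, preds)
    omap = {ev: rel for ev, _, rel in events}
    objs = set().union(*(omap[x] for x in pre | {e}))
    ctx = {}
    for o in objs:
        prefix = tuple(act for ev, act, rel in events if ev in pre and o in rel)
        ctx.setdefault(OBJECT_TYPES[o], Counter())[prefix] += 1
    return ctx

preds = event_object_graph(EVENTS)
assert event_preset("e5", preds) == {"e1", "e2", "e3", "e4"}
```

For the running example this reproduces the context of e5 discussed in the text: one plane prefix (Fuel plane, Load cargo) and two baggage prefixes (Check-in, Load cargo).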
For an event e, all other events that have a directed path to event e form the event preset of e, the events on which the execution of e depends. The full event-object graph of events e 1 to e 9 of log L 1 is depicted in Figure 2. Take event e 5 as an example. The event preset of e 5 is formed by all events that are directly or transitively connected to e 5 , i.e., that have a directed path the e 5 in the event-object graph. This includes e 1 and e 4 but also e 2 and e 3 even though they do not share any objects with e 5 . Therefore, •e 5 = {e 1 , e 2 , e 3 , e 4 }. This shows that the event preset includes all events on which the execution of an event depends, also the transitive dependencies. To construct the context we take all objects that appear in the event preset and the event itself, map them onto their sequence of activities in the event preset and construct a multiset of the objects' sequences for each object type. For •e 5 this includes the objects p1, b1, b2. We construct the sequences of these and combine them to the context context e5 (plane)=[ Fuel plane, The context can colloquially be described as everything that had to happen for this event to occur. We use the context to link log and model behaviour. The context specifies several sequences of activities executed for different object types. A binding sequence of an object-centric Petri net, thus, also has a context if we consider the executed activity sequences of all the objects in this binding sequence. be an accepting object-centric Petri net with OCPN=(N, pt, F var ) and N =(P, T, F, l). Let σ= (t 1 , b 1 ), . . . , (t n , b n ) be a sequence of enabled bindings of that Petri net. We introduce the following notations: for a binding sequence. • context σ (ot)=[σ↓ o |o∈ i∈{1,...,n} b i (ot)] for any ot∈U ot is the multiset of prefixes in a binding sequence, also called context of the binding sequence. Given a binding sequence, we can project it onto the transition labels for each object that is included in at least one of the bindings. We do this by projecting each binding of the sequence onto its transition label if the object is contained in the binding. If the transition is a silent transition, i.e., it has no label, this projection yields the empty sequence. The labels are concatenated into a sequence of labels. This is the prefix for an object. The prefixes are united into a multiset for each object type. To illustrate this we use the Petri net from Figure 1 and a binding sequence. In our example, a binding consists of a transition name and bounded objects b of each object type: ) . All objects that appear in this sequence are {p1,b1,b2}. The projected prefix for p1 is Fuel plane, Load cargo . For each b1 and b2 this sequence is Check-in, Load cargo . The context for this binding sequence is, therefore, context σ1 =context e5 . Given a certain context one can look at all the possible binding sequences that have this context. All of the states that are reached after executing any of these binding sequences are the states that are reachable given this context. Definition 10 (Context Reachable States). Let L = (E, OT, O, π act , π omap , ≤) be an object-centric event log, OCPN A be an accepting object-centric Petri net and e∈E be an event. We assume the existence of an oracle states:(OCPN A , context e )→P(M ) that retrieves the reachable states with a binding sequence of context e . The allowed behaviour of the model is specified by the enabled activities of the model in any of the reachable states of a context. 
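A hedged sketch of the prefix projection just described, using a simplified binding representation (a transition label, or None for a silent transition, plus a flat set of objects rather than the per-type assignment used in the paper); it reuses OBJECT_TYPES from the earlier sketch and reproduces the context of the binding sequence σ1 discussed above.

```python
from collections import Counter

# Simplified prefix projection of a binding sequence (cf. Definition 9).
def binding_context(binding_sequence, object_types):
    objs = set().union(*(bound for _, bound in binding_sequence)) if binding_sequence else set()
    ctx = {}
    for o in objs:
        prefix = tuple(label for label, bound in binding_sequence
                       if o in bound and label is not None)   # silent transitions drop out
        ctx.setdefault(object_types[o], Counter())[prefix] += 1
    return ctx

sigma1 = [("Fuel plane", {"p1"}), ("Check-in", {"b1"}), ("Check-in", {"b2"}),
          ("Load cargo", {"p1", "b1", "b2"})]
ctx = binding_context(sigma1, OBJECT_TYPES)   # OBJECT_TYPES from the earlier sketch
# plane: {("Fuel plane", "Load cargo"): 1}; baggage: {("Check-in", "Load cargo"): 2}
```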
Applying this to our running example, we extract the enabled activities for any state in states (OCPN 1 , context e5 ). There are the two enabled activities Lift off and Pick up , i.e., en OCPN1 (e 5 ) = {Lift off, Pick up @ dest}. With the so far introduced concepts we can already derive the context of an event and state the possible behavior of the model for this context. To retrieve precision and fitness of the model we now need to specify the behavior that is given by the log. The behavior recorded in the event log is specified by comparing the subsequent activities for the same context. Definition 12 (Enabled Log Activities). Let L = (E, OT, O, π act , π omap , ≤) be an object-centric event log and e∈E be an event. en L (e)={π act (e ) | e ∈E ∧ context e =context e } defines the enabled log activities for the corresponing context of an event. We illustrate that using our running example of context e5 . There is one other event that has the same context which is e 14 . The activity that is executed for both events of this context is Lift off, i.e., en L1 (e 5 ) = {Lift off}. The enabled log and model activities are calculated for the context of each event and the share of behaviour contained in the log and also allowed by the model, the fitness, is calculated. If all the behavior of the log is also allowed in the model it has a fitness of 1, if all replaying ends up in a final marking one could speak of perfect fitness. Definition 13 (Fitness). Let L=(E, OT, O, π act , π omap , ≤) be an object-centric event log and OCPN A be an accepting object-centric Petri net. The fitness of OCPN A with respect to L is fitness(L, The enabled log and model activities are calculated for the context of each event and the share of behaviour allowed by the model and also contained in the log, the precision, is calculated. Not replayable events are skipped. If all the behavior allowed by the model is also contained in the log the model is perfectly precise. Definition 14 (Precision). Let L=(E, OT, O, π act , π omap , ≤) be an object-centric event log and OCPN A be an accepting object-centric Petri net. E f ={e ∈E | en OCPN A (e ) =∅} is the set of replayable events. The precision of OCPN A with respect to L is calculated by precision(L, The fitness and precision metrics retrieve single comprehensive numbers about the quality of the model. We apply this to our running example. For all the events the enabled activities of the log are also allowed by the model. The fitness of the model is, therefore, fitness(L 1 , OCPN 1 )=1. The only events where the enabled model activities exceed the enabled log activities are events e 5 , e 6 , e 14 , e 15 . For each of these events, Pick up @ dest is enabled in the model but not in the log, i.e., the model allows for baggage to be picked up before the baggage was unloaded which is not contained in the event log. We, therefore, retrieve a precision of precision(L 1 , OCPN 1 )=0.89. V. CALCULATING PRECISION AND FITNESS In this section, we discuss an algorithm to calculate precision and fitness. Our algorithm is based on replaying the events on the model. The implementation of our algorithm is available on GitHub 1 . The calculation of fitness and precision can be divided in two steps: determining the enabled log activities and the enabled model activities. Constructing the contexts and calculating the enabled log activities is straightforward according to Definition 8 and Definition 12. 
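One plausible reading of how the event-level comparisons of Definitions 13 and 14 are aggregated is sketched below; the exact averaging is an assumption on our part, chosen because it reproduces the values reported for the running example (fitness 1, and precision 16/18 ≈ 0.89 when Pick up @ dest is additionally enabled by the model at e5, e6, e14 and e15). The functions en_log and en_model are assumed to return the enabled activities for the context of an event in the log and in the model.

```python
# One plausible aggregation for Definitions 13 and 14 (an assumption on our part).
# en_log(e) and en_model(e) return the enabled activities for the context of event e
# in the log and in the object-centric Petri net, respectively.
def fitness(events, en_log, en_model):
    scores = []
    for e, _, _ in events:
        observed, allowed = en_log(e), en_model(e)
        # en_log(e) is never empty: e itself contributes its activity to its own context
        scores.append(len(observed & allowed) / len(observed))
    return sum(scores) / len(scores)

def precision(events, en_log, en_model):
    scores = []
    for e, _, _ in events:
        allowed = en_model(e)
        if not allowed:              # context not replayable: skipped (cf. Definition 14)
            continue
        scores.append(len(en_log(e) & allowed) / len(allowed))
    return sum(scores) / len(scores) if scores else 0.0
```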
We construct the event-object graph for the event log, extract the event preset for each event, calculate the prefix for each object and merge these prefixes into the context. We, subsequently, collect the activities of all events with the same context as the enabled activities of this context. Due to variable arcs and silent transitions the calculation of enabled model activities is not trivial. We, therefore, introduce an algorithm to determine the enabled model activities which is depicted in Algorithm 1. The core idea of our algorithm is to replay the events for each context and determine the enabled activities in the resulting states of the object-centric Petri net. For a given context we collect all the events that have this context. For each of these events we extract the binding sequence of visible transitions recorded in the event log. We do this by projecting the event preset of an event onto the bindings described by the events, i.e., transition and objects. See the event log L 1 in Table I and event e 5 with event preset •e 5 ={e 1 , e 2 , e 3 , e 4 } as an example. The corresponding binding sequence of visible transitions that needs to be replayed consists of the bindings of e 1 to e 4 , each consisting of the transition and the recorded objects, in order of their occurrence. We replay each binding sequence under consideration of silent transitions. This is a breadth-first search through the states reachable by executing the binding sequence and silent transitions. When the binding sequence is fully replayed, all enabled activities are added to the set of enabled model activities for this context. One important aspect for the practicality of these fitness and precision notions is the complexity of the algorithm. The computation of the measures consists of computing enabled log and enabled model activities. Determining enabled log activities is done in quadratic time since the event log has to be traversed once for each event to determine the preset. The computation of enabled model activities depends on the log and the Petri net. It depends linearly on the number of events; however, two factors can lead to an exponential increase in the computation time: silent transitions and the number of objects. Coherent clusters of silent transitions with choices within one object type lead to the necessity to replay through all the possible reachable states to align the process model with the log, if possible. Especially when considering that multiple objects of one object type can be involved, the state space that needs to be searched grows exponentially. This is a problem when considering, e.g., object-centric Petri nets mined by the inductive miner on a log with noise.
Fig. 3. a) Object-centric flower model, b) restricted model, c) appropriate model.
VI. EVALUATION In this section, we discuss our precision and fitness notions by applying them to three different models of one synthetic event log and analyzing the results. We do this to assess whether our precision and fitness notions can be interpreted analogously to the notions of traditional process mining, i.e., provide an intuitive interpretation for experts and practitioners. We use an object-centric flower model, a model that is tailored to the most frequent process execution and an appropriate model. The first model is an object-centric adaptation of a flower model; it is depicted in Figure 3a. In traditional process mining, a flower model is used to describe a process where every transition can be executed at any time.
The fitness is very high since it covers any behavior seen in the log. The precision, however, is very low since all transitions can happen at any time, including behavior that is not covered by the event log. For an object-centric flower model, we expect a high fitness and low precision. The second model is a model that just accounts for the most frequent activity sequence of each object type; it is depicted in Figure 3b. This is a very restrictive model that only allows for little behavior. In traditional process mining, the precision of such a model is high as the little behavior it covers is contained in the log. The fitness, however, would be low since such a model only suits the most frequent execution of the process. We expect a low fitness and high precision for our notions. The third model is an appropriate representation of the underlying process; it is depicted in Figure 3c. We, therefore, expect high measures for fitness and precision. The results calculated by our algorithm are displayed in Table II.
TABLE II: Results for the three models with respect to the event log.
Model                          Fitness  Precision  Skipped Events
Flower Model (Figure 3a)       1        0.25       0%
Restricted Model (Figure 3b)   0.31     0.95       54%
Appropriate Model (Figure 3c)  1        0.57       0%
The appropriate model fits the event log perfectly and has a precision of 0.57. This is a high precision compared to the flower model, which also fits perfectly but has a low precision since it always allows for every activity to be executed. The restricted model shows an almost perfect precision. However, missing concurrency, choice and activities lead to a very low fitness, and for approximately 54% of the events the context cannot be replayed, so these events cannot be considered. In summary, our fitness and precision notions behave analogously to the ones of traditional process mining and, therefore, yield an intuitive and easy-to-understand object-centric definition of fitness and precision. VII. CONCLUSION In this paper, we introduced a precision and fitness notion for object-centric Petri nets with respect to an object-centric event log. We use the concept of a context to relate log and model behavior and calculate the enabled activities for the context in the model and the log. We handle contexts that are not replayable on the Petri net by excluding them from the precision calculation. We provide an algorithm and an implementation for calculating these quality metrics. We evaluate our contributions by comparing the quality measures for one event log and three different models. Our fitness and precision notions offer an objective way to evaluate the quality of an object-centric Petri net with respect to an object-centric event log. In the future, we want to use these concepts for the evaluation of improved object-centric process discovery. Other future lines of research could focus on handling non-replayable contexts, e.g., by the calculation of object-centric alignments, or on providing approximations for models with large state spaces for replay. Furthermore, other quality metrics considered in
2021-10-12T01:34:16.652Z
2021-10-06T00:00:00.000
{ "year": 2021, "sha1": "da1c39a1c3ab5b11077487b7e74b0429dd6b89fc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2110.05375", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "da1c39a1c3ab5b11077487b7e74b0429dd6b89fc", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
53074426
pes2o/s2orc
v3-fos-license
A model study of Abrahamsenbreen, a surging glacier in northern Spitsbergen . The climate sensitivity of Abrahamsenbreen, a 20 km long surge-type glacier in northern Spitsbergen, is studied with a simple glacier model. A scheme to describe the surges is included, which makes it possible to account for the effect of surges on the total mass budget of the glacier. A climate reconstruction back to AD 1300, based on ice-core data from Lomonosovfonna and climate records from Longyearbyen, is used to drive the model. The model is calibrated by requesting that it produce the correct Little Ice Age maximum glacier length and simulate the observed magni-tude of the 1978 surge. Abrahamsenbreen is strongly out of balance with the current climate. If climatic conditions remain as they were for the period 1989–2010, the glacier will ultimately shrink to a length of about 4 km (but this will take hundreds of years). For a climate change scenario involving a 2 myear − 1 rise of the equilibrium line from now onwards, we predict that in the year 2100 Abrahamsenbreen will be about 12 km long. a is to the mean surface elevation and thereby to increase the ablation area, causing a negative perturbation of the mass budget. found that the of surges to a faster retreat of the in Introduction Abrahamsenbreen is a valley glacier in the north-western part of Svalbard (79.10 • N, 14.25 • E), originating at the ice field Holtedahlfonna (for more topographic information, see the interactive map: http://www.npolar.no/en/services/maps/). It is about 20 km long and flows in a north-easterly direction (Fig. 1). The glacier snout terminates on land and is only a few tens of m a.s.l. (above mean sea level). The highest regions in the accumulation area are about 900 m a.s.l. A large part of the accumulation area is rather flat with an altitude ranging between 600 and 750 m a.s.l. According to Hagen et al. (1993), the equilibrium-line altitude is around 600 m. The glacial river runs through the very flat Woodfjorddalen over a distance of about 15 km before it enters the Woodfjord. The glacial history of northern Spitsbergen is only broadly known (Svendsen and Mangerud, 1997;Forman et al., 2004;Salvigsen and Høgvard, 2005). There is abundant evidence that the fjord areas were deglaciated by 10 kyr BP (Before Present) and that, during most of the Holocene, glaciers were less extensive than they are today. Abrahamsenbreen most likely reached its maximum Holocene extent during the Little Ice Age (LIA), in line with the evidence for many large glaciers in western and southern Spitsbergen (Hagen et al., 1993). One of the goals of this paper is to see whether this is in agreement with palaeoclimatic information derived from the Lomonosovfonna ice cores (Pohjola et al., 2002;Divine et al., 2011) and the meteorological record of Longyearbyen. Abrahamsenbreen is a surging glacier. It is well known for its fine set of looped moraines (Fig. 2) that were formed during and following the surge that took place around 1978 (Hagen et al., 1993). The duration of the 1978 surge and the frequency with which surges occur is not known. However, it is likely that the surge characteristics of Abrahamsenbreen are similar to those of other gently sloping glaciers in Svalbard. These surges are of a less vigorous type than observed on alpine glaciers like Variegated Glacier (Kamb et al., 1985), Medvezhiy Glacier (Osipova and Tsvetkov, 1991) or North Gasherbrum Glacier (Mayer et al., 2011). 
Surge characteristics of Svalbard glaciers vary considerably, but the common element is a relatively long surging phase which lasts for several years (Dowdeswell et al., 1991;Melvold and Hagen, 1998;Sund et al., 2009;Dunse et al., 2012). A "normal surge" is an event in which enhanced ice flow transports ice from higher regions to lower regions within a relatively short time, in the end leading to a marked advance of the glacier front. However, in a study of 50 glaciers, Sund et al. (2009) have also documented glacier surges in which the enhanced motion stops before the stage of an advancing front is reached. The effect of the surge then only implies a thinning of the accumulation region and a thickening of the ablation region. In the case of Abrahamsenbreen there is no doubt that the 1978 surge was a full surge, during which the glacier front advanced by at least 2 km. After a surge, a glacier will be subject to a negative net surface mass balance, because the mean surface elevation is lower than before the surge. However, because the ice flow becomes (almost) stagnant, after some time the accumulation area will thicken. This implies an increasing surface elevation, less melt in summer and consequently the transition to a stage in which the surface steepens and the glacier volume increases until a new surge is initiated. It is not a priori clear at which point in the cycle Abrahamsenbreen actually is. According to the map of the equilibrium-line altitude over Svalbard provided in Hagen et al. (1993), E ≈ 600 m in the region of Abrahamsenbreen. For the parameterized glacier geometry used in this study (discussed in Sect. 3), this would imply that the glacier currently has a net balance that is slightly negative. This is in agreement with the study of Nuth et al. (2010), who derived a net balance of −0.67 ± 0.14 m year −1 for the period 1966-2005. It should be noted that the surge took place within this period. There is no general consensus about the mechanism that causes glaciers in Svalbard to surge (Murray et al., 2003). These glaciers flow over soft sediments, and the duration of surges is significantly longer than for glaciers in steeper alpine terrain, which are at least partly hard-bedded. Thermal regulation has been put forward as a likely mechanism, in which the switch from frozen to warm bed conditions plays a central role (e.g. Fowler et al., 2001). However, direct evidence for this theory does not exist. Oerlemans (2013) has suggested that the steady accumulation of dissipative meltwater in the accumulation zone plays an important role. In recent years geometric changes caused by surging have been documented extensively (e.g. Sund et al., 2009), but this has not yet resulted in a major step forward in our understanding. Since so many glaciers on Svalbard are of the surging type, the question of to what extent surges interfere with the longer-term response of glaciers to climate change has arisen (Hagen et al., 2005;Paasche, 2010). This question is of importance with respect to the climatic interpretation of historical glacier fluctuations and also needs to be considered when making projections of glacier behaviour for scenarios of global warming. In the simple glacier model used in this study, surges are imposed and their effect on the mass budget is then implicitly dealt with. By comparing model experiments with and without the surging mechanism, the potential role of surges in the evolution of Abrahamsenbreen is evaluated. 
In this study the climate sensitivity of Abrahamsenbreen is studied with a simple glacier model. A so-called minimal glacier model is used (Oerlemans, 2011), in which the ice mechanics are strongly parameterized and the focus is on the total mass budget of the glacier. In fact, the ice mechanics are reduced to a relationship between the mean ice thickness, glacier length and mean bed slope. The surge cycle is then imposed by making the proportionality factor between length and thickness a prescribed function of time. We are aware of the limitations of such a model. It does not give insight into why surges occur and what determines the length of the surge cycle. However, since the mass budget of a (non-calving) glacier is mainly determined by the mean surface elevation relative to the equilibrium-line altitude, the details of the surface topography matter less. Therefore, useful information about the climate sensitivity of a glacier can be obtained even without the calculation of the spatially distributed fields of surface topography and ice velocity. Hardly any measurements have been carried out on Abrahamsenbreen, making the modelling of this glacier a real challenge. The available data consist of (references to these data sources are given later in this paper) In this paper we use these data to constrain and calibrate the model in the best possible way. We consider this exercise to be useful, because for more than 99 % of all glaciers in the world no more information is available than maps, satellite images and photographs. Modelling strategy and geometric input data The geometry of the main stream of Abrahamsenbreen is simple with a very smooth surface profile along the flow line, indicating that the bed is also gently sloping. Major ramps or overdeepenings are likely absent, since they would certainly be reflected in features at the glacier surface (e.g. Raymond and Gudmundsson, 2005). Such a regular geometry is a prerequisite for the use of a minimal glacier model, which requires a small set of input parameters and can be calibrated easily with the limited data available. In a minimal glacier model the state variables are glacier length and mean ice thickness. Before describing this model we will first summarize some of the information about the lower part of the glacier that is evident from the two aerial photographs (from 1969 and 1990), two topographic maps (1966 and 2002) and a satellite image (ASTER, 26 June 2001). Terminal moraines from the tributary glaciers T4, T5, T9 and T10 ( Fig. 1) are schematically mapped for 3 years (Fig. 3). The distance between the locations of the moraines in different years was calculated by projecting the moraine tips on the central flow line and measuring the displacement. For the displacement between 1990 and 1969 we found respectively 4.5 and 4.7 km for M1 and M3 and 5.9 and 6.3 km for M2 and M4. Using the mean values of the paired moraines (left and right of the glacier), average corresponding ice velocities would be 219 m year −1 for the lower region of the glacier and about 290 m year −1 for the middle part. The displacement between 2001 and 1990 is small, with corresponding velocities of 9 and 23 m year −1 . If we think of the ice velocity as composed of a background part and a surge part and we assume that the background part has been constant, it follows that the displacement of surface ice due to the surge would be 4.4 km for the lower part and 5.6 km for the middle part. 
It is not straightforward to convert these data into a total advance of the glacier front during the surge. Comparing the glacier outlines on the maps suggests a frontal advance of 1.8 km. However, the glacier front will have melted back from a more advanced position during the period 1978 (surge) to 2002 (map outline). Judging from the size of the moraine system (Figs. 2 and 3), retreat of the snout could have been at most 1.6 km during this period. This would imply that the total advance of the front related to the surge is not larger than 3.4 km. Altitudinal profiles along the central flow line are shown in Fig. 4. The absolute error of the topographic maps in this area is not known, but the profiles appear to be consistent with the occurrence of the surge in 1978. The mean slope of the pre-surge profile (1966) is 0.035, while that of the post-surge profile (2002) is 0.027. The mean difference in altitude (Δh) between the profiles is 51 m. It should be noted once more that by the year 2002 part of the glacier snout had retreated. Hence, the mean difference in altitude shortly before and after the surge probably was somewhat larger. The value of Δh cannot be taken directly as a measure of the change in mean ice thickness ΔH m , because the mean bed elevation is also different before and after the surge (solely due to the change in glacier length). With the representation of the bed chosen here (discussed shortly) we found Δb = 13 m. Altogether, we used a value of ΔH m = 50 m to characterize the change related to the surge. Since the maps from which the profiles are taken are 36 years apart, the difference in mean surface elevation can also partly be due to a non-zero surface balance rate during this period. Unfortunately, in situ measurements are not available to check this. Nuth et al. (2010) infer a negative mean balance rate for the period 1966-2005 from remote sensing data. However, in their map of elevation changes over north-west Spitsbergen (their Fig. 5), the outlines for Abrahamsenbreen are not identical to those inferred here from the 2002 topographic map. The difference is mainly in the size of the accumulation area (larger in the present study), which had a slightly positive balance rate during the period 1966-2005. In view of this, we have not made any corrections to the value of ΔH m = 50 m as being characteristic for the surge. We also note that with a significantly different (smaller) value, it is impossible to explain the glacier advance during the surge in terms of mass conservation (which implies a direct relation between change in glacier length and change in mean ice thickness). The geometric set-up of the model is shown in Fig. 1. The main glacier is modelled as a flow band with a constant width of 2000 m. It has its own surface mass budget, which is definitely negative because it is almost entirely in the ablation zone. The main stream is fed by tributary basins and glaciers numbered T1, . . ., T10. The mass input from these tributaries is parameterized in terms of a schematic geometry and depends on the climatic state. Details on this are described in Sect. 4. We assume that the tributaries have a considerably smaller characteristic response time than the main glacier because they are steeper, implying that the net balance of the tributaries is calculated as if they were in a quasi-steady state. This also implies that tributaries having a negative net balance are simply ignored in the total mass budget.
The bed topography is basically unknown. The surface of the glacier is smooth and has a small slope (∼ 0.03), suggesting that a simple formulation of the bed profile is adequate. A bed topography that can be handled well by the minimal glacier model reads (1) Thus the bed elevation drops off exponentially from a value b h at the highest point of the flow band (x = 0) to sea level for large values of x (see Fig. 1). The characteristic length scale at which the bed becomes lower is denoted by x l . We also considered using a linear bed profile, but this generates problems for glacier stands that are significantly larger than today (a bed far below sea level, which is unrealistic in this case). Here we chose x l = 12 000 m. Admittedly, this value is not more than an educated guess based on the general picture of valleys in northern Spitsbergen that are more deglaciated than the Abrahamsenbreen valley (topographic map: http://www.npolar.no/en/services/maps/). The value for b h is discussed later. Glacier model The theory of minimum glacier models has been developed in Oerlemans (2011), and the reader is referred to that work for details (freely available from the internet; http://www.staff.science.uu.nl/~oerle102/MM2011-all. pdf). We only give a brief description of the model version used here. The starting point for the model formulation is the continuity equation: where V is the volume of Abrahamsenbreen, F (< 0) is the calving flux, B A is the total surface mass budget of Abrahamsenbreen and the last term represents the mass input from the tributary glaciers as defined in Fig. 1. The glacier length L is measured along the central line on the glacier (Fig. 1). Since here we do not consider states of Abrahamsenbreen where it calves into the Woodfjord, we set F = 0. Because the glacier width w is assumed to be constant, the rate of change of ice volume can be written as where B tot is the right-hand side of Eq. (1), the total mass budget of the glacier. The mean ice thickness is parameterized as (Oerlemans, 2011) where s is the mean slope of the bed over the glacier length and α m and ν are constants. A "surge function" S has been introduced, which makes it possible to impose a surge cycle. S is prescribed as a function of time. A rapid decrease of S mimics the surge, whereas a steady increase of S represents the quiescent phase during which the glacier thickness steadily increases. The precise form of S(t) will be discussed later. The parameterization of the mean ice thickness as described by Eq. (4) gives a good fit to results from numerical flow line models. For s → 0 the mean thickness varies with the square root of the glacier length, which is in agreement with the perfectly plastic and Vialov solutions for a glacier/ice cap on a flat bed (Weertman, 1961;Vialov, 1958). The minimal glacier model was used earlier in a study of Hansbreen, southern Spitsbergen . The bed topography of Hansbreen is known, and it was found that Eq. (4) matches the observed mean thickness for ν = 10 and α m = 3 m 1/2 . However, Hansbreen is a non-surging tidewater glacier in a different geographical and geological setting; therefore from the very few glaciers with bedrock data we have selected Kongsvegen as a better glacier to estimate the parameter α m . Like Abrahamsenbreen, Kongsvegen is a surging glacier, which is currently in its quiescent phase (Melvold and Hagen, 1998). It is not located far away from Abrahamsenbreen (about 25 km). 
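For reference, the functional forms implied by the descriptions above, written in the spirit of the minimal-model framework of Oerlemans (2011); these are our reconstructions and may differ in detail from the original Eqs. (1)-(4):

```latex
% Our reconstruction of the functional forms described in the text
% (notation as in Oerlemans, 2011); details may differ from the original Eqs. (1)-(4).
b(x) = b_h \, e^{-x/x_l}                                  % exponentially decaying bed profile
\frac{\mathrm{d}V}{\mathrm{d}t} = B_A + F + \sum_i B_i    % continuity equation for the ice volume
w \, \frac{\mathrm{d}(H_m L)}{\mathrm{d}t} = B_\mathrm{tot}   % constant width w, so V = w H_m L
H_m = S(t)\, \frac{\alpha_m}{1 + \nu s} \, \sqrt{L}       % mean thickness with surge function S(t)
```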
From the bed and surface profiles of Kongsvegen a value of α m = 2.27 m 1/2 is found, indicating that basal conditions here allow for a lower resistance than in the case of Hansbreen. We have used the value of α m = 2.27 m 1/2 as the best possible estimate for Abrahamsenbreen. However, different values of this parameter will be used later in a sensitivity test. Using the chain rule for differentiation, the time rate of change of ice thickness can be expressed in terms of the rates of change of glacier length and of the surge function. Combining this with Eq. (3) then yields an evolution equation in which B tot, the total mass budget (i.e. the right-hand side of Eq. 3), appears. The prognostic equation for the length of the glacier can thus be written in the form of Eq. (7), with coefficients defined in Eq. (8). From Eq. (7) it is clear that a sufficiently rapid decrease of S (dS/dt ≪ 0) leads to a strong increase in L (but not in V). For the exponentially decaying bed profile described by Eq. (1), the mean bed slope over the glacier length is easily found analytically, as is the term ∂s/∂L needed in the coefficient b in Eq. (8). This concludes the formulation of the glacier model. When B tot is known, Eq. (7) can be integrated in time with a simple forward time-stepping scheme. The calculation of B tot is described in the next section. Mass budget of the main glacier Mass-balance measurements have been carried out on a number of glaciers in Svalbard but not on Abrahamsenbreen. Glaciers with a mass-balance record of at least 10 years, as filed at the World Glacier Monitoring Service, are Midtre Lovénbreen, Kongsvegen, Hansbreen and Austre Brøggerbreen. Long-term mean balance profiles are shown in Fig. 5. These profiles suggest that a schematic representation of the balance rate can be taken as a linear function of altitude, with balance gradient β and equilibrium-line altitude E. Linear regression on the profiles shown in Fig. 5 yields values of β ranging from 0.0039 to 0.0053 m w.e. m −1 . The mean value is 0.0045 m w.e. m −1 , which is used in this study. It is clear from the available observations that a higher-order formulation, e.g. with a quadratic term in h, is not meaningful. However, according to Hagen et al. (1993; their −2.25 m per %, see Sect. 6.1), the equilibrium-line altitude is not uniform over the region. This is taken into account by making E a function of x, where γ is the (constant) spatial gradient of the equilibrium-line altitude along the flow line of the glacier. From the glacier length and change of E we estimate γ = 0.005. The total mass gain or loss can now be found by integrating the balance rate over the glacier surface; this involves the mean bed elevation of the glacier, which, for the exponentially decaying bed profile, follows from integrating Eq. (1) over the glacier length. Tributary glaciers For the tributary glaciers feeding the main stream, some further analysis is required to arrive at useful estimates of the mass input. Although the tributary glaciers could be modelled in a similar way as the main glacier, we take a somewhat simpler approach in which the surface geometry is fixed. This is justified because the tributary glaciers have much larger mean slopes and therefore a weaker altitude-mass-balance feedback. We assume that a tributary glacier can be described as a basin with a length L y and a width w(y) = w 0 + qy. Here y is a local coordinate running from the lowest part of the basin (y = 0) to the highest part of the basin (y = L y ). The surface elevation is taken as h(y) = h 0 + sy, where s is the surface slope. The parameters q and s are constants which are different for the individual basins.
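Collecting the main-stream relations described above, a minimal numerical sketch could look as follows; the parameter values are those quoted in the text, the functional forms follow the hedged reconstruction given earlier, and the volume-stepping shortcut is ours rather than the paper's Eq. (7).

```python
import math

# Parameter values quoted in the text; functional forms follow the hedged
# reconstruction above; the volume-stepping shortcut below is ours, not Eq. (7).
W, ALPHA_M, NU = 2000.0, 2.27, 10.0      # flow-band width (m), thickness parameters
B_H, X_L = 323.0, 12000.0                # bed elevation at x = 0 (m), e-folding length (m)
BETA, GAMMA = 0.0045, 0.005              # balance gradient (m w.e. per m per yr), ELA gradient

def mean_bed_slope(L):
    return B_H * (1.0 - math.exp(-L / X_L)) / L

def mean_bed_elevation(L):
    return B_H * X_L * (1.0 - math.exp(-L / X_L)) / L

def mean_thickness(L, S=1.0):
    return S * ALPHA_M * math.sqrt(L) / (1.0 + NU * mean_bed_slope(L))

def budget_main(L, E, S=1.0):
    """Surface mass budget of the main stream (volume per year), linear balance profile."""
    h_mean = mean_bed_elevation(L) + mean_thickness(L, S)
    ela_mean = E + GAMMA * L / 2.0       # equilibrium line rises along the flow line
    return W * BETA * L * (h_mean - ela_mean)

def step_length(L, E, dt=1.0, S=1.0, B_trib=0.0):
    """Advance one time step by updating the volume and re-inverting for the length.
    Adequate only for slowly varying S; the original model instead integrates Eq. (7)."""
    V = W * mean_thickness(L, S) * L + dt * (budget_main(L, E, S) + B_trib)
    if V <= 0.0:
        return 1.0                       # glacier essentially gone
    for _ in range(30):                  # fixed-point solve of V = W * H_m(L) * L
        L = V / (W * mean_thickness(L, S))
    return L
```

The tributary input B_trib is kept as an explicit argument because, as noted below, the main stream alone has a negative budget that is compensated by the mass input from the tributaries.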
The total mass budget B i of basin i is then obtained from Evaluating the integral yields The geometric characteristics of the basins have been estimated from the topographic map and are summarized in Table 1. All basins have a trapezoidal shape, some becoming narrower when going up (q < 0) and some wider (q > 0). Due to the spatial gradient in E (see Eq. 11) the basins which are located further downstream along the x axis will experience a slightly higher equilibrium-line altitude. This is accounted for by applying a basin-dependent correction (Table 1). Basic experiments For S = 1 and E = 587 m the model produces a steady-state glacier with a length of 17.5 km, which is close to the presurge length (we cannot define this precisely). A good match between the calculated and observed (pre-surge) mean surface elevation is obtained with b h = 323 m, α m = 2.27 m 1/2 . The mean ice thickness then is 263 m. We refer to this state as the reference state. The corresponding mass inputs from the tributary basins are given in the last column of Table 1. The net balance of the main glacier stream is −0.66 m w.e., and this is then compensated exactly by the mass input from the tributaries. Glaciers, and certainly surging glaciers, are never in steady state. Nevertheless, it is useful to have a steady state as a reference state, because it reveals basic properties of the glacier model. At this point it should be noted that the value of the bed parameter b h is determined by the value of α m . Although we believe that the value of α m as discussed in Sect. 3 is a good choice, we will later discuss a few sensitivity tests to show how the value of α m affects the results. The next step is to introduce the surge behaviour. The surge function is formulated as The surge starts at t = t 0 and the surge amplitude S a determines by how much the thickness of the glacier is reduced. The characteristic timescale of the surge is denoted by t s . The last term in Eq. (16) represents the quiescent phase of the surge cycle, during which the glacier steadily thickens because the mass flux is smaller than the balance flux. The constant C should be chosen in such a way that the long-term mean value of S(t) is close to 1. We use t s = 2.5 years. This value is based on the observation that most surges of Svalbard glaciers typically last a few years (Sund et al., 2009). The value of S q is determined by two factors: the rate of mass addition in the accumulation area and the degree to which the glacier motion slows down after the surge. For the present case we have chosen values of S a and S q in such a way that (i) the frontal advance related to the surge is reproduced and (ii) the difference in the mean ice thickness before and after the surge is in agreement with the observations (about 50 m; Sect. 2). We thus found S a = 0.168 year −1 and S q = 0.002 year −1 . The duration of the surge cycle for Abrahamsenbreen is not known. For most glaciers the duration of the quiescent phase is in the 50-500-year range (e.g. Dowdewell et al., 1991). Because Abrahamsenbreen is a large and rather flat glacier in a relatively dry climate, we have chosen a surge cycle of = 125 years. Later we will show sensitivity tests that reveal how the particular choice of affects the results. A model simulation in which the surge mechanism is switched on at some point in time (after the glacier has reached the reference state defined above) is shown in Fig. 6. 
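A hedged sketch of a surge function with the properties attributed to Eq. (16): a rapid drop at surge onset governed by S a and t s, slow linear thickening S q during the quiescent phase, and a constant C chosen so that S averages to about 1 over a cycle. The analytic form below is our guess, not a quotation of the original Eq. (16).

```python
import math

# One possible surge function with the properties attributed to Eq. (16).
S_A, S_Q, T_S, PERIOD = 0.168, 0.002, 2.5, 125.0   # values quoted in the text

def _s_without_offset(tau):
    # integrated rapid decrease at surge onset plus integrated quiescent thickening
    return -S_A * T_S * (1.0 - math.exp(-tau / T_S)) + S_Q * tau

# choose C so that the cycle mean of S is close to 1, as required in the text
C = 1.0 - sum(_s_without_offset(PERIOD * i / 1000.0) for i in range(1000)) / 1000.0

def surge_function(t, t0=0.0):
    tau = (t - t0) % PERIOD        # time since the start of the current surge cycle
    return C + _s_without_offset(tau)
```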
As discussed above, a surge leads to a sudden decrease of the mean ice thickness and associated negative net balance (−0.3 m year −1 just after the surge). During the quiescent phase the net balance gradually becomes positive and the ice thickness increases, but this is not enough to compensate for the mass loss during and just after the surge. Therefore the glacier length decreases until a new equilibrium is reached after about 1000 years. Thus, the net effect of the surging mechanism is to reduce the long-term glacier length. This is in agreement with earlier studies (Adalgeirsdóttir et al., 2005;Oerlemans, 2011). Reference simulation In this section we describe how a reference simulation, including the surging behaviour, has been obtained. A simulation of the evolution of Abrahamsenbreen during the late Holocene requires a plausible climatic forcing. In the present model climate change is imposed by adjusting the equilibrium-line altitude according to The annual perturbation of the equilibrium-line altitude is denoted by E (t) and determined by annual temperature and precipitation anomalies, denoted by T (in K) and P (in %) respectively. E (t) is thus written as where the sensitivities ∂E/∂T and ∂E/∂P are assumed to be constant. Sensitivities have been determined for Nordenskiöldbreen with a detailed energy and mass-balance model (van Pelt et al., 2012; Table 2), and here we use their values: For many glaciers in a more alpine setting, values of ∂E/∂T are of the order of 100 m K −1 (e.g. Oerlemans, 2011). The value for ∂E/∂T given in Eq. (19) thus appears as rather small. This is due to the fact that in the high Arctic summer temperature, anomalies which mainly determine the sensitivity are much smaller than annual temperature anomalies. This has been taken into account in the determination of the sensitivities. The input data to calculate E are taken from van Pelt et al. (2013). In this paper a climate reconstruction back to AD 1300 was made on the basis of ice-core data from Lomonosovfonna as well as climate records from Longyearbyen. For details the reader is referred to Divine et al. (2011) andvan Pelt et al. (2013). The temperature and precipitation anomalies, relative to the period 1989-2010, are shown in Fig. 7, and the corresponding history of the equilibrium-line altitude in Fig. 8. The most prominent feature in the reconstructed temperature record is the LIA, lasting from the late 16th century to the end of the 19th century, with long-term temperatures typically 4 K below the medieval and presentday levels. The reconstruction does not reveal a clear correlation between temperature and precipitation anomalies. The variation of the equilibrium-line altitude is substantial. During the period 1750-1850, the equilibrium line was about 200 m lower than in medieval times. The value of E 0 is optimized in such a way that the simulated maximum glacier length in 1978 corresponds to the observed length. This yields E 0 = 657 m. Note that E is defined with respect to the period 1989-2010, implying that its mean value over the period 1300-2010 is not 0. In the climate reconstruction used here, the value of E during the period 1989-2010 was 76 m larger than the long-term mean since AD 1300. The simulated glacier length (Fig. 8) appears to be in good agreement with geological evidence (distribution of moraines, strand lines and floodplains). There is general agreement that Abrahamsenbreen reached a Holocene maximum extent during the LIA, like most glaciers in northern Spitsbergen (e.g. 
Forman et al., 2004; Salvigsen and Høgvard, 2005; Humlum et al., 2005). According to our model, Abrahamsenbreen would have had a length of about 5 km in medieval times and started to grow in the 16th century until it reached LIA size (between 18 and 22 km) in the second half of the 19th century. For the calculation shown in Fig. 8, the value of E has been kept constant at the 1989-2010 value for the period after 2010. This clearly implies steady retreat, but the timescale at which this happens is large. This is an implication of the very small bed slope and the related strong altitude-mass-balance feedback (Oerlemans, 2011). Figure 9 shows the mass inputs (in m 3 of ice per year) of the tributary basins and glaciers corresponding to the simulation shown in Fig. 8. Although the inputs are highly correlated, there are large differences in the absolute changes of mass input through time. The input from tributary glaciers T4, T5 and T10 is sometimes 0. For T9 this happens just a few times. The other basins always deliver some mass to the main glacier, but the amounts can halve or double during high or low values of E. We refer to the simulation just described as the reference simulation. The model has been tuned in the best possible way given the limited amount of observations. There appears to be no evident discrepancy between the simulated glacier evolution and the geological evidence. The effect of surging The question of how surges interfere with the long-term response of glaciers to climate change has been raised several times (Hagen et al., 2005; Paasche, 2010). Although the present model does not initiate surges by means of an internal mechanism, it does include the main effect of a surge on the surface mass budget of a glacier related to the reduction of the mean surface elevation. Since a lower surface elevation implies a more negative mass budget, one would expect that a regularly surging glacier would have a smaller long-term mean glacier length. With the present model set-up it is not possible to just switch off the surging mechanism, because by virtue of Eq. (16) the glacier would then be in the quiescent phase continuously and the ice thickness would increase forever. However, a meaningful way to study the effect of surging is to vary the duration of the surge cycle and see how this affects the long-term mean glacier length. To make a fair comparison between runs with different surge cycles, the constant C in Eq. (16) is adjusted in such a way that the mean value of S(t) is equal to 1 over the integration period. Figure 10 shows a comparison of runs with a longer (doubled, i.e. 250 years) and shorter (halved, i.e. 62.5 years) surge cycle. The integrations have been extended until AD 3000, with the equilibrium-line altitude equal to the mean value over the period 1989-2010. Figure 10. The effect of surges on the evolution of Abrahamsenbreen. The "reference" simulation is the same as in Fig. 8 (note that this simulation has a 125-year surging period). This leads to a steady decay of the glacier, implying that the current size of Abrahamsenbreen is far too large for the climatic conditions that prevailed during the past few decades. The effect of a different surge frequency is small until AD 1900 but much more obvious afterwards. This is related to the fact that, with a glacier in a state of decay, the mass-balance effect of surges works in the same direction as the climatic forcing. Moreover, the glacier surface extends into a region with anomalously high ablation rates.
Many more numerical experiments were carried out with different surge parameters. An increased surge amplitude (a larger value of S_a) enhances the effect on the long-term glacier length because it implies a larger drop of the mean surface elevation. When the surge takes longer (a larger value of t_s) there is a similar effect.

The decay of the glacier after the year 2000 is a remarkable feature given the relatively small climatic forcing. In the model simulation, after 1989 the equilibrium line is 76 m higher than for the period 1300–1989. The new steady-state length is about 4 km, but it takes 500 years to approach this state. The extreme climate sensitivity of Abrahamsenbreen is a consequence of the small bed slope. Basic theory on the relation between E and L for a schematic glacier geometry (constant glacier width) shows that a first-order estimate of the sensitivity is given by (Oerlemans, 2001, 2012)

∂L/∂E ≈ −2/s,   (20)

where s is the mean bed slope. For the bed parameters used here, a typical value of s is 0.015, implying that ∂L/∂E ≈ −133. So a 50 m change in the equilibrium-line altitude would imply a change in glacier length of 6.7 km. Oerlemans (2011) also reveals that the sensitivity as defined by Eq. (20) is larger when the accumulation zone is wider than the ablation zone. For Abrahamsenbreen this implies that the value of 133 is probably a conservative estimate.

Sensitivity to bed elevation

The basic unknown parameters that can be adjusted to make the model produce the correct glacier length and mean surface elevation are the bed elevation parameter b_h, the shape parameter α_m and the reference equilibrium-line altitude E_0. We thus have three parameters and two constraints, implying that a unique set of parameters cannot be found. In Sect. 5 the problem was solved by assuming that the value of α_m is the same as for Kongsvegen. Although Kongsvegen is also in a post-surge state and is located in a similar geological setting, it is still possible that the value of α_m for Abrahamsenbreen differs significantly. Therefore some calculations were carried out with perturbed values of α_m, namely +20 and −20 %. Changing the value of α_m implies that a recalibration has to be done by adjusting the values of b_h and E_0 to obtain the correct glacier length in 1978 and the correct mean surface elevation. For a 20 % larger value of α_m (2.72 m^1/2) we found b_h = 241 m (instead of 323 m) and E_0 = 643 m (instead of 657 m). For a 20 % smaller value of α_m (1.82 m^1/2) we found b_h = 412 m and E_0 = 670 m. Because the ice thickness is proportional to α_m, it is not surprising that the adjustments in b_h are quite significant. However, the required changes in the value of E_0 are rather small.

The evolution of the glacier length for the three different tunings is shown in Fig. 11. It is interesting to see that for the case with α_m = 1.82 m^1/2 the surge in 1853 produces a slightly longer glacier than in 1978. The differences among the three cases are small for the period of glacier growth and significant for the period of glacier retreat after 2000. This is a consequence of the fact that during the period of glacier growth the glacier length was rather close to its equilibrium value most of the time (because, irrespective of short-term fluctuations, the equilibrium line drops gradually).
After the year 2000, the glacier is strongly out of balance for the imposed forcing, and the effect of different ice thicknesses on the rate of retreat turns out to be more pronounced. In summary, we conclude that the simulated glacier evolution depends on the choice of α m but not in a dramatic way. The characteristic behaviour of Abrahamsenbreen for the imposed forcing is rather similar for the three different tunings. Sensitivity to changes in the equilibrium-line altitude By means of numerical modelling it has been shown that a glacier on an isolated mountain bordered by a flat plane will grow to infinity if the equilibrium line is lowered beyond a certain critical value (Oerlemans, 1981;Fig. 10). This occurs because the feedback of the mean surface elevation on the balance rate keeps the total mass budget positive. In the present model the bed profile decays exponentially to a constant value (namely, sea level), and the critical value of the equilibrium-line altitude described above is very likely to be in the system. In the case of Abrahamsenbreen this would imply that for a certain drop of the equilibrium line the glacier would grow and grow until the front reaches the Woodfjord and only mass loss by calving could stabilize the glacier at some point. Critical (bifurcation) points in a dynamical system normally imply an increasing sensitivity and response time when the critical point is approached. Theoretically, when approaching the critical point the sensitivity and response time go to infinity. The large response time suggested by Fig. 10 actually suggests that Abrahamsenbreen may indeed be close to the critical point. With these considerations in mind we have carried out a set of integrations with different values of E after AD 2010. Figure 12 shows glacier length for different climatic perturbations, all started with the calibrated glacier history un-til AD 2010. Clearly, for E = −160 m the glacier quickly comes in a state of runaway growth, and it ultimately grows out of the model domain. For E = −120 m the glacier approaches a steady state, albeit very slowly. For further increasing values of E , steady states are approached more quickly. In fact, the curves in Fig. 12 show that the response time decreases when the steady-state glacier length is smaller. This is not a direct consequence of the glacier size but is related to the corresponding increase in the mean bed slope and the larger distance (in parameter space) to the critical point. The future of Abrahamsenbreen It is very likely that the Arctic will be subject to further warming, which will have a large impact on the glaciers of Svalbard. Reduced sea ice may lead to higher precipitation rates, but it is questionable whether this could stop the retreat of the glaciers. According to Eq. (19), a precipitation increase of about 15 % per degree warming would be required to keep the equilibrium line in place. A detailed analysis of the precipitation regime in the Arctic with a comprehensive climate model suggests a sensitivity of 4.5 % increase per degree of temperature warming (Bintanja and Selten, 2014). Although this is significantly more than the global value of 1.6-1.9 % increase per degree, it is by no means sufficient to prevent the rise of the equilibrium line when temperatures go up. The future evolution of Abrahamsenbreen can be studied with the present model, because it has been calibrated and no further assumptions are needed to define an initial state. 
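The runaway behaviour and the critical equilibrium-line altitude discussed above can be mimicked with a deliberately crude toy model. This is not the model used in this paper: the bed, thickness scaling and balance parameters below are invented, and the only point is to show how the altitude–mass-balance feedback on a bed that flattens out towards sea level removes the large-length steady state once the equilibrium line drops far enough.

```python
# Toy model (illustration only, all parameters assumed): length evolves from the
# total surface mass budget, with the mean surface elevation fed back into a
# linear balance rate on a bed that decays exponentially towards sea level.
import numpy as np

def run(dE, years=3000):
    b0, xs = 600.0, 10e3           # bed height scale (m) and e-folding length (m)
    alpha, beta = 1.5, 0.005       # thickness scale (m^0.5), balance gradient (1/yr)
    E = 460.0 + dE                 # perturbed equilibrium-line altitude (m)
    L = 20e3                       # initial length (m)
    for _ in range(years):
        bed_mean = b0 * (xs / L) * (1.0 - np.exp(-L / xs))   # mean bed elevation
        h_mean = bed_mean + alpha * np.sqrt(L)               # mean surface elevation
        budget = beta * (h_mean - E) * L                     # budget per unit width (m^2/yr)
        L = max(L + budget / (1.5 * alpha * np.sqrt(L)), 500.0)
    return L / 1e3

for dE in (40.0, 0.0, -60.0):
    print(f"dE' = {dE:+.0f} m  ->  L after 3000 yr: {run(dE):.1f} km")
```

With these invented numbers, raising the equilibrium line gives a smaller steady state, while lowering it far enough removes the steady state altogether and the toy glacier keeps growing, qualitatively mirroring the behaviour in Fig. 12.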
Nevertheless, one should be aware of the schematic nature of the model and the limited data available for Abrahamsenbreen, implying that the constraints are not very tight. The results given below should therefore be considered as indicative of a possible scenario rather than as a prediction.

We have carried out a set of integrations until AD 2150, with an equilibrium line that rises linearly in time according to

E'(t) = µ_E (t − 2010).   (21)

Again, the anomaly is defined with respect to the period 1989–2010; time t is in years AD. With the aid of Eq. (19), changes in equilibrium-line altitude can be related to changes in temperature and precipitation. For instance, µ_E = 1 m year−1 would correspond to a warming rate of 0.028 K year−1, or to a warming rate of 0.04 K year−1 combined with an increase in precipitation of 0.016 % year−1. In all the integrations the surge period and amplitude have been kept constant.

Figure 13 shows the result for µ_E = 1 m year−1, which we consider a typical value for the expected warming in the Arctic. In this case the length of Abrahamsenbreen is predicted to be reduced to 12.7 km by the year 2100. The corresponding reduction in volume is 66 % of the value in 2010. The net balance rate for the entire system and the input from the tributaries are also shown. By the year 2100 the input from the tributary glaciers has been reduced to virtually 0, because they have a negative net balance.

Discussion

In this paper we have applied a simple model to study the climate sensitivity of Abrahamsenbreen. We have demonstrated that even with a limited amount of information a meaningful calibration can be carried out, and some conclusions can be drawn about the present state of balance and the future of Abrahamsenbreen under conditions of climate change.

Although the main trunk of Abrahamsenbreen has a relatively simple geometry, the total glacier system is complicated, with many basins and tributary glaciers providing mass to the central flow band. The parameterization we have chosen to represent the geometry is effective and contains sufficient information to quantify the overall mass budget. Admittedly, the assumption that the tributaries are in a quasi-steady state is perhaps not always satisfied when the climatic forcing changes rapidly. However, modelling a glacier system like Abrahamsenbreen with a two-dimensional (vertically integrated) or three-dimensional ice-flow model, and dealing explicitly with the tributaries, would be a complicated task requiring a large amount of input data. We therefore believe that the method used here is suitable for studying the dynamics of complex glacier systems with many tributaries.

We found that the effect of surges on the long-term size of the glacier is significant but not dramatic. Since surges are imposed rather than internally generated, only the impact of surges on the mass budget, through the lowering of the mean surface elevation, could be dealt with. On the basis of our calculations (Fig. 11), we expect that in a warming Arctic surging glaciers are prone to retreat somewhat faster than non-surging glaciers. It is likely that the effect of surging is larger for glaciers with a larger surge amplitude. However, when comparing the surge amplitude of Abrahamsenbreen with that of some other glaciers in Svalbard (e.g. Skobreen, Kongsvegen, Monacobreen, Nathorstbreen; Sund et al., 2009), it appears that Abrahamsenbreen is quite typical. We therefore think that our results apply to other surging glaciers as well.
It is encouraging that forcing the model with an independently derived climate history leads to a glacier evolution that is in line with the geological and geomorphological evidence. This certainly lends credibility to the approach, and makes projections for the future more believable. If the present climatic conditions persist, we predict that Abrahamsenbreen will shrink considerably (to a length of about 4 km). In the case of future warming of a few degree K, the glacier will ultimately disappear, but this will take a few hundred years. Due to the fact that Abrahamsenbreen flows into a valley with a very small bed slope, its sensitivity to climate change is very large. Our calculations suggest that Abrahamsenbreen is rather close to a critical point, marking the onset of a runaway situation in which the glacier will grow into Woodfjorden for only a modest drop of the equilibrium line (160 m). However, this would take a long time (a few thousand years, Fig. 12). The large sensitivity of Abrahamsenbreen is probably not an exception. Many large glaciers on Spitsbergen have small slopes and are subject to similar processes. An earlier modelling study of Hansbreen in southern Spitsbergen also revealed a large sensitivity to climate change (Oerlemans et al., 2011). The consequence of these findings is that a temperature increase of 1-2 K would remove most of the ice from Spitsbergen, although it may take a long time (hundreds to thousands of years). This is in line with the growing evidence for an only marginally glaciated landscape in Spitsbergen during the Holocene Climatic Optimum (e.g. Humlum et al., 2005).
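As a rough closing check, the first-order sensitivity of Eq. (20) can be combined with an assumed temperature sensitivity of the equilibrium line to translate a 1–2 K warming into a length change. The value of ~36 m per K used below is a placeholder consistent with the conversions quoted earlier (Eq. 19 itself is not reproduced in this text), and the estimate is conservative, as noted above.

```python
# Back-of-the-envelope length response to warming, using dL/dE ≈ -2/s (Eq. 20)
# and an assumed equilibrium-line sensitivity of ~36 m per K.
s = 0.015                 # mean bed slope used in the text
dL_dE = -2.0 / s          # ≈ -133
dE_dT = 36.0              # m per K (assumed placeholder)

for warming in (1.0, 2.0):                     # K
    dE = dE_dT * warming                       # equilibrium-line rise (m)
    print(f"{warming:.0f} K: ELA +{dE:.0f} m, length change {dL_dE * dE / 1e3:.1f} km")
```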
2018-10-10T00:31:21.550Z
2015-04-27T00:00:00.000
{ "year": 2015, "sha1": "2a22a14886799900949283b8696a9c760fe26ad8", "oa_license": "CCBY", "oa_url": "https://tc.copernicus.org/articles/9/767/2015/tc-9-767-2015.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4670940b809b4f44e7935ffb755b577bd166dfef", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Geology" ] }
14578540
pes2o/s2orc
v3-fos-license
Application of Detergents or High Hydrostatic Pressure as Decellularization Processes in Uterine Tissues and Their Subsequent Effects on In Vivo Uterine Regeneration in Murine Models Infertility caused by ovarian or tubal problems can be treated using In Vitro Fertilization and Embryo Transfer (IVF-ET); however, this is not possible for women with uterine loss and malformations that require uterine reconstruction for the treatment of their infertility. In this study, we are the first to report the usefulness of decellularized matrices as a scaffold for uterine reconstruction. Uterine tissues were extracted from Sprague Dawley (SD) rats and decellularized using either sodium dodecyl sulfate (SDS) or high hydrostatic pressure (HHP) at optimized conditions. Histological staining and quantitative analysis showed that both SDS and HHP methods effectively removed cells from the tissues with, specifically, a significant reduction of DNA contents for HHP constructs. HHP constructs highly retained the collagen content, the main component of extracellular matrices in uterine tissue, compared to SDS constructs and had similar content levels of collagen to the native tissue. The mechanical strength of the HHP constructs was similar to that of the native tissue, while that of the SDS constructs was significantly elevated. Transmission electron microscopy (TEM) revealed no apparent denaturation of collagen fibers in the HHP constructs compared to the SDS constructs. Transplantation of the decellularized tissues into rat uteri revealed the successful regeneration of the uterine tissues with a 3-layer structure 30 days after the transplantation. Moreover, a lot of epithelial gland tissue and Ki67 positive cells were detected. Immunohistochemical analyses showed that the regenerated tissues have a normal response to ovarian hormone for pregnancy. The subsequent pregnancy test after 30 days transplantation revealed successful pregnancy for both the SDS and HHP groups. These findings indicate that the decellularized matrix from the uterine tissue can be a potential scaffold for uterine regeneration. Introduction Infertility is often associated with the inability of women to either conceive or maintain pregnancy. On average worldwide, 72.4 million or 9% of women at the reproductive age are infertile [1]. The infertility can be attributed to various reasons such as: ovulatory dysfunction [2,3], tubal diseases [4,5], endometriosis [6][7][8] and uterine malformations [9,10]. A common infertility treatment is assisted reproductive technology (ART), which can work for ovulatory and tubal disorders but not for uterine abnormalities. In addition, gynecological malignancies such as cervical and endometrial cancers often lead to the removal of the uterus by hysterectomy, which prevents the women from carrying out future pregnancies. A current solution for this type of infertility includes gestational surrogacy [11]. Gestational surrogacy is often preferred as it carries the lowest risk and highest success rate. However, it is impossible to prevent the possibility of accidents such as complications and death of the surrogate mother during childbirth. The conception and childbirth impose physical and mental stresses on the surrogate mother. In some cases, a suitable surrogate mother might not be found for the couple. Moreover, by using surrogacy, the intended mother would not be able to experience the childbearing process. 
For women who prefer to carry out the pregnancy themselves, uterus transplantation [12] has gotten a lot of attention. Recently, the success rate of uterine transplantation has been steadily increasing, and one study also showed the possibility of embryo implantation following surgery [13,14]. However, to date, there is still no report of childbirth following a uterine transplantation in a human. Furthermore, due to ethical reasons and risks for donors and/or recipients, uterus transplantation from a living donor is restricted on a case-by-case basis [15,16], while extracting a uterus from a dead donor has a higher risk of incompatibility [17]. In all cases, following transplantation recipients must be treated with immunosuppressive drugs for a lifetime. In these circumstances, uterus reconstruction by tissue engineering approach is considered an attractive approach. Biological materials, such as collagen [18], for uterine tissue regeneration has been reported. While promising, the regeneration speed was considered slow. On top of this, due to the nature of collagen scaffold, the tissue regeneration is dependent on existing native tissue, making it inapplicable to whole uterine tissue engineering. Considering these obstacles, the application of decellularized matrices as scaffold for uterine tissue engineering was taken as the focus of this study. Decellularized matrices have been reported for scaffolding in many vital organs such as the kidney [19,20], heart [21,22], blood vessel [23,24], and bone [25,26]. In decellularization, the tissue of interest is subjected to extreme conditions such as high acidity, alkalinity or pressure. With these methods, disruption of the cell membrane occurs and causes cell death. Removal of the dead cells without altering the overall properties of tissue can be done using a washing buffer containing enzymes. Due to the preservation of original tissue properties, a lower immune response for the host has been reported [27]. Consequently, we hypothesized that this method is highly useful for the purpose of uterus reconstruction and is under-explored. In this study, we first assessed three different decellularization methods for their applicability to uterine tissues, namely sodium dodecyl sulphate (SDS), high hydrostatic pressure (HHP) and Triton-X. Through this, we created scaffold with high bio-and structure-compatibility to the native uterus. Subsequently, an in vivo study using decellularized tissue was performed and evaluated for the capability of reconstruction and the responsiveness to the ovarian hormone. In addition, the capability for pregnancy was investigated in order to confirm the possibility of utilizing decellularized uterine tissues as scaffolds in uterine reconstruction. Uterine tissue sample preparation All animals used in this investigation were housed in the University of Tokyo Animal Care Facility according to the institutional guidelines for the use of laboratory animals. The experimental procedures were approved by the institutional animal experiment committee. The title of the approved animal experiment plan is ''Rat uterine tissue engineering (ID number is P12-113; Duration is January 8, 2012 to January 7, 2017; responsible person is Yutaka Osuga)'', which was finally approved by the Animal Experimentation Committee, Faculty of Medicine, University of Tokyo, as of December 16, 2011. Sprague Dawley rats (SD rats, female, 9 weeks old) were purchased from CLEA Inc. Japan. 
Rats were euthanized painlessly by an overdose of the anesthetic agent isoflurane (Mylan, USA). Uterine horns were excised from rats in the proestrus or metestrus cycle and trimmed of connective tissue and fat. Briefly, the horns were rinsed of blood with phosphate buffered saline (PBS), incised along the mesometrium line and cut into 15 mm × 5 mm rectangular samples.

Decellularization of uterine tissue

Ionic detergent (sodium dodecyl sulfate; SDS) treatment. The SDS solution was made according to the method by Booth et al. [28] with a slight modification. Briefly, SDS (Wako, Japan) was dissolved in PBS and sterilized. Up to four samples at a time from the same rat were immersed in 5 ml of SDS solution at room temperature under the following conditions: 0.1% SDS/PBS for 1 hour, 1% SDS/PBS for 1 hour, or 1% SDS/PBS for 2 hours. After SDS treatment, samples were washed for 1 week at 4 °C using a washing buffer containing 0.9% NaCl (Wako, Japan), 0.05 M magnesium chloride hexahydrate (Wako, Japan), 0.2 mg/ml DNase I (Roche, USA) and 1% penicillin and streptomycin (Gibco, Japan) on a shaker set at a frequency of 1 Hz.

Non-ionic detergent (Triton X-100). The Triton X-100 solution was made according to the method by Bader et al. [30] with modification. Triton X-100 (Sigma-Aldrich, Japan) was dissolved directly in PBS and sterilized. Up to four samples at a time from the same rat were immersed in 5 ml of Triton X-100 solution at room temperature under the following conditions: 1% Triton X-100/PBS for 24 hours, 3% Triton X-100/PBS for 24 hours, or 3% Triton X-100/PBS for 48 hours. After decellularization, samples were washed for 1 week at 4 °C in a washing buffer, as previously described for the SDS treatment, on a shaker set at a frequency of 1 Hz.

Histology

Uterine tissues were briefly rinsed with PBS containing 1% penicillin and streptomycin and fixed in 10% neutral buffered formalin solution. Next, samples were dehydrated in graded alcohol, embedded in paraffin blocks and sectioned at 4 µm thickness. Samples were stained with Hematoxylin & Eosin (nucleus and cytoplasm), Masson's Trichrome (collagen) and Verhoeff's Van Gieson (elastin). Samples were observed using an optical microscope (Zeiss, Germany) equipped with a camera.

Mechanical test

Extracted uterine horns were cut into ring-shaped samples with a length of 3–6 mm and then opened up into a rectangular shape. Mechanical tests were carried out using an Autograph AGS-5kNG (Shimadzu, Japan). The thickness was determined as the point where the load cell detected a compressive force of 0.02 N when the sample was positioned flat on the stage and loaded in compression. Next, a uniaxial tensile test was performed on each of the samples. The sample was fixed to the fixtures at both short sides. After measuring the sample's length and width, the sample was loaded under tensile strain at a speed of 0.5 mm/min. The force applied until rupture of the sample and the distance between the fixtures were recorded automatically. The mechanical stress and tensile strain were calculated from these data.

Transmission electron microscopy (TEM)

Samples were minced into pieces with less than 0.1 mm³ total volume and fixed in 2.5% glutaraldehyde/PBS overnight. The samples were then processed and observed according to the standard TEM procedure.

Protein Assay

Uterine tissue samples were washed briefly in PBS prior to wet weight measurement. Following this, samples were freeze-dried for a minimum of 8 hours using a vacuum freeze dryer (Eyela, Japan).
The dried samples were immersed in a lysate buffer containing 446 mg/ml of papain, 5 mM cysteine-HCl and 5 mM EDTA-2 Na and incubated at 60uC for at minimum of 15 hours. The samples were further refined using a homogenizer and an ultrasonic cell disruptor. The end product of this process is referred as the ''protein extracted sample''. DNA assay. A commercially available DNA assay kit (Quant-iT PicoGreen dsDNA assay kit; Invitrogen, USA) was used to quantify the DNA contents. Each of the protein extracted samples was processed according to the standard protocol and analyzed with a fluorospectrophotometer at a wavelength of l = 522 nm. Hydroxyproline assay. Collagen protein contents of each sample were measured in terms of hydroxyproline content through the standard procedure [31]. Briefly, 50 ml of each protein extracted sample was mixed with 50 ml of 4 M NaOH and heated at 120uC for 30 minutes. Subsequently, 50 ml of 1.4 N citric acid and 250 ml of chloramine-T solution were added to the samples. After 20 minutes elapsed, 250 ml aldehyde-perchloric acid reagent was put into the solution and incubated at 70uC for 20 minutes. Each sample was analyzed with a fluorospectrophotometer at a wavelength of l = 450 nm. Elastin assay. A commercially available elastin assay kit (Fastin elastin assay kit; Biocolor, UK) was used for assessing elastin contents. 100 ml of each protein extracted sample was mixed with 50 ml of elastin precipitating reagent and incubated for 15 minutes. The sample was centrifuged at 10,000 g for 10 minutes and the precipitate was collected. 1 ml of dye reagent (TPPS: 5, 10, 15, 20-tetraphenyl-21, 23-porphine sulphonate) was added and mixed lightly for 90 minutes. The dye-bound elastin was collected by centrifugation at 12,000 g for 10 minutes, re-diluted by 250 ml of dye dissociation reagent. Each sample was analyzed with a fluorospectrophotometer at a wavelength of l = 513 nm. Transplantation of samples Twenty four uterine horns from SD rats (9 weeks old, female) at the proestrus or metestrus cycle were used for the experiments of the decellularized tissue transplantation. These horns were divided into three groups (n = 8, sham, SDS, and HHP). Sham operations refer to incising part of uterine horn ( Fig. 1) and stitching it back without any modification. For SDS and HHP, part of the uterine horn was excised ( Fig. 1) and replaced with SDS or HHP decellularized tissues extracted at the same point in the estrous cycle as the recipient. The decellularized tissue was sutured by a non-degradable polypropylene thread at 8 points. The points were on the sample's corners and the some points on the sides of each sample. After suturing, the implanted part was covered with seprafilm (KAKEN, Japan), a postoperative film that prevents adhesion between the implanted part and surrounding fat tissues or other organs. The rats were kept for 1 month and sacrificed during the proestrus cycle. Implants were extracted and trimmed of connective tissue and fat. The samples were processed for histology, protein assay and mechanical testing. Evaluation of fertility after transplantation of decellularized tissues Thirty two uterine horns from SD rats (9 weeks old female) at the proestrus cycle or metestrus cycle were used for the pregnancy test. These horns were divided into three groups: sham (n = 16), SDS (n = 8), and HHP (n = 8). Thirty days following the surgery (as previously described in section 2.7), the female rats were mated with fertile males to introduce pregnancy. 
Pregnancy was examined by the existence of sperm. Twenty one days after the confirmation of virginal sperms, the rats were sacrificed for the evaluation of pregnancy. Evaluation of decellularized tissue Evaluation of the decellularization efficiency was done by using hematoxylin and eosin staining. Native uterine tissue is presented in Fig. 2A as the control sample. Fig. 2B-D represents samples decellularized by SDS with various concentrations and time frames. The sample processed with 0.1% SDS for 1 hour (Fig. 2B) showed the most residual cells in the epithelial and stromal layers compared to the other samples. Treatment at 1% SDS for 1 hour (Fig. 2C) effectively removed cells in the smooth muscle and epithelial layers, but some cells still remained in the stromal layer. On the other hand, samples decellularized using 1% SDS for 2 hours (Fig. 2D) showed the best result with a thorough removal of the majority of smooth muscle cells and stromal cells. Collagen protein helps maintain the elasticity of the uterine tissue and can be found in the form of fibers in the tissue. These fibers, like elastin, are susceptible to denaturation by external stresses or chemicals. Assessment of the collagen content in the ECM was done using Masson's Trichrome staining. The intensity of Aniline Blue in decellularized matrices was compared to the native tissue (Fig. 3A) as a qualitative indicator of the reduction of collagen content. All of the decellularized uterine tissue samples (See Fig. 3B-3K) revealed a reduction in collagen content relative to the native tissue. The conditions in the SDS and HHP groups that had the highest residual collagen content after treatment are: 1% SDS for 1 hour, HHP 10-4 (Fig. 3E), and HHP 30-4 (Fig. 3G). In the Triton-X group (Fig. 3I-K) all conditions were shown to reduce the collagen content significantly compared to native tissue. Verhoeff's Van Gieson staining (Fig. 4) was used to detect the elastin content in the decellularized matrix. These fibers are stained dark purple and a qualitative analysis of the elastin content is done by comparing the intensity of the stain. Similar to the collagen content results, there was a reduction of elastic content across all samples relative to that of the native tissue. In the SDS group, 1% SDS for 1 hour (Fig. 4C) samples exhibited a higher elastin content compared to the other two conditions. In the HHP group, HHP 10-4 (Fig. 4E) and HHP 30-4 (Fig. 4G) showed a similar elastin content, and were stained more intensely than the other two samples in the HHP group ( Fig. 4F and 4H). Within the Triton-X group (Fig. 4I to 4K), treatment with 1% Triton-X for 24 hours (Fig. 4I) had the highest elastin content; however, the staining was less intense compared to SDS and HHP group. The DNA contents of the decellularized tissues were quantified to confirm the effectiveness of the cell removal methods. Fig. 5A shows that the SDS and HHP treatments reduced the DNA contents in uterine tissues with time. The remaining DNA contents in the tissues reached a plateau at 7 days. At 11 days washing, there was no statistical difference in the DNA contents in the tissues between SDS and HHP treatments. As shown in Fig. 5B, the remaining DNA contents at 7 days washing for SDS (0.9660.30610 24 mg/mg) and HHP (0.5360.22610 24 mg/mg) tissues were significantly lower than that of the native tissues (8.7560.12610 24 mg/mg), with a lower DNA content in HHP tissues compared to SDS tissues. 
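The group comparisons behind statements like these rest on the Student's t-test described in the statistical analysis section (significance at p < 0.05). A minimal sketch of such a comparison is given below; the per-sample values are invented for illustration, not the measured data.

```python
# Invented example values (units of 1e-4 mg DNA per mg dry weight); the real
# per-sample measurements are not reproduced here.
from scipy import stats

native = [8.6, 8.8, 8.9, 8.7]
sds    = [1.2, 0.8, 1.0, 0.9]
hhp    = [0.6, 0.4, 0.5, 0.6]

for name, group in (("SDS", sds), ("HHP", hhp)):
    res = stats.ttest_ind(native, group)
    print(f"native vs {name}: t = {res.statistic:.1f}, p = {res.pvalue:.2e}, "
          f"significant: {res.pvalue < 0.05}")
```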
At 7 days washing, the hydroxyproline (HP) content of decellularized tissues was also measured. In Fig. 5C, the HP content per dry weight for the SDS and HHP group (SDS: 3.11 ± 1.61 × 10^-1 mg/mg, HHP: 3.69 ± 1.63 × 10^-1 mg/mg) were 52% and 44% lower than the native tissue (6.60 ± 1.25 × 10^-1 mg/mg). Contrary to this, the elastin contents of SDS (6.07 ± 0.96 × 10^-2 mg/mg) and HHP (6.91 ± 1.25 × 10^-2 mg/mg) were 39% and 30% lower, respectively, than the one of native tissue (9.90 ± 1.53 × 10^-2 mg/mg) (Fig. 5D).

The collagen density and structure of the tissues was visualized through TEM (Fig. 6A). Fig. 6B and 6C show the collagen fibers of decellularized matrices by SDS and HHP, respectively. In both decellularized samples, a reduction in the number of fibers was observed. Nonetheless, the structure of the residual collagen fibers was preserved.

Next, the thickness and mechanical properties of the decellularized uterine tissues were evaluated as shown in Fig. 7. The Young's modulus of SDS-treated samples (0.688 ± 0.131 MPa) was 2.00 and 2.29 times higher than those of the native and HHP samples (native: 0.344 ± 0.043 MPa, HHP: 0.300 ± 0.079 MPa, Fig. 7A). There was a statistically significant increase in the Young's modulus of SDS-treated tissues compared to the native tissues (p < 0.05). Similarly, the rupture strength of the SDS group (0.379 ± 0.065 MPa) was 1.45 and 1.18 times higher than that of the native tissue (0.258 ± 0.071 MPa) and HHP-treated (0.320 ± 0.017 MPa) samples (Fig. 7B). However, there was no significant difference between the SDS group and native tissues. HHP-treated samples showed similar levels in Young's modulus and rupture strength to native tissues. There were no statistical differences in the mechanical strength between the HHP group and native tissues. The thickness of SDS- (0.66 ± 0.10 mm) and HHP- (0.64 ± 0.01 mm) decellularized tissues decreased 36% and 38%, respectively, in comparison to the native tissue (1.03 ± 0.13 mm) (Fig. 7C).

In vivo evaluation

In vivo transplantation experiments are necessary to determine the regenerative capability of decellularized tissues with native tissues. Thirty days after transplantation, uterine tissues were extracted from the rats and the gross evaluation of the tissue is presented in Fig. 8. Comparison between the native (Fig. 8A-Right), sham (Fig. 8A-Left), HHP- (Fig. 8B-Right) and SDS- (Fig. 8B-Left) treated tissues show no noticeable differences. The implants seemed to fully integrate with the surrounding native tissue without any inflammation. At day 30, decellularized uterine tissues using HHP and SDS methods showed tissue regeneration and epithelial cell migration into the implanted area (Fig. 9B, C). In both reconstructed uterine tissues, regeneration of stromal layers underneath the implant area was detected with vimentin staining (Fig. 9E, F). Similarly, the regeneration of the smooth muscle layer was observed for HHP- and SDS-decellularized tissues (Fig. 9H, I). CD31 staining revealed blood vessels present in the reconstructed uterine tissues underneath both decellularized tissues (Fig. 9K, L). From the quantitative analysis, DNA contents per dry weight in the SDS (3.91 ± 0.72 × 10^-4 mg/mg) and HHP (4.53 ± 0.84 × 10^-4 mg/mg) groups were 46% and 38% lower than those in the sham group (7.30 ± 1.23 × 10^-4 mg/mg, Fig. 9M). Masson's Trichrome staining was used to detect the recovery of the collagen proteins in the implant that were denatured during decellularization. In both SDS (Fig. 10B) and HHP (Fig.
10C) samples, the aniline blue intensity was comparable to the sham group (Fig. 10A) signifying the growth of collagen protein from the regenerated tissue into the implant. Verhoeff's Van Gieson staining was used to observe the restoration of elastin protein in the implant. Similar to the Masson's Trichrome staining results, the elastin protein in SDS (Fig. 10E) and HHP (Fig. 10F) groups increased and are comparable to the sham group (Fig. 10D) 30 days after transplantation. The quantitative results of HP and elastin contents are presented in Fig. 10G and 10H, respectively. The HP contents of SDS (4.1460.56610 23 mg/mg) and HHP (3.7660.23610 23 mg/mg) were at the same level as the sham groups (4.8662.85610 23 mg/mg, Fig. 10G). On the other hand, there was a slight decrease in the elastin contents of SDS (0.8060.12610 21 mg/mg) and HHP (0.8160.15610 21 mg/mg) samples compared to the sham (1.0260.08610 21 mg/mg) samples (Fig. 10H); although, with both the HP and elastin quantitative results, there are no significant differences between all samples. The mechanical properties and thickness of the implanted tissues are shown in Fig. 11A (Fig. 11A). Similarly, regarding of the rupture strength, large differences were not detected between the sham (0.09860.007 MPa), SDS (0.10460.007 MPa), and HHP (0.08660.014 MPa) tissues. Also, in comparison to the thickness of sham (1.3860.12 mm), SDS (1.2960.07 mm) and HHP (1.7060.06 mm) was at the same level. Responsiveness of the implant to ovarian hormone was ascertained by using Ki67, a marker for proliferating cells. Both SDS (Fig. 12B) and HHP (Fig. 12C) reconstructed uterine tissues had a positive Ki67 staining in the luminal epithelium and negative staining in the stroma, which is the normal proliferation pattern in the proestrus uterus. A higher number of proliferative cells in stromal layer were detected as brown colored particles in HHP group. Evaluation of the ER expression (dark orange staining), which reveals the responsiveness to the ovarian hormone for pregnancy in the reconstructed tissues, showed, as expected, ER-positive immunostaining both in the luminal epithelium and the stromal cells for both samples from the SDS (Fig. 12E) and HHP (Fig. 12F) group. Positive staining of the glandular epithelium in HHP (Fig. 12E) and SDS (data not shown) samples was exhibited the same as sham samples. Thirty days after the transplantation of decellularized tissues, we evaluated fertility of the rats on day 21 of pregnancy. As presented in Fig. 13, the numbers of fetuses were comparable among the sham, SDS and HHP groups (1.4460.99, 0.8860.43 and 1.1360.58, respectively). Discussion Decellularized matrices have been used for the treatment of several organs over biomaterials due to its practicality and lower immune reactivity. To our knowledge, we are the first to report on decellularized matrices for uterine tissue engineering. The optimization of conditions for decellularization is essential, so various methods for decellularization were evaluated for both the efficiency of cell removal and the preservation of the physical, chemical and biological integrity of rat uterine tissues. In this research, three kinds of decellularization methods were selected: SDS due to its efficiency in removing cells from dense material, HHP for its capability to preserve collagen and elastin, as well as, Triton-X because of its ability to decellularized thick tissues. 
From the histological results, decellularization using SDS was the most effective in removing cells within the smooth muscle layers. SDS decellularizes the tissue through diffusion from the smooth muscle layer towards the epithelial layer, so the application time used in this study may have been too short for the detergent to reach the epithelial layer. Although prolonging the decellularization time results in greater cell removal (Fig. 2C vs. 2D), the detergent would also denature or reduced collagen and elastin proteins in the tissue in the process (Fig. 3C vs. 3D and 4C vs. 4D). Similar to SDS, Triton-X also utilizes diffusion for decellularization. At the conditions used in this study, Triton-X did not manage to penetrate the uterine tissue all the way to the epithelial layers. Compared to SDS at a similar cell removal efficiency, Triton-X required more time (Fig. 2D vs. 2I). Moreover, Triton-X severely denatured or reduced the collagen and elastin protein within the matrix (Fig. 3D vs. 3I and 4D vs. 4I). Thus, for uterine tissues, Triton-X was deemed as unsuitable. In HHP, decellularization is achieved by applying a high pressure to the tissue to disrupt the cell membrane. Thus, in principle, during the decellularization process, cells can effectively be removed from all layers without denaturing or reducing the proteins. However, as previously studied by Funamoto et al. [29], the pressure-temperature relationship during HHP is an important factor. It directly determines whether the tissue reaches the freezing zone where water in the tissue is converted to ice. Ice crystal formation can cause modification to tissue structure in the form of scissions of the collagen fibers. The freezing zone can be avoided by using an onset temperature of 30uC, as shown by the higher collagen fiber content in Fig. 3G compared to Fig. 3E, which used an onset temperature of 10uC. A similar result of higher collagen content in conjunction with a higher onset temperature was seen by a more intense VVG staining of the HHP 30-4 sample (Fig. 4G) when compared to the HHP 10-4 sample (Fig. 4E). The reduction of the collagen and elastin contents within the tissue processed at 30uC (Fig. 3G, H and Fig. 4G, H) are the result of the enzyme contained in the washing process. Since the enzyme activity is higher at 37uC than 4uC, a lower collagen and elastin content was detected in samples washed at 37uC (Fig. 3H and 4H). A similar washing procedure was also used for samples with a 10uC onset temperature (Fig. 3E-F and Fig. 4E-F), and a collagen and elastin reduction due to the enzyme activity was also observed in samples washed at 37uC (Fig. 3F and Fig. 4F). Based on these results, for the purpose of uterine tissue decellularization, HHP (specifically HHP 30-4) offers a higher cell removal efficiency and ECM preservation. Additionally, protein quantification, TEM and mechanical tests were used to objectively compare the residual content between different decellularization methods. The optimal conditions of each decellularization method (excluding Triton-X) were determined to be: SDS 1% for 1 hour and HHP 30-4. Comparison of the residual collagen content (Fig. 5C, Fig. 6) showed that HHP didn't alter the collagen fibers significantly, whereas SDS destroyed the collagen fibers. Further investigation of the collagen structure using TEM elucidated that even though the denaturation of the collagen protein reduced the number of fibers, the structure of the residual collagen fibers was preserved. 
These results indicate that there was a minimal overall change in the structural properties during both decellularization processes. Residual elastin content (Fig. 5D) showed that elastin fibers, unlike collagen, were susceptible to both pressure and chemical. The values of the Young's Modulus in SDS samples were significantly higher than the native and HHP samples before transplantation, as shown in Fig. 7A and B. Since SDS is a detergent, it is possible that the SDS treatment resulted in the denaturation of collagen fibers. From the TEM analysis ( Fig. 6A-C), the number of collagen fibers seemed to be reduced in the SDS samples, but collagen structure between SDS and HHP samples did not differ. Thus, as a possible cause, it was considered that the change in higher-ordered structure of collagen fibers such as collagen-collagen interactions induced denaturation which influenced in the stiffness of SDS samples. DNA quantification (Fig. 5A) showed that while both decellularization methods were effective in removing cells, it was impossible to completely remove the DNA content from the tissue. To remove the DNA completely, harsher conditions for decellularization, such as a longer treatment time with HHP or SDS, and higher concentrations of SDS, should be considered. Nonetheless, due to the required collagen and elastin fibers in the matrix in order to maintain the mechanical properties of the tissue, more extreme conditions were not deemed appropriate. Despite the residual DNA content, the in vivo study of the decellularized tissue proves that the amount of DNA removed was sufficient in preventing an immune reaction. Gross examination of the in vivo study showed the integration of decellularized tissue with the native tissue within 30 days. No obstruction was observed at the implant/native tissue anastomoses as opposed to a study by Jonkman et al. [32] in which the artery were occluded by blood clot due to failure of vascularization into scaffold. Patency of the reconstructed uterus was similar to native tissue proving decellularized tissue to be superior to porcine SIS graft [33] where samples larger than 1 mm were found to be twisted due to lack of mechanical strength. Thickness and DNA content of the SDS and HHP group prior to ( Fig. 7C and 5B) and after ( Fig. 11C and 9M) the in vivo study showed an increase, signifying the existence of tissue regeneration. From H&E staining (Fig. 9B, C), it was evident that the mode of uterine reconstruction for the SDS group was re-growth of tissue from native uterine tissue (tissue regeneration) underneath the decellularized matrix. However, in the HHP group, it was a combination of cell migration and tissue regeneration. Qualitatively, thicker tissue regeneration underneath the decellularized tissue was observed in the SDS group. The faster regeneration in SDS group is hypothesized to be the result of mechanical difference between the decellularized and native tissue, which creates a mechanical stimulus for cell proliferation and growth. Conversely, due to similarities in both the protein content and the mechanical properties of the HHP-decellularized and native tissue, the cells chose to migrate into the implant. However, due to the lack of microvasculature to supply nutrition, cell migration was limited to the area near the native tissue. In SDS, cell migration was not observed, which was potentially caused by the lack of protein for cell-matrix signaling and cell binding. 
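The stiffness and strength comparisons discussed in this section rest on the uniaxial tensile tests of the Methods. For reference, a sketch of how a raw force–displacement record is reduced to a Young's modulus and rupture strength is given below; the sample dimensions and the force record are invented numbers, not data from this study.

```python
# Sketch of stress/strain reduction for a uniaxial tensile test.
# All input numbers are invented for illustration.
import numpy as np

width, thickness, length0 = 5.0e-3, 0.6e-3, 10.0e-3   # m (assumed sample geometry)
displacement = np.linspace(0.0, 2.0e-3, 50)            # m, increase in fixture separation
force = 0.9 * displacement / displacement.max()        # N, fake linear record up to rupture

stress = force / (width * thickness)                   # Pa (engineering stress)
strain = displacement / length0                        # dimensionless
modulus = np.polyfit(strain[:20], stress[:20], 1)[0]   # slope of the initial linear part
print(f"E = {modulus / 1e6:.2f} MPa, rupture = {stress.max() / 1e6:.2f} MPa")
```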
At day 30, both the stromal and smooth muscle layer of the SDS reconstructed uterus showed a similar thickness to the sham group, which was representative of native tissue. In contrast, a similar study conducted by Li et al. [18] using a collagen scaffold showed that the regeneration of the smooth muscle layer to a thickness similar to sham group could only be achieved after 90 days of transplantation. Based on this, SDS decellularized tissue seems to encourage a higher regeneration rate compared to a collagen scaffold. As mentioned previously, the collagen and elastin content is closely related to mechanical strength. Through the recovery of the collagen and elastin contents, the mechanical properties of the decellularized tissues became comparable to the sham group. However, the mechanical properties of the sham were lower than the native tissue (Fig. 7A, B and Fig. 11A, B), which may be due to imperfect surgical procedure causing tissue adhesion. During the tissue extraction, there was a noticeable increase in adipose tissue surrounding the uterine tissues, especially at the implant region. Most of the adipose tissue was trimmed off, but complete removal was not possible. As a result, the adipose tissue contributed to a higher overall implant thickness (Fig. 11C), which was supported by the fact that the thickness of the implanted tissues including the sham was higher than the one of native tissue (Fig. 7C and 11C). With increased thickness, the cross-sectional area of the sample would be increased and the Young's modulus and rupture stress would be reduced. Additionally, the non-degradable polypropylene sutures in the implanted tissues act as a point defect, which will cause a decrease in the resistance to loads during tensile tests. Responsiveness of the reconstructed uterus to the ovarian hormone was evaluated by immunostaining. It is known that estrogen governs the cell proliferation in the proestrus uterus. Therefore, Ki67 positive staining suggests that the responsiveness to estrogen is normal in the reconstructed tissues. Since estrogen acts via ER and governs the estrus cycle and uterine cell composition for pregnancy, ER positive staining indicates that SDS-and HHP-decellularized matrices provide the reconstructed uterus with normal hormone responsiveness and functions. To confirm the functionality of reconstructed uterus, we examined the fertility of the female rats with transplantation. Based on the numbers of fetuses on day 21 pregnancy, both SDS and HHP groups showed similar fertility to the sham group, suggesting the successful reconstruction of the uterus by both decellularization methods. Conclusions In this study, we focused on decellularized tissues for segmental uterine reconstruction; however, these methods are also applicable for larger samples such as the whole uterus. According to the biochemical and mechanical evaluation, decellularization using solutions such as SDS and Triton-X is highly dependent on various diffusion factors and their interaction with the sample; thus, the optimization of the solution's concentration and application time (depending on the sample's thickness, pore size, etc) is necessary to avoid under or over decellularization. Due to the variation in organ size from one person to another, the variables for chemical decellularization must be individually optimized for each case. In contrast, in HHP decellularization there is no need to customize the decellularization conditions as they are independent of the sample's size and structure. 
Moreover, decellularization by HHP causes minimal protein denaturation in contrast to SDS and Triton-X, resulting in the superior preservation of the ECM content. In addition, residual SDS and Triton-X within the tissue can introduce a toxic response from the host. This problem can be avoided in HHP since no harmful chemicals are involved in the decellularization process. Therefore, HHP decellularization is the better option as a scaffold in uterine regeneration. Immunohistochemical staining of the in vivo study showed that the regenerated tissues were positive for Ki67 and ER in both the SDS and HHP group. Specifically in the HHP group, we found cell migration near the native/implant anastomoses, showing the possibility for a novel reconstruction method. Functionality in the form of hormone responsiveness and mechanical properties revealed the potential of reconstructed uteruses to behave as a native uterus. In the fertility tests, we found successful pregnancy in both the SDS and HHP groups similar to the sham model in term of number of fetuses. These findings indicate the possibility of a regular pregnancy and childbearing in the uteruses reconstructed by decellularized tissues. All of the above results support our hypothesis that decellularized tissues could be used as a novel scaffold for clinical applications in uterine tissue engineering. The use of decellularized tissue in whole uterine decellularization could potentially lead to a tissue engineered uterus with a low immune response, which could be an alternative to uterine transplantation by a donor. This would be beneficial for women who have past history of hysterectomy or uterine malformation and want to experience the childbearing process.
2017-07-19T19:44:00.743Z
2014-07-24T00:00:00.000
{ "year": 2014, "sha1": "46ed8ab2c57ecf71bec7c8e7f12a564331dfcbe9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0103201", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46ed8ab2c57ecf71bec7c8e7f12a564331dfcbe9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
32152610
pes2o/s2orc
v3-fos-license
Chemotherapeutic Effect of Withaferin A in Human Oral Cancer Cells Withaferin A (WA) is a bioactive compound derived from a medicinal plant Withania somnifera and has potential therapeutic effects against various types of cancers. The purpose of this study is to investigate an apoptotic effect of WA and identify its molecular target in HSC-3 and HSC-4 human oral cancer cell lines using Trypan blue exclusion assay, DAPI staining and western blotting. WA inhibited cell viability and induced apoptosis in a concentrationor time-dependent manner, as evidenced by induction of nuclear condensation and fragmentation, activation of caspase 3 and poly (ADP-ribose) polymerase (PARP) cleavage. WA-induced apoptosis was partly diminished by Z-VAD, a pancaspase inhibitor. WA also increased Bim and Bax protein in HSC-3 and HSC-4 cells, respectively. These results suggest that WA may be a potential chemotherapeutic drug candidate against human oral cancer. Introduction Anticancer agents including cisplatin and pingyangmycin have been generally used in head and neck and tongue cancer treatment by suppressing tumor size and inhibiting metastasis [1] [2].However, it has been reported to have toxicity, side effect and resistance to apoptosis in ovarian and tongue cancer [2] [3].The failure of current therapies in adrenocortical carcinoma has been reported to be correlated with cytotoxic drugs containing cisplatin, etoposide, mitotane and doxorubicin [4].For this reason, novel agents with low toxicity are necessary to be developed for the treatment of various cancers including oral cancer. Natural compounds derived from plant sources have steadily been known as invaluable source of therapeutic agents [5].They were also used as traditional medicines for the treatment of various cancers because of their potential anti-cancer effects [6]- [8].Withania somnifera is a plant with bioactive compounds, which has been known as winter cherry or Indian ginseng [9].WA, derived from Withania somnifera, has therapeutic effects, such as anti-inflammatory, anti-angiogenesis and anti-cancer effects in various cancers [9]- [11].It has been reported that WA suppressed cell growth and induced apoptosis in breast cancer cells in vitro and in vivo [12].WA also induced apoptosis in human melanoma cells [13].Thus, WA has a great possibility to become an effective cancer therapy. Bcl-2 family proteins have been characterized as cell survival factors (Bcl-2, Bcl-X L , Mcl-1 and so on) and pro-apoptotic factors (Bak, Bax, Bim, Bid and so on), which can regulate mitochondria-dependent apoptosis [14].These proteins include at least one of four BCL-2 homology domains, BH1 to BH4 [15].BH3-only proteins, Bim and Bid act upstream of Bax and Bak with BH1-BH3 domains to induce apoptosis [16].It has been demonstrated that Bim activates the apoptotic proteins Bax and Bak, leading to cytosolic release of cytochrome c from mitochondria and induce apoptosis [17].Recently, WA-induced apoptosis has been studied in various cancer cell lines, such as breast [18], prostate [19], pancreatic [20], ovarian [21], lung [22], head and neck cancer cell lines [23].However, the detailed molecular target behind the apoptosis of human oral cancer cells is not clear yet.Therefore, it is valuable to investigate the molecular target of WA-induced apoptosis in human oral cancer cells.In this study, our group investigated the efficacy of WA through the regulation of Bcl-2 family proteins in human oral cancer cells. 
Cell Culture and Chemical Treatment HSC-3 and HSC-4 cells were provided by Hokkaido University (Hokkaido, Japan).Cells were cultured in DMEM supplemented with 10% FBS and antibiotics at 37˚C in 5% CO 2 incubator.All experiments were prepared in cells cultured at 50% -60% confluence.Withaferin A (Sigma, St. Louis, Mo, USA) was dissolved in 0.1% DMSO (vehicle control) and stored at −20˚C.Final concentration of DMSO did not exceed 0.1%.Z-VAD (Minneapolis, Minnesota, USA) was treated as a pan-caspase inhibitor into cell lines. Trypan Blue Exclusion Assay The growth inhibitory effect of WA was determined with trypan blue solution (Gibco, Paisley, UK).Cells were stained with trypan blue (0.4%), and then viable cells were counted using a hemocytometer. Western Blot Analysis Whole-cell lysates were prepared with lysis buffer and protein concentration in each sample was measured using a DC Protein Assay Kit (BIO-RAD Laboratories, Madison, WI, USA).After normalization, equal amounts of protein were separated by SDS-PAGE and then transferred to Immun-Blot™ PVDF membranes.The membranes were blocked with 5% skim milk in TBST at RT for 2 hr, and incubated with primary antibodies and corresponding HRP-conjugated secondary antibodies.Antibodies against cleaved PARP, cleaved caspse-3, Bax and Bim were purchased from Cell Signaling Technology, Inc., (Charlottesville, VA, USA).Actin antibody was obtained from Santa Cruz Biotechnology, Inc., (Santa Cruz, CA, USA).The immunoreactive bands were visualized by ImageQuant™ LAS 500 (GE Healthcare Life Sciences, Piscataway, NJ, USA). Statistical Analysis Student's t-test was used to determine the significance of differences between the control and treatment groups; values of p < 0.05 were considered significant. WA Reduces the Viability in HSC-3 and HSC-4 Cells To explore the potential anti-cancer effects of WA in human oral cancer cells, we examined the effects of WA by cell counting after the treatment with DMSO or various concentrations for 24 hr as well as certain concentration (1 μM for HSC-3 cells and 0.8 μM for HSC-4 cells) for each indicated time points (0, 3, 6, 12 and 24 hr).As shown in Figure 1, the viability of HSC-3 and HSC-4 cells was notably decreased in a concentration-and time-dependent manner.These results indicate that WA can decrease cell viability in human oral cancer cells. WA Increased Apoptosis in HSC-3 and HSC-4 Cells To determine whether the growth inhibitory effect of WA was associated with apoptosis, we performed using DAPI staining.As presented in Figure 2, the exposure of cells to WA exhibited a noticeable increase in the distinct features of apoptotic cells such as chromatin condensation and nuclear fragmentation.These results suggest that growth inhibitory effect of WA may be associated with induction of apoptosis. WA-Induced Apoptosis Is Associated with Activation of Caspase 3 in HSC-3 and HSC-4 Cells Next, we carried out western blot analysis using antibody against cleaved PARP and caspase 3. 
The results showed that the augmentation of cleaved PARP and caspase 3 by WA was in a concentration-dependent in HSC-3 and HSC-4 cells (Figure 3(a)).Also, the results from western blot analysis showed that exposure of cells to WA caused a markedly induction of cleaved PARP in a time-dependent manner (Figure 3 WA Induces Apoptosis through Regulation of Bim and Bax To clarify whether the apoptotic effect of WA is related to the regulation of Bcl-2 family proteins, we examined protein levels of Bim and Bax.As shown in Figure 4, WA increased Bim expression in HSC-3 cells and caused an increase in Bax in HSC-4 cells.These results suggest that WA-induced apoptosis may be associated with the regulation of several Bcl-2 family proteins in a cell line-specific manner. Discussion The natural synthetic or biological compounds have been used to treat cancers.Numerous studies have been demonstrated that natural compounds play critical roles in the induction of apoptosis in various cancers such as gastric, breast, lung and others cancers [24]- [26].For example, a cacalol derived in Asian herbal plant has exerted anti-proliferative and apoptotic effect in breast cancer cells [27].Eugenol has reported to induce apoptosis in human HT 29 colon cancer cells [28].Previously, our group has demonstrated that several natural compounds such as Codonopis lanceolata and Tricholoma matsutake extracts exerted apoptotic activities through the augmentation of pro-apoptotic proteins, Bid and Bax levels in oral cancer [29].Herein the present study, we showed that a natural compound, WA reduces cell viability and induced apoptosis in human oral cancer cells (Figures 1-3) indicating that WA may have anti-carcinogenic activities against oral cancer.WA has been reported as promising anti-cancer drug candidate due to its cytotoxic and apoptotic properties [30].WA is known having a critical role in the inhibition of abnormal cell proliferation occurring in oral carcinogenesis [31].Furthermore, anti-cancer effect of WA has demonstrated that it reduced cell viability and proliferation in adrenocortical carcinoma [32].Thus, our results proved that WA has growth inhibitory and apoptotic effect in oral cancer cells. 
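The immunoblots above are interpreted qualitatively. If one wanted to put numbers on the concentration dependence, a common approach (hypothetical here, not performed in this study) is densitometry normalized to the actin loading control and expressed as fold change over the DMSO control; the band intensities below are invented.

```python
# Hypothetical densitometry normalization (not part of this study): band
# intensities are divided by the actin signal and expressed relative to DMSO.
bands = {
    "DMSO":      {"cleaved_PARP": 1200, "actin": 9800},
    "WA 0.8 uM": {"cleaved_PARP": 3100, "actin": 9600},   # invented intensities
    "WA 1.0 uM": {"cleaved_PARP": 6400, "actin": 9500},
}

ref = bands["DMSO"]["cleaved_PARP"] / bands["DMSO"]["actin"]
for condition, v in bands.items():
    fold = (v["cleaved_PARP"] / v["actin"]) / ref
    print(f"{condition}: {fold:.1f}-fold cleaved PARP vs DMSO")
```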
The apoptotic pathway includes the extrinsic (cytoplasmic) and the intrinsic (mitochondrial) pathways [33]. Permeabilization of the mitochondrial outer membrane is closely associated with the Bcl-2 family proteins that regulate the integrity of the mitochondria [34]. In particular, natural compounds have been reported to regulate the expression of the Bcl-2 family member Bim and to induce apoptosis through cytochrome c release [35]. For Antrodia camphorata extracts, activation of caspase-3, -8 and -9 and an increase in the cytosolic level of cytochrome c were accompanied by increased expression of Bak, Bad and Bim in HeLa and C-33A cells [36]. Rhein also increased the expression of Bim and FOXO3a in MCF-7 and HepG2 cells during apoptosis [37]. Our group has likewise demonstrated that dibenzylideneacetone, an analogue of curcumin, enhanced Bax expression, resulting in apoptosis in oral cancer cells [38]. In light of the potential effect of natural compounds on Bcl-2 family proteins, we assumed that WA could have pro-apoptotic properties mediated by the regulation of Bcl-2 family proteins. Indeed, WA-induced apoptosis in human melanoma cells has been reported to correlate with the mitochondrial pathway, which is regulated by the Bcl-2 family proteins Bax and Bak, and with a caspase-dependent pathway [13]. In this study, we also examined whether WA-induced apoptosis affects Bcl-2 family proteins, and the results demonstrated that Bim and Bax were affected by WA in HSC-3 and HSC-4 cells, respectively (Figure 4). These results suggest that the up-regulation of Bim and Bax may be required for WA-induced apoptosis in oral cancer cells. Therefore, further investigation in future studies would be valuable.

Conclusion
In conclusion, our results show that WA reduces cell viability and upregulates the expression of Bim and Bax, leading to apoptosis in HSC-3 and HSC-4 oral cancer cells. Thus, these findings suggest that WA is an attractive chemotherapeutic drug candidate for the therapy of oral cancer, although its anti-tumorigenic effect still needs to be confirmed in an in vivo model.

Figure 1. Withaferin A inhibits cell viability in human oral cancer cells. HSC-3 and HSC-4 cells were treated with DMSO or multiple concentrations of WA for 24 hr. A, The effect of WA on cell viability was examined using the trypan blue exclusion assay. B, Each cell line was harvested at different time points (0, 3, 6, 12 and 24 hr). Graphs show the mean ± S.D. of triplicate experiments; significance (p < 0.05) compared with the DMSO-treated group is indicated (*).

Figure 2. WA increases nuclear fragmentation and condensation in human oral cancer cells. HSC-3 and HSC-4 cells were treated with DMSO or various concentrations of WA for 24 hr, or with WA for the indicated time points. A and B, Nuclear condensation and DNA fragmentation were stained with DAPI solution as described in Materials and Methods (×400). DAPI-stained cells were observed by fluorescence microscopy.

Figure 4. WA regulates Bim and Bax expression in HSC-3 and HSC-4 cells. HSC-3 and HSC-4 cells were treated with DMSO or various concentrations of WA for 24 hr. A and B, Cell lysates were analyzed by western blot using antibodies against Bim and Bax.
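For reference, a minimal sketch of the viability analysis described in the Methods (per-replicate viability from trypan blue counts, mean ± S.D. of triplicates, and Student's t-test against the DMSO control) could look like the following. The counts below are placeholders for illustration only and are not data from this study.

```python
# Hedged sketch of the viability analysis described in the Methods section.
# All counts below are placeholders, NOT data from this study.
import numpy as np
from scipy import stats

def viability_percent(live_counts, dead_counts):
    """Per-replicate viability (%) from trypan blue hemocytometer counts."""
    live = np.asarray(live_counts, dtype=float)
    dead = np.asarray(dead_counts, dtype=float)
    return 100.0 * live / (live + dead)

# Hypothetical triplicate counts for the DMSO control and a WA-treated group
control = viability_percent([95, 92, 97], [5, 8, 6])
treated = viability_percent([55, 60, 58], [45, 40, 41])

# Mean ± SD of triplicates, as reported in the figure legends
print(f"control: {control.mean():.1f} ± {control.std(ddof=1):.1f} %")
print(f"WA:      {treated.mean():.1f} ± {treated.std(ddof=1):.1f} %")

# Student's t-test between control and treatment; p < 0.05 taken as significant
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```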
Princivalleite, Na(Mn2Al)Al6(Si6O18)(BO3)3(OH)3O, a new mineral species of the tourmaline supergroup from Veddasca Valley, Varese, Italy

Abstract
Princivalleite, Na(Mn2Al)Al6(Si6O18)(BO3)3(OH)3O, is a new mineral (IMA2020-056) of the tourmaline supergroup. It occurs in the Veddasca Valley, Luino area, Varese, Lombardy, Italy (46°03'30.74''N, 8°48'24.47''E) at the centre of a narrow (2-3 cm wide) vertical pegmatitic vein, a few metres long, crosscutting a lens of flaser gneiss. Crystals are subhedral (up to 10 mm in size), azure with a vitreous lustre, conchoidal fracture and white streak. Princivalleite has a Mohs hardness of ~7, a calculated density of 3.168 g/cm3 and is uniaxial (-). Princivalleite has trigonal symmetry, space group R3m, a = 15.9155(2) Å, c = 7.11660(10) Å, V = 1561.15(4) Å3 and Z = 3. The crystal structure was refined to R1 = 1.36% using 1758 unique reflections collected with MoKα X-ray intensity data. Crystal-chemical analysis resulted in the empirical crystal-chemical formula X(Na0.54Ca0.11□0.35)Σ1.00Y(Al1.82Mn2+0.84Fe2+0.19Zn0.07Li0.08)Σ3.00Z(Al5.85Fe2+0.13Mg0.02)Σ6.00[T(Si5.60Al0.40)Σ6.00O18](BO3)3O(3)[(OH)2.71O0.29]Σ3.00O(1)[O0.66F0.22(OH)0.12]Σ1.00, which recast in its ordered form for classification purposes is: X(Na0.54Ca0.11□0.35)Σ1.00Y(Al1.67Mn2+0.84Fe2+0.32Zn0.07Mg0.02Li0.08)Σ3.00ZAl6.00[T(Si5.60Al0.40)Σ6.00O18](BO3)3V[(OH)2.71O0.29]Σ3.00W[O0.66F0.22(OH)0.12]Σ1.00. Princivalleite is an oxy-species belonging to the alkali group of the tourmaline supergroup. The closest end-member compositions of valid tourmaline species are those of oxy-schorl and darrellhenryite, to which princivalleite is related by the substitutions Mn2+ ↔ Fe2+ and Mn2+ ↔ 0.5Al3+ + 0.5Li+, respectively. Princivalleite from Veddasca Valley is a geochemical anomaly, which originated in a B-rich and peraluminous anatectic pegmatitic melt formed in situ, poor in Fe and characterised by reducing conditions in the late-stage metamorphic fluids derived from the flaser gneiss. The Mn-enrichment in this new tourmaline is due to the absence of other minerals competing for Mn, such as garnet. A formal description of the new tourmaline species princivalleite is presented here. The mineral is named after Francesco Princivalle (b. 1956), Professor of Mineralogy at the Department of Mathematics and Geosciences, University of Trieste, Italy, for his contributions to the understanding of the crystal chemistry and geothermometry of several mineral groups such as spinels, olivines and pyroxenes. The new species and the name (symbol Pva) have been approved by the International Mineralogical Association's Commission on New Minerals, Nomenclature and Classification (IMA2020-056, Bosi et al., 2020). Holotype material is deposited in the collections of the Natural History Museum of Milano, Italy, catalogue number M38850.

Occurrence
The holotype specimen was collected in 2003 by one of the authors (FP), along the cut of a small road at the eastern side of the Curiglia Village, Veddasca Valley, Luino area, Varese, Lombardy, Italy (46°03'30.74''N, 8°48'24.47''E, ~730 m above sea level). The Veddasca Valley is characterised by rocks belonging to the 'Serie dei Laghi', a structural unit which is part of the tectonic unit known as Massiccio dei Laghi, comprising the central-western sector of the crystalline basement of the Southern Alps (Pezzotta and Pinarelli, 1994).
The 'Serie dei Laghi' consists of paragneiss (pelites, sandstones, greywackes and metatufites) and scattered orthogneissic bodies, affected by amphibolite-facies Hercynian metamorphism (Boriani et al., 1990). During detailed field mapping performed by one of the authors in 1990-1991, a few narrow, non-metamorphosed pegmatite veins were discovered near the Curiglia village in the Veddasca valley. These pegmatites, which have never been reported in the literature, crosscut some heavily deformed metamorphic rocks (flaser gneiss) perpendicularly to their foliation, and probably originated from minor partial melting that occurred during the uplift of the tectonic units, during the latest stages of the Hercynian metamorphic event. At the centre of one of these small veins (2 to 3 cm wide and a few metres long), vertically crosscutting the flaser gneiss at Curiglia, dispersed grains and crystals, up to 1 cm long, of azure princivalleite and oxy-schorl are quite common. The pegmatitic vein is composed of muscovite aggregates, with blades mostly oriented perpendicular to the walls, together with quartz, albitic plagioclase and minor K-feldspar. In addition to tourmaline, other accessories are rare small pyrite crystals and violet glassy cordierite grains.

Appearance, physical and optical properties
The princivalleite crystals show subhedral habits, up to ~10 mm, and are azure with a vitreous lustre (Fig. 1). Princivalleite has a white streak and shows no fluorescence. It has a Mohs hardness of ~7 and is brittle with a conchoidal fracture. The calculated density, based on the empirical formula and the unit-cell volume refined from single-crystal X-ray diffraction (XRD) data, is 3.168 g/cm3. In thin section, princivalleite is transparent; in transmitted light, pleochroism was not visible in the thin-section fragment investigated. Princivalleite is uniaxial (-) with refractive indices ω = 1.650(5) and ε = 1.635(5), measured by the immersion method using white light from a tungsten source. The mean index of refraction, density and chemical composition lead to an excellent compatibility index (1 - Kp/Kc = 0.024) (Mandarino, 1981).

General comment
The present crystal-structure refinement (SREF), electron microprobe (EMP) and micro-laser induced breakdown spectroscopy (μ-LIBS) data were all obtained from the same crystal fragment. However, complementary experimental data were recorded from coexisting crystals. Small differences in composition occur between these princivalleite crystals (see text).

Micro-laser induced breakdown spectroscopy
Lithium analysis was performed using 110 mJ of energy per pulse with a double-pulse Q-switched (Nd-YAG, λ = 1064 nm) laser with a 1 μs delay between the two pulses. The small spot size (7-10 μm) was obtained using a petrographic optical microscope (objective lens = 10X, NA = 0.25 and WD = 14.75 mm). The μ-LIBS spectra were acquired using an AvaSpec Fiber Optic Spectrometer (390-900 nm with 0.3 nm resolution) with a delay of 2 μs after the second pulse and were integrated for 1 ms. Quantitative data were obtained by generating a linear regression using the intensity of the main Li emission line (670.706 nm, corresponding to the resonance transition 1s2 2s → 1s2 2p), which is particularly sensitive to Li content. The linear fit was made according to Bosi et al. (2021).
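A μ-LIBS calibration of the kind described above is, in essence, a linear regression of the Li emission-line intensity against reference Li contents, followed by inversion of the fitted line for the unknown sample. The sketch below illustrates the idea only; the standards, intensities and the Li2O scale are hypothetical and are not the calibration actually used here.

```python
# Hedged sketch of a linear LIBS calibration: fit the Li I 670.706 nm line
# intensity against reference Li contents, then invert for an unknown.
# The numbers below are illustrative placeholders, not the real calibration data.
import numpy as np

# Reference standards: assumed Li2O contents (wt.%) and measured line intensities (a.u.)
li2o_wt = np.array([0.0, 0.2, 0.5, 1.0, 1.5])
intensity = np.array([120.0, 980.0, 2300.0, 4600.0, 6900.0])

slope, intercept = np.polyfit(li2o_wt, intensity, 1)   # linear regression

def li2o_from_intensity(i):
    """Invert the calibration line to estimate Li2O (wt.%)."""
    return (i - intercept) / slope

unknown_intensity = 350.0
print(f"estimated Li2O ≈ {li2o_from_intensity(unknown_intensity):.2f} wt.%")
```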
Mössbauer spectroscopy
To determine the Fe3+/ΣFe ratio of princivalleite, a crystal fragment was ground under acetone and analysed using 57Fe Mössbauer spectroscopy with a conventional spectrometer system equipped with a 10 mCi point source and operated in constant-acceleration mode. Data were collected over 1024 channels and were folded and calibrated against the spectrum of an α-Fe foil. The spectrum (Fig. 2) was fitted using the software MossA (Prescher et al., 2012) with three absorption doublets consistent with Fe2+ (Table 2). No indication of absorption due to Fe3+ was observed.

Single-crystal infrared spectroscopy
Polarised Fourier-transform infrared (FTIR) absorption spectra were measured on a 35 μm thick, doubly polished single-crystal section oriented parallel to the c-axis. A Bruker Vertex spectrometer attached to a Hyperion 2000 microscope and equipped with a halogen lamp source, a CaF2 beamsplitter, a ZnSe wire-grid polariser and an InSb detector was used to collect spectra in the range 2000-13000 cm-1 at a resolution of 2 cm-1. Spectra recorded in polarised mode parallel to the crystallographic c-axis show a significant band at 3365 cm-1, a very intense band around 3500 cm-1, two weaker bands at 3632 and 3644 cm-1, and two very weak bands at 3662 and 3671 cm-1 (Fig. 3). As is typical for tourmaline spectra in the (OH) range, the main band is off-scale for the E||c direction due to excessive absorption. Spectra obtained perpendicular to the c-axis show considerably weaker bands. Note that the band at 3365 cm-1 is consistent with the presence of minor Al along with Si in [4]-fold coordination (Nishio-Hamane et al., 2014), whereas the comparatively weak bands above 3600-3650 cm-1, which is the region where bands due to (OH) at the W position (≡ O1 site) are expected (e.g. Gonzalez-Carreño et al., 1988; Bosi et al., 2015b), indicate small amounts of W(OH). On the basis of previous investigations (Bosi et al., 2012, 2016, 2021; Watenphul et al., 2016), the main broad FTIR band at ~3500 cm-1 is probably caused by the occurrence of the atomic arrangements 3[Y(Mn2+, Al) ZAl ZAl]-O3(OH)3, whereas the bands above 3600 cm-1 may be caused by arrangements involving (OH) at the W position.

Optical absorption spectroscopy (OAS)
Polarised optical absorption spectra of princivalleite (Fig. 4) were acquired at room temperature on the same polished crystal that was used for the collection of the infrared spectra. An AVASPEC-ULS2048X16 spectrometer, connected via a 400 μm UV fibre cable to a Zeiss Axiotron UV-microscope, was used. A 75 W xenon arc lamp was used as the light source, and Zeiss Ultrafluar 10× lenses served as objective and condenser. A UV-quality Glan-Thompson prism with a working range from 40000 to 3704 cm-1 was used as a polariser. The recorded spectra show two broad absorption bands at 13500 and 8900 cm-1. The weak polarisation of these bands is explained by the absence of Fe3+ (e.g. Mattson and Rossman, 1987) in the sample, and consequently the bands mark pure d-d transitions in [6]-coordinated Fe2+. This assignment agrees with the Fe valency and site distribution observed in the Mössbauer spectra of the sample. Additional sharp absorption bands observed in the E||c spectrum in the range 6700-7000 cm-1 mark overtones of the fundamental (OH)-stretching modes. Weak and relatively sharp absorption bands at ~18000, ~22500, ~24000 and ~27500 cm-1 are related to spin-forbidden electronic transitions in [6]-coordinated Mn2+ (e.g.
Hålenius et al., 2007).

Single-crystal structure refinement
A representative azure crystal of princivalleite from Veddasca Valley was selected for X-ray diffraction measurements on a Bruker KAPPA APEX-II single-crystal diffractometer (Sapienza University of Rome, Earth Sciences Department), equipped with a CCD area detector (6.2 × 6.2 cm active detection area, 512 × 512 pixels) and a graphite-crystal monochromator, using MoKα radiation from a fine-focus sealed X-ray tube. The sample-to-detector distance was 4 cm. A total of 1621 exposures (step = 0.4°, time/step = 20 s) covering a full reciprocal sphere with a redundancy of ~12 was collected using ω and φ scan modes. Final unit-cell parameters were refined using the Bruker AXS SAINT program on reflections with I > 10σ(I) in the range 5° < 2θ < 75°. The intensity data were processed and corrected for Lorentz, polarisation and background effects using the APEX2 software program of Bruker AXS. The data were corrected for absorption using a multi-scan method (SADABS, Bruker AXS). The absorption correction led to an improvement in Rint. No violation of R3m symmetry was detected. Structure refinement was done using the SHELXL-2013 program (Sheldrick, 2015). Starting coordinates were taken from Bosi et al. (2015a). The variable parameters were: scale factor, extinction coefficient, atom coordinates, site-scattering values (for the X, Y and Z sites) and atomic-displacement factors. The fully ionised oxygen scattering factor and neutral-cation scattering factors were used. In detail, the X site was modelled using the Na scattering factor. The occupancy of the Y site was obtained considering the presence of Al versus Mn, and that of the Z site with Al versus Fe. The T, B and anion sites were modelled, respectively, with Si, B and O scattering factors and with a fixed occupancy of 1, because refinement with unconstrained occupancies showed no significant deviations from this value. The position of the H atom bonded to the oxygen at the O3 site in the structure was taken from the difference-Fourier map and incorporated into the refinement model; the O3-H3 bond length was restrained (by the DFIX command) to 0.97 Å, with the isotropic displacement parameter constrained to be equal to 1.2 times that obtained for the O3 site. There were no correlations greater than 0.7 between the parameters at the end of the refinement. Table 3 lists crystal data, data-collection information and refinement details; Table 4 gives the fractional atom coordinates and equivalent isotropic-displacement parameters, and Table 5 shows selected bond lengths. The crystallographic information file has been deposited with the Principal Editor of Mineralogical Magazine and is available as Supplementary material (see below).

Powder X-ray diffraction
A powder X-ray diffraction pattern for princivalleite was collected using a Panalytical X'Pert powder diffractometer equipped with an X'Celerator silicon-strip detector. The range 5-80° (2θ) was scanned with a step size of 0.017°, with the sample mounted on a background-free Si holder and using sample spinning. The diffraction data (for CuKα = 1.54059 Å), corrected using Si as an internal standard, are listed in Table 6. The program UnitCell (Holland and Redfern, 1997) was used to refine unit-cell parameters in the trigonal system: a = 15.8851(3) Å, c = 7.1041(2) Å and V = 1522.46(5) Å3.
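As a quick consistency check, the volume of a trigonal cell indexed on hexagonal axes is V = (√3/2)·a²·c; applying this to the single-crystal parameters quoted above reproduces the reported volume. A minimal sketch:

```python
# Consistency check of the trigonal (hexagonal axes) cell volume, V = (sqrt(3)/2)·a^2·c,
# using the single-crystal cell parameters quoted in the text.
import math

def hexagonal_volume(a, c):
    """Cell volume (Å^3) for a hexagonal/trigonal cell with parameters a, c in Å."""
    return math.sqrt(3.0) / 2.0 * a * a * c

# Single-crystal refinement: a = 15.9155 Å, c = 7.11660 Å -> reported V = 1561.15 Å^3
print(f"single-crystal V = {hexagonal_volume(15.9155, 7.11660):.2f} Å^3")
```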
Determination of the number of atoms per formula unit (apfu)
In agreement with the structure-refinement results, the boron content was assumed to be stoichiometric (B3+ = 3.00 apfu). Both the site-scattering results and the bond lengths of B and T are consistent with the B site being fully occupied by boron and with no B3+ at the T site (e.g. Bosi and Lucchesi, 2007). The iron oxidation state was determined by Mössbauer spectroscopy. In accordance with these results, together with the results from optical absorption spectroscopy and Fe and Mn redox-potential arguments, all Mn was considered as Mn2+. Lithium was determined by μ-LIBS. The (OH) content and the formula were then calculated by charge balance with the assumptions (T + Y + Z) = 15 apfu and 31 anions. The excellent agreement between the number of electrons per formula unit (epfu) derived from the EMP data and from SREF (223.2 and 223.0 epfu, respectively) supports the stoichiometric assumptions.

Site populations
The princivalleite site populations at the X, B, T, O3 (≡ V) and O1 (≡ W) sites follow the standard site preference suggested for tourmaline and are coherent with the information from the FTIR absorption spectra (Fig. 3). In particular, the presence of 0.40 Al apfu at the T site is consistent with the observed <T-O> = 1.624 Å, which is larger than the expected value of <TSi-O> = 1.619(1) Å (Bosi and Lucchesi, 2007). The site populations at the octahedrally coordinated Y and Z sites were optimised according to the procedure of Bosi et al. (2017), fixing the minor elements Zn and Li at the Y site (Table 7). The agreement between the refined and calculated values is very good and validates the distribution of cations over the X, Y, Z and T sites in the empirical structural formula of princivalleite. This site population is also supported by the comparison of weighted bond-valence sums and the mean formal charge calculated from the empirical structural formula (Table 8; cf. Bosi, 2014; bond-valence parameters from Brown and Altermatt, 1985). For classification purposes, the empirical crystal-chemical formula was recast in its ordered form following Henry et al. (2011).

End-member formula and relation to other species
The composition of the present sample is consistent with an oxy-tourmaline belonging to the alkali group: it is Na-dominant at the X position of the general formula of tourmaline, oxy-dominant at W with O2- > (F + OH), and Al3+-dominant at Z. With regard to the Y position, formula electroneutrality requires that the total charge at Y is +7 in the end-member formula: Na(Y3)Σ7+Al6(Si6O18)(BO3)3(OH)3O. In accord with the dominant-valency rule and the valency-imposed double site-occupancy (Bosi et al., 2019a, 2019b), the charge and atomic arrangements compatible with the Y-site population in the ordered formula lead to the dominant arrangement Y(Mn2+2Al)Σ7+, corresponding to the end-member formula given above. Princivalleite is related to oxy-schorl and darrellhenryite by the substitutions Mn2+ ↔ Fe2+ and 2Mn2+ ↔ Al3+ + Li+. The properties of these three tourmalines are compared in Table 9, whereas the position of the princivalleite holotype sample in the ternary diagram for the (Fe2+2Al)-(Mn2+2Al)-(Al2Li) subsystem is displayed in Fig. 5. This figure also shows the chemical variability from princivalleite towards oxy-schorl and darrellhenryite, through two additional samples from the same batch of tourmalines from the Veddasca rock sample (Fig. 1) and samples from Uvildy, Chelyabinsk region, Russia, and Pikárec, Czech Republic (Cempírek et al., 2015; Bosi et al., 2022).
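As a brief aside before comparing compositions, the end-member stoichiometry Na(Mn2Al)Al6(Si6O18)(BO3)3(OH)3O can be checked for electroneutrality with a short script; nominal ionic charges are assumed, and the oxygen count also recovers the 31 anions used in the apfu normalisation above.

```python
# Quick electroneutrality check of the princivalleite end-member formula
# Na(Mn2Al)Al6(Si6O18)(BO3)3(OH)3O, using nominal ionic charges.
charges = {"Na": +1, "Mn": +2, "Al": +3, "Si": +4, "B": +3, "H": +1, "O": -2}
apfu = {"Na": 1, "Mn": 2, "Al": 1 + 6, "Si": 6, "B": 3, "H": 3, "O": 18 + 9 + 3 + 1}

total = sum(charges[el] * n for el, n in apfu.items())
print(f"net charge = {total}")        # 0 for a balanced end-member
print(f"anions (O) = {apfu['O']}")    # 31 anions per formula unit
```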
The chemical compositions of these four samples (the two additional Veddasca crystals and the samples from Uvildy and Pikárec) are reported in Table 10. Moreover, the chemical composition of the yellow Mn-tourmaline identified as tsilaisite by Nuber and Schmetzer (1984) is WO-dominant; thus, it corresponds to princivalleite (Fig. 5). It is most likely that the locality of this yellow tourmaline is the Canary mining area in the Lundazi District of eastern Zambia (for details, see Laurs et al., 2007), although it is important to point out that other tourmalines from this locality are actually Mn-rich elbaite or fluor-elbaite samples (Laurs et al., 2007; Simmons et al., 2011).

Petrogenesis of princivalleite
The formation of Mn-dominant tourmalines (e.g. tsilaisite, fluor-tsilaisite and celleriite; Bosi et al., 2012, 2015a, 2022) requires specific geochemical conditions that are rare in nature: ideally, a high activity of Al and Mn combined with a low activity of Fe, Li and Mg. For example, Simmons et al. (2011) suggested that the original pegmatite-forming melt (preferably a B-rich peraluminous melt) of tsilaisitic and Mn-rich elbaitic tourmalines must be relatively low in Fe and enriched in Mn and B; moreover, during the early stages of crystallisation Fe must be removed, but abundant B and Mn must still be available when tourmaline crystallises. The system, at the stage of growth of princivalleite, must also be depleted in F; this condition is not commonly achieved, as Mn-enrichment in pegmatites is typically followed by an increase of the F content in the system and in the Mn-rich tourmalines formed (e.g. Selway et al., 1999; Dixon et al., 2014). Manganese enrichment in late-stage pocket tourmaline is a characteristic feature of elbaite-subtype pegmatites (e.g. Novák and Povondra, 1995; Novotný and Cempírek, 2021; Bosi et al., 2022) that, compared to lepidolite-subtype (Selway et al., 1999) or transitional pegmatites (Dixon et al., 2014; Roda-Robles et al., 2015), contain lower amounts of F. The latter typically remains below 0.5 apfu in tourmaline until the hydrothermal-metasomatic stage of pegmatite crystallisation, which is characterised by fluor-elbaite to fluor-liddicoatite compositions (e.g. Novotný and Cempírek, 2021; Zahradníček, 2012; Flégr, 2016). This is the case for the princivalleite occurrences in the Uvildy and Pikárec pegmatites (Table 10), whereas F in princivalleite from Veddasca valley could have been limited by the abundant crystallisation of muscovite in the pegmatitic vein. Princivalleite from Veddasca Valley is Li-poor and relatively Fe2+-rich, with Fe2+ contents that are sometimes higher than those of Mn2+, leading to oxy-schorl compositions. Despite princivalleite being an oxy-species, the occurrence of Mn and Fe in oxidation state +2 indicates that its formation is not constrained by oxidising conditions. Princivalleite originated in a B-rich and peraluminous anatectic pegmatitic melt formed in situ, poor in Fe and characterised by reducing conditions, as evidenced by the occurrence of rare pyrite and of only Fe2+ in tourmaline. Such reducing conditions are related to a low fugacity of oxygen in the late-stage metamorphic fluids derived from the flaser gneiss, which promoted the formation of very low degrees of melting.
The formation of this type of vein could also be compatible with the crystallisation of batches of 'silicate-rich fluids', as in the model proposed by Thomas and Davidson (2012). The Mn-enrichment in tourmaline was made possible by the lack of formation of other minerals competing for this element, such as garnet. Such Mn-enrichment is unusual for anatectic pegmatites; we therefore assume that the micas (especially biotite) in the protolith metapelite were enriched in Mn, possibly due to an admixture of a volcanosedimentary component; this might also be indicated by the relatively elevated ZnO contents in princivalleite (Tables 1 and 10). Another explanation might be an unexposed magmatic source of the melt, as in the case of the (apparently anatectic) kyanite-bearing Li-rich pegmatites at Virorco, Argentina (Galliski et al., 2012).

Fig. 5. Plot of princivalleite compositions on the (Fe2+2Al)-(Mn2+2Al)-(Al2Li) diagram. Black circles represent the coexisting samples from the same batch of tourmalines from the Veddasca rock sample (Italy); the black triangle and black diamond represent princivalleite samples from Uvildy (Russia) and Pikárec (Czech Republic), respectively; the black star is the yellow Mn-tourmaline from Zambia (Nuber and Schmetzer, 1984) identified as princivalleite in this study.
Deregulation of allopathic prescription and medical practice in India: Benefits and pitfalls

Against the background of debates on Universal Health Coverage, skill transfer from holders of medical practice licenses to other health-care providers, such as nurse practitioners, has become a global norm. In India, where the world's largest number of medical graduates is produced, this discussion is being taken to extremes, and serious suggestions are coming forward for the development of a legal framework allowing dentists, homeopaths, pharmacists, and doctors trained for half the usual duration to issue allopathic prescriptions. It is noteworthy that this discussion only pertains to the pharmaceutical products retailed through "allopathic medical prescriptions." A prescription is not only advice for a patient's recovery; it is also a legitimate order for the sale of controlled drugs and pharmaceutical products, and it thereby functions as a regulatory tool for the consumption of pharmaceutical products at the retail level. Who is ultimately going to benefit from this prescription deregulation? This editorial explores the benefits and pitfalls of prescription and medical practice deregulation.

Within contemporary health policy discussions, the broadening of the scope of practice and skill transfer for various groups of health-care workers is a debate in vogue. Expanding the legal framework to permit nonlicensed providers to issue medical prescriptions to the public at large has become part of this broad process. A word of caution is required for developing economies, where medical practice regulation is still not sufficiently evolved or has been rendered ineffective for various reasons. While the popular argument in support of prescription deregulation is the "nonavailability of licensed medical doctors," a reality check is required with regard to the capacity and intent of the public/private health system to engage qualified medical practitioners. Indiscriminate and illogical deregulation also has a flip side, and the debate is apparently moving toward a regulation-free distribution of curative medical services within the public health sector. The new employment contracts, remunerations, and recruitment policies of public health systems are indicative of prohibitive barriers to entry and sustenance for licensed medical practitioners as resource regulators at the level of primary care curative services. In the well-regulated environment of developed countries, primary care often means the essential presence of licensed medical practitioners, who provide effective gatekeeping in terms of the provision of comprehensive medical care and of public health expenditure, whereas the same "primary care" is often translated into several dysfunctional and fragmented vertical programs in developing nations. Prescription deregulation is especially important for countries that are contemplating implementation of Universal Health Coverage (UHC). Without an effective regulatory mechanism within public health systems, more so in the case of curative medical services, the benefit of the health-care subsidy of UHC is likely to be transferred to industry or lost to corruption instead of serving the intended purpose of palliating and preventing population morbidity and mortality. Public health resources must be protected from the inherent fallacies of willful, wasteful public health expenditure and from crony corporate designs.
Allopathic Prescription
A prescription often refers to a health-care provider's written authorization for a patient to purchase a prescription drug from a pharmacist. A prescription by a registered allopathic physician is the set of instructions that governs the plan of care for an individual patient. Prescribing entails four steps: gathering the patient history, assessing the appropriateness of the medications, communicating the therapy to other health-care professionals, and monitoring the patient's drug regimen. [1] Even though the process of prescribing seems simple, choosing the most appropriate medication therapy for the patient often requires sound judgment on the part of the health-care provider. [2] Thus, a prescription is not only advice for the patient's recovery but also a legitimate order for the sale of controlled drugs and pharmaceutical products, and it thereby functions as a regulatory tool for the consumption of pharmaceutical products at the retail level.

Deregulation of Allopathic Prescription: Caution Required
Deregulation of allopathic prescription, that is, granting nonlicensed providers the legitimacy to issue medical prescriptions, requires cautious consideration, since it is a grave concern in the context of delivering safe and quality medical care to the 125 crore citizens of India. It is often stated that India's health-care delivery suffers mainly from acute shortages of allopathic doctors. It is also a common perception that allopathic doctors seldom prefer rural service. Let us examine these common perceptions against the available data.

Scarcity of Allopathic Doctors: Myths and Reality
India produces more than 50,000 allopathic doctors per year, but the public health system has only about 100,000 existing posts (total posts) for the employment of doctors. There is a deficiency of only a few thousand doctors within the public health system at Primary Health Centers (PHCs), but as a country India is producing several times more doctors. The shortage of doctors for primary health care has been overstated. As per the Rural Health Statistics 2015 published by the Ministry of Health and Family Welfare (MOHFW), Government of India, the number of allopathic doctors at PHCs increased from 20,308 in 2005 to 27,421 in 2015, an increase of about 35.0%, and the shortfall of allopathic doctors at PHCs was 11.9% of the total requirement for the existing infrastructure. [3] To be more specific, across India there is a shortfall of only 3002 allopathic doctors at PHCs, and that too only in nine states. [4] A proportion of these vacancies is due to non-recruitment rather than non-availability of doctors. Data show that each year about 100,000 doctors take postgraduate medical entrance examinations across the country. However, only around 25,000 make it, and the rest are available for service as MBBS doctors in the public health system. [4] In fact, states like Maharashtra are now producing surplus MBBS doctors. The Government of Maharashtra has therefore decided to scrap the rural service bond, which was earlier compulsory for all medical students qualifying from government medical colleges. [5] The requirements (advertised posts) have not changed for the last several decades in India, but the population has almost doubled. For any number of regular government medical officer posts advertised, there are far more applicants. The recent order to move the retirement age from 60 to 65 years effectively means that there will be no urge for new recruitment for 5 more years.
These additional senior doctors, who would have been looking after administrative responsibilities until now, are less likely to see patients in the coming 5 years. Therefore, no change is expected in addressing community-based morbidity. The real problem is not the non-availability of MBBS doctors but recruiting them and providing an environment that retains them. According to the OPPI-KPMG report on healthcare access initiatives, "the country faces acute shortage of infrastructure at the primary, secondary, and tertiary levels, which is further hampered by inadequately trained health-care professionals and staff." [6] The problem is underdeveloped infrastructure rather than a shortage of workforce.

Who is Pushing for Prescription Deregulation?
Under the pretext of a deficiency of doctors, the pharmaceutical industry is pushing for further deregulation of allopathic prescription. Several pharmaceutical groups already run allopathic educational programs in the name of continuous professional development for nonlicensed practitioners. Legalizing cross-pathy and creating an opportunity for back-door entry for Homeopathy or Yoga graduates to practice Allopathy, in the name of meeting the shortage of allopathic doctors in rural India, will only compound and complicate medical problems. One can prefer and adopt a shortcut, short-sighted, "stitch and suture" policy, i.e., cross-pathy (engaging institutionally qualified ISM vaidyas to substitute for allopathic doctors), and fill up the gap. However, legalizing cross-pathy by deregulating allopathic prescription is likely to severely affect prescription patterns at public (government) health centers. The pharmaceutical industry is ultimately going to benefit from the deregulation of allopathic prescription.

Legal Boundaries of Medical Prescription in India
At the heart of medication therapy lies the prescription, a legal document governed by the following laws: [...] 1956); or ii. Registered or eligible for registration in a medical register of a state meant for the registration of persons practicing the modern scientific system of medicine (excluding the Homeopathy system of medicine); or iii. Registered in a medical register (other than a register for the registration of homeopathic practitioners) of a state, who, although not falling within subclause (i) or subclause (ii), is declared by a general or special order made by the State Government in this behalf as a person practicing the modern scientific system of medicine for the purposes of this Act.

The Honorable Supreme Court of India upheld the validity of Rule 2 (ee) (iii), as well as the notifications issued by various State Governments thereunder allowing Ayurveda, Siddha, Unani, and Homeopathy practitioners to prescribe allopathic medicines. [9] In view of the above judgment, Ayurveda, Siddha, Unani, and Homeopathy practitioners can prescribe allopathic medicines under Rule 2 (ee) (iii) only in those states where they are authorized to do so by a general or special order made by the concerned State Government in that regard. Practitioners of Indian Medicine holding degrees in integrated courses can also prescribe allopathic medicines if any State Act in the state in which they are practicing recognizes their qualification as sufficient for registration in the State Medical Register. [9] The recent judgment of the High Court of Delhi further rules out ambiguity.
It states: "That a harmonious reading of Section 15 of MCI Act and Section 17 of the Indian Medicine Act leads to the conclusion that there is no scope for a person enrolled on the State Register of Indian Medicine or Central Register of Indian Medicine to practice modern scientific medicine in any of its branches unless that person is also enrolled on a State Medical Register within the meaning of the MCI Act. That the right to practice modern scientific medicine or Indian system of medicine cannot be based on the provisions of the drugs rules and declaration made there under by State Governments." [8] Earlier verdicts likewise did not allow practitioners of other systems of medicine to claim a right to practice modern medicine without a qualification in that system, and held that practitioners of the Indian System of Medicine, though entitled to practice the Indian System of Medicine, cannot practice the modern system of medicine. [8] From these verdicts, the legal position becomes very clear: medical practitioners who are not qualified and licensed to practice Allopathy cannot issue an allopathic prescription, which seems valid, logical, and scientific too.

Promotion of Ayurveda, Yoga, Unani, Siddha, and Homeopathy versus the Demand for Deregulation of Allopathic Prescription
The Ayurveda, Yoga, Unani, Siddha, and Homeopathy (AYUSH) systems of medical therapy have traditionally received support from the Government of India. To promote AYUSH, a new ministry was formed on November 9, 2014; earlier it had the status of a department under the MOHFW. A section of AYUSH practitioners, a few public health organizations, and a section of public health policy makers have been advocating legal permission to allow the practice of Allopathy by AYUSH practitioners. It is understandable that, by prescribing allopathic medicine, their income and employment opportunities would improve dramatically. However, if allowed, such a situation would defeat the purpose of the Government of India's policy of promoting AYUSH. The viewpoint titled "Can a Homeopath Practice Allopathy?" published in the National Journal of Homeopathy strongly criticized those half-baked homeopaths desirous of practicing Allopathy. [10] The article further states that "a very few homeopathic colleges send their students to learn surgery, midwifery and gynecology, and other allied sciences. What is taught in the Homeopathic Hospital is very cursory and is taught by doctors who have not had any opportunity of allopathic training." [10] Therefore, the argument for legal permission to allow AYUSH practitioners to issue allopathic prescriptions is not only paradoxical but also appears to be a noxious design, apparently pushed by the pharmaceutical industry and against the spirit of the development of real AYUSH. The intention of promoting sales of allopathic pharmaceutical products in the name of promoting AYUSH is overtly clear.

Prescription Deregulation: Global Trends
There is a global trend to encourage developing economies toward bulk purchase of medical and pharmaceutical products through government funds and to push them down through public health systems. The issue of deregulation of allopathic medicine is especially important at the primary care level, where the public interface exists. Primary care should function as an efficient gatekeeper of health-care expenditure; however, in countries like India, as discussed above, a design of non-recruitment of licensed doctors is being encouraged while being propagated, for public consumption, as a deficiency of doctors.
Therefore, a legal framework is being developed to allow free distribution of pharmaceutical products through bulk purchase (with public funds) by deregulating allopathic prescription. Hence, all kinds of innovative proposals are being pushed forward, such as (a) permission for hakims, Ayurvedic vaidyas, and homeopathic practitioners to prescribe allopathic drugs; (b) medical practice by pharmacists, even though over-the-counter sale of controlled drugs is already rampant and almost everything is available in India without a prescription except for narcotics at most places; and (c) bridge courses for dentists to enable them to function as prescribers of allopathic medication for general medical problems. The basic intention is to ease the path for pushing pharmaceutical products, through nonqualified prescribers, into private and public health services. [11] Sadly, the attempt to deregulate allopathic prescription will create a similar situation in the public health scenario, i.e., leaving patients in the complete care of nonlicensed practitioners while they are under the impression of being treated by qualified practitioners.

Conclusion
Destabilizing and deregulating the existing legal framework for the sale of pharmaceutical products is likely to have serious implications for healthcare expenditure in both the public and the private sector. Over-the-counter sale of controlled prescription items by private pharmacies is already a known challenge in India, and almost everything, including antibiotics, is available for sale without presenting the legal order of a medical prescription. Further indiscriminate deregulation of prescription laws is likely to have a catastrophic impact on public health curative expenditure. After such deregulation, there would be no need for doctors as professional (neutral) regulators of resources, except in hospitals as "procedurists" and "medical interventionalists," but without any autonomy or regulatory role. The professional regulatory pillar within the health services would be lost forever. There also seems to exist a tussle for control of regulatory powers over health-care resources between "medical professionals" and "administrators." Amidst a chaotic health-care ecosystem, there is an attempt at a hostile takeover of the "medical profession" by "industry" on the pretext of public health necessities. Political and thought leaders must ponder deeply over such interventions and move ahead with great caution. Moreover, there is an urgent need to develop a "National Health Agenda" instead of chasing, and perennially lagging behind, "International Health Goals" decade after decade; India's last national health policy was released almost 15 years ago, in 2002.
Lack of "obesity paradox" in patients presenting with ST-segment elevation myocardial infarction including cardiogenic shock: a multicenter German network registry analysis

Background
Studies have associated obesity with better outcomes, in comparison to non-obese patients, after elective and emergency coronary revascularization. However, these findings might have been influenced by patient selection. We therefore sought to examine the obesity paradox in a consecutive network STEMI population.

Methods
The databases of two German myocardial infarction network registries were combined, and data from a total of 890 consecutive patients admitted and treated for acute STEMI, including cardiogenic shock and cardiopulmonary resuscitation, according to standardized protocols were analyzed. Patients were categorized as normal weight (≤24.9 kg/m2), overweight (25-30 kg/m2) or obese (>30 kg/m2) according to BMI.

Results
Baseline clinical parameters revealed a higher comorbidity index for overweight and obese patients; the 1-year follow-up comparison between the groups revealed similar rates of all-cause death (9.1% vs. 8.3% vs. 6.2%; p = 0.50), major adverse cardiac and cerebrovascular events [MACCE (15.1% vs. 13.4% vs. 10.2%; p = 0.53)] and target vessel revascularization in survivors [TVR (7.0% vs. 5.0% vs. 4.0%; p = 0.47)] for normal weight when compared to overweight or obese patients. These results persisted after risk adjustment for the heterogeneous baseline characteristics of the groups. An analysis of patients suffering from cardiogenic shock showed no impact of BMI on clinical endpoints.

Conclusion
Our data from two network systems in Germany revealed no evidence of an "obesity paradox" in an all-comer STEMI population including patients with cardiogenic shock.

Background
Obesity and associated disorders such as hypertension, hyperlipidemia and diabetes are linked to increased morbidity and mortality in Western populations [1,2]. This patient cohort is also at greater risk of developing coronary artery disease [2]. Population-based registry data revealed that 43% and 24% of coronary revascularizations were carried out in overweight and obese patients, respectively [3]. However, despite evidence of a positive correlation between obesity and increased cardiovascular morbidity, previous studies have described an "obesity paradox" in patients undergoing coronary revascularization, either by interventional (PCI) or surgical (CABG) strategies, reporting a protective effect of obesity in terms of postoperative mortality. The first description of this phenomenon was made by Gruberg et al. 12 years ago [4]. Although the impact of obesity on clinical outcomes after elective PCI has subsequently been investigated in several studies, the issue remains controversial. Thus, there are insufficient data in unselected populations suffering from acute coronary syndrome (ACS), which is additionally associated with a complex thrombogenic and proinflammatory status [5-10]. Our current analysis compares clinical outcomes after PCI between consecutive normal weight, overweight and obese patients diagnosed with ST-segment elevation myocardial infarction (STEMI), including patients with cardiogenic shock.
Network structures
Both myocardial infarction networks, which were the first such networks in Germany, aim at coronary reperfusion therapy with primary PCI as the treatment of choice for all presumed STEMI patients, according to a uniform regional treatment protocol organized around a 24 h/7 days service in a single interventional centre. "Network A", located in northeastern Germany, constituted a mixed urban and rural catchment area with an approximate population of 415,000 inhabitants and was spread across a 60 km radius from its centre. At the time of data collection, there were eight hospitals in the network area, with a lone high-volume interventional facility functioning as a 24 h/7 days primary PCI service point. Emergency Medical Services (EMS) transferred suspected STEMI patients to the emergency department of the nearest hospital without prior announcement. Upon arrival of the patient, the local emergency department alerted the interventional cardiology team and organized the direct transfer of the patient to the cathlab. "Network B", located in southwestern Germany, constituted a rural catchment area with approximately 350,000 inhabitants and was spread across a 35 km radius from its centre. At the time of data collection there were six hospitals in this network area, with a lone high-volume interventional facility functioning as a 24 h/7 days primary PCI service point. Trained personnel at all collection points supported both network structures. All STEMI patients, irrespective of cardiogenic shock or preceding cardiopulmonary resuscitation, were intended for primary PCI through femoral access.

Primary PCI protocol
All provisionally diagnosed STEMI patients were treated with 250-500 mg aspirin intravenously and received a weight-adjusted dose of unfractionated heparin (70 IU/kg) from the EMS. The loading dose of clopidogrel (600 mg) was mostly administered before the PCI; in a few cases, it was administered immediately after the procedure. When treating patients in shock, interventional cardiologists were encouraged to treat all presumed hemodynamically relevant non-target lesions. Thrombectomy, periprocedural GP IIb/IIIa blockers (predominantly abciximab) and drug-eluting stents (DES) were utilized at the discretion of the operator. Full anticoagulant dosing of heparin was stopped after PCI unless there was a high risk of thromboembolism (e.g. atrial fibrillation or mechanical heart valves).

Study population
Consecutive STEMI patients admitted for primary PCI were prospectively included in their respective registries, in network A from 2001 to 2003 and in network B from 2005 to 2007.

Definitions
These were based on parameters defined by the World Health Organization (WHO) and the National Heart, Lung and Blood Institute. The patient population was classified into normal weight (body mass index [BMI] 18.5-24.9 kg/m2), overweight (BMI 25-30 kg/m2) and obese (BMI >30 kg/m2) groups [11,12]. STEMI was diagnosed by the presence of chest pain lasting >20 min and significant ST-segment elevation (≥0.1 mV in two adjacent leads for leads I-III, aVF, aVL, V4-V6, and ≥0.2 mV in leads V1-V3) in the first recorded electrocardiogram (ECG). Patients with persistent angina and presumably new left bundle branch block (LBBB) were included in the registry if myocardial infarction (MI) was subsequently confirmed.
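Returning to the BMI stratification defined above, a minimal sketch of the grouping (assuming the WHO cut-offs as quoted) is given below; it is illustrative only and not the registry's actual analysis code.

```python
# Minimal sketch of the BMI stratification used in this registry analysis
# (WHO cut-offs as applied above); not taken from the study's own code.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def bmi_group(bmi_value: float) -> str:
    if bmi_value < 18.5:
        return "underweight"      # handled separately in the sensitivity analysis
    if bmi_value <= 24.9:
        return "normal weight"
    if bmi_value <= 30.0:
        return "overweight"
    return "obese"

print(bmi_group(bmi(85.0, 1.75)))  # e.g. BMI ≈ 27.8 -> "overweight"
```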
Cardiogenic shock was defined clinically by the presence of hypotension (systolic blood pressure <90 mmHg for >30 min or need for vasopressors to maintain systolic blood pressure >90 mmHg) and tachycardia (heart rate >90 beats/min) with evidence of end-organ hypoperfusion [13]. Thrombolysis In Myocardial Infarction (TIMI) flow grades were assessed in the culprit vessel before and after the PCI procedure. Major bleeding was defined according to the TIMI major bleeding definition as intracerebral bleeding, bleeding requiring surgical intervention, bleeding requiring transfusion, or a loss of more than 5 g/dl of haemoglobin [14]. As indicators of guideline-adherent therapy we analyzed pre- and in-hospital delays, procedural success of primary PCI, stent use, peri-interventional antiplatelet management, medication at discharge and medication at 12 months [15]. Procedural success was defined as a residual stenosis <30% of the culprit lesion. For outcomes we analyzed mortality, re-infarction rate, target lesion revascularization (TLR) and target vessel revascularization (TVR) up to 12 months. TVR included repeat procedures, either PCI or CABG, in the target vessel. The composite of these events was defined as major adverse cardiac and cerebrovascular events (MACCE), including death, MI, TVR and stroke. Patients were discouraged from undergoing routine angiography for follow-up; therefore, all re-interventions can be counted as clinically driven. Stent thrombosis (ST) was classified according to the definition proposed by the Academic Research Consortium (ARC) [16].

Data collection and follow-up
All patients diagnosed with STEMI were cataloged in a dedicated database. Follow-up usually included telephone interviews and subject-based questionnaires at 6 and 12 months. Descriptive follow-up concerning mortality was obtained from state registries. The local ethics committees in Rostock (Medical Faculty of University Rostock, Germany) and Freiburg (Albert-Ludwigs-University Freiburg, Germany) approved the registries. All patients included in this study gave informed consent in advance for the extension of our routine follow-up.

Statistical methods
Data were analyzed according to established standards of descriptive statistics. Categorical variables were compared by the chi-square test. Continuous variables are reported as mean ± standard deviation or median with interquartile range. For comparisons, the t test, the two-tailed Mann-Whitney U test and an ANOVA model were used where appropriate. Odds ratios (OR) and 95% confidence intervals (CI) are provided where appropriate. A p value of less than 0.05 was considered significant. A multivariate logistic regression analysis (stepwise backward model) including sex, age, diabetes, hypertension, smoking, renal failure, cardiogenic shock, resuscitation, stent type and impaired ejection fraction (<45%) at discharge, with normal weight as a fixed parameter, was performed to determine independent factors predicting 12-month mortality and MACCE. The final logistic model for 12-month mortality, with the independent variables age, diabetes and impaired ejection fraction (<45%) at discharge, showed a good predictive value (C-statistic = 0.84) and good calibration characteristics using the Hosmer-Lemeshow test (p = 0.90). Mortality and MACCE at 12 months were adjusted for the above-mentioned variables. One-year survival was displayed by Kaplan-Meier curves and compared by the log-rank test.
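As an illustration of the group comparisons just described, the sketch below applies a chi-square test to a categorical endpoint and a one-way ANOVA to a continuous variable across three BMI groups. All numbers are invented for demonstration and are not registry data.

```python
# Hedged sketch of the group comparisons described above, using illustrative
# numbers rather than the registry data: chi-square test for a categorical
# endpoint across the three BMI groups and one-way ANOVA for a continuous variable.
import numpy as np
from scipy import stats

# Hypothetical 3x2 table: events vs. no events for normal weight / overweight / obese
events_table = np.array([
    [20, 200],   # normal weight
    [30, 330],   # overweight
    [15, 230],   # obese
])
chi2, p_cat, dof, _ = stats.chi2_contingency(events_table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_cat:.3f}")

# Hypothetical continuous variable (e.g. age) compared across groups by ANOVA
rng = np.random.default_rng(0)
age_normal, age_over, age_obese = (rng.normal(m, 10, 100) for m in (68, 64, 61))
f_stat, p_anova = stats.f_oneway(age_normal, age_over, age_obese)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```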
Data pertaining to pre-hospital and intra-hospital time intervals were also not different between groups. However, we observed that normal weight and overweight patients suffered more often from cardiogenic shock (9.9% vs. 11.1% vs. 5.1%; p = 0.02) and had a higher calculated GRACE score (100.6 ± 16.18 vs. 87.2 ± 12.56 vs. 74.2 ± 10.14; p = 0.02) (Table 2). Approximately half of all patients included in this study had multivessel coronary artery disease, with no significant difference in the distribution of dual-vessel, triple-vessel and left-main disease or of the treated target vessel (Table 2). Primary PCI was performed through femoral access in all patients, with implantation of 1.4 ± 0.9 stents per patient, and was carried out as a single procedure.

In-hospital follow-up
The overall in-hospital mortality rate was 5.3% in the normal weight, 4.4% in the overweight, and 3.1% in the obese group (p = 0.51). Similarly, rates of MI, stroke and bleeding complications, as well as the need for repeat urgent revascularization and resuscitation, were low, with no differences between subsets (Table 4).

One-year follow-up
At one-year follow-up no significant differences were noted between groups with respect to MACCE-free survival and TVR-free survival. Similarly, no differences were noted in the rates of overall death, MI, stroke, and definite ST (Table 4, Fig. 1). Exclusion of lean patients (BMI <18.5 kg/m2; n = 5) from the normal weight group did not change the above-mentioned in-hospital and follow-up results, with similar event rates in all three groups. Additionally, a separate analysis of patients suffering from cardiogenic shock (26 versus 48 versus 8) did not show any differences between groups, with mean in-hospital and one-year follow-up mortality rates of up to 26% and 31%, respectively.

Discussion
In Europe, the prevalence of obesity ranges from 4.0 to 36.5% [17], and it is also well known that obesity acts as an independent cardiovascular risk factor for the development of coronary artery disease as well as general atherosclerosis and is associated with increased overall morbidity and mortality [18,19]. There is evidence that this increased risk is mediated through obesity-related co-morbidities such as diabetes mellitus, hyperlipidemia, hypertension, increased insulin resistance, enhanced free fatty acid turnover, and promotion of systemic inflammation [20]. Despite this correlation, however, an inverse correlation of obesity with mortality after PCI, and less pronounced with a smaller need for repeat revascularisation, has been assumed. This has been described as the "obesity paradox" [10,21]. An analysis of 9,633 patients undergoing PCI, stratified into normal weight (n = 1,923), overweight (n = 4,813) and obese (n = 2,897), revealed a higher incidence of major in-hospital complications, including cardiac death (1.0% vs. 0.7% vs. ...) [22]. These results persisted even after adjustment for potential confounders, including age, arterial hypertension, diabetes, and left ventricular function. Another analysis of patients with established coronary artery disease undergoing medical, interventional or surgical treatment showed an "obesity paradox" after revascularisation irrespective of the chosen strategy. In the whole cohort, patients who were overweight or obese were more likely to undergo revascularization procedures compared with those with normal BMI, despite having lower-risk coronary anatomy [23].
The underlying mechanism of the "obesity paradox" is speculative. Obesity is associated with lower levels of plasma renin and epinephrine and with high serum levels of low-density lipoproteins that bind circulating lipopolysaccharides [24]. Coronary vessel diameters, as confirmed in outpatient cohorts, have been shown to correlate with increasing body weight; thus a smaller coronary artery size in normal-weight and lean patients could theoretically influence periprocedural outcome [25]. The relationship between obesity and survival is characterized in the literature by a J- or U-shaped curve with increasing mortality in the very lean or severely obese groups [26, 27]; however, after adjustment for smoking and concurrent illness, the relationship has always been linear [28, 29]. Contrasting with these findings, our analysis of a high-risk all-comers STEMI population including patients with cardiogenic shock does not support the presence of an "obesity paradox". Although there is a trend towards better one-year survival in obese patients, this difference did not reach statistical significance. However, with femoral access used in all patients, there might be more bleeding events in obese patients, which could be avoided by radial access. Nevertheless, we think that the term "obesity paradox" might predominantly reflect different degrees of bias that cannot be completely corrected for by statistical means. Inherent bias in all obesity analyses results from the fact that overweight and obese patients are usually younger and have larger culprit coronary vessel diameters than their normal-weight counterparts. In general, younger patients have better clinical outcomes after acute MI regardless of reperfusion modality [30, 31]. Additionally, the presence of comorbidities in obese and overweight younger patients usually leads to more aggressive therapy of cardiovascular risk factors, likely to improve outcomes despite obesity [30, 31]. In a study of 130,139 patients hospitalized for coronary artery disease, higher BMI was associated with increased use of standard medical therapies such as aspirin, beta-blockers, renin-angiotensin inhibitors, and lipid-lowering therapy, and with an increased likelihood of undergoing diagnostic catheterization and revascularization [32, 33]. The all-comer design of our registry, with the majority of patients having had no established coronary artery disease before the index STEMI, reduces the influence of potential confounders. In particular, the promotion of primary PCI in shock patients and after resuscitation (both significantly more frequent in obese and overweight patients) avoided a severe pre-selection bias. Another point of discussion with respect to the obesity paradox is that underweight patients may receive standard anticoagulation doses that are too high for their body size, making them more prone to post-procedural bleeding complications; this could be ruled out in our cohort by weight-adjusted dosing [3, 5]. In addition, obesity was found to correlate with higher levels of factor VII, factor VIII, fibrinogen and plasminogen activator inhibitor-1, which are all associated with an increased risk of thrombosis [34]. Accordingly, prospective investigations have shown that overweight and obese patients were more likely to suffer from a suboptimal platelet response to clopidogrel and aspirin treatment [35, 36]. In our cohort, the use of GP IIb/IIIa inhibitors was high, reflecting the exclusively high-risk STEMI population.
Furthermore, obesity, like STEMI, is considered a low-grade inflammatory state, as demonstrated by increased levels of the pro-inflammatory cytokines interleukin-6 and tumor necrosis factor-alpha and of acute phase proteins such as C-reactive protein [37]. This proinflammatory state may also directly and indirectly cause thrombosis through oxidative stress and endothelial dysfunction [38]. Such findings could not be confirmed in our real-world setting, with similar rates of stent thrombosis in all subsets. Since low BMI may be a marker of severe systemic illness [18, 39], we restricted the normal-weight group to 18.5-24.9 kg/m2 in a separate analysis and excluded 5 extremely underweight patients. However, this did not change the previous findings, with a lack of an "obesity paradox". A separate analysis of patients with cardiogenic shock, which is associated with a prothrombotic state, likewise did not reveal differences between the BMI groups.

Study limitations

The present study is an observational, non-randomized study in which patients were stratified according to their BMI at index PCI. Thus, we had no information on intended or unintended weight change, or on variables such as physical inactivity and socioeconomic factors, which may have influenced the results. BMI is not as well correlated with cardiovascular disease and death as waist circumference and waist-to-hip ratio, which, however, were unavailable in our registries. Another limitation of our analysis is the length of follow-up and the small sample size, which might result in a lack of power for definitive conclusions but is reliable enough for hypothesis generation. An extended follow-up might reveal a cumulative detrimental effect of obesity, possibly manifesting as increased late mortality, and confirm a negative correlation of obesity with clinical outcomes even in a setting of coronary revascularization. Additionally, the access site during PCI was femoral. With radial access, bleeding events might be reduced in overweight and obese patients, which might result in better clinical outcomes, as bleeding events correlate with overall mortality and the rate of myocardial infarction.

Conclusions

Data from our all-comer network registry do not confirm the "obesity paradox" during short- and long-term follow-up in patients suffering from STEMI, including patients with cardiogenic shock. Given the limitations of the available data, prospective large-scale studies with long-term follow-up, focusing on more reliable parameters of body fat, are needed to clarify the phenomenon of the obesity paradox.
2017-06-26T05:36:56.501Z
2015-07-11T00:00:00.000
{ "year": 2015, "sha1": "ad70e524b121f06056b276ac9823a544471ad065", "oa_license": "CCBY", "oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/s12872-015-0065-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86a085506a04782df945b5c3c4f50c5ea06db175", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257714447
pes2o/s2orc
v3-fos-license
Reliability-enhanced surrogate-assisted particle swarm optimization for feature selection and hyperparameter optimization in landslide displacement prediction

Landslides are dangerous disasters that are affected by many factors. Neural networks can be used to fit complex observations and predict landslide displacement. However, hyperparameters have a great impact on neural networks, and each evaluation of a hyperparameter setting requires the construction of a corresponding model and the assessment of its accuracy on the test set. Thus, hyperparameter evaluation requires a large amount of time. In addition, not all features are positive factors for predicting landslide displacement, so useless and redundant features must be removed through feature selection. Although wrapper-based feature selection is more accurate, it also requires considerable evaluation time. Therefore, this paper proposes reliability-enhanced surrogate-assisted particle swarm optimization (RESAPSO), which uses surrogate models to reduce the number of evaluations and exploits the powerful global optimization ability of PSO to simultaneously search the hyperparameters of the long short-term memory (LSTM) neural network and the feature set for predicting landslide displacement. Specifically, multiple surrogate models are utilized simultaneously, and a Bayesian evaluation strategy is designed to integrate their predicted fitness values. To mitigate the influence of imprecise surrogate models, an intuitionistic fuzzy set is used to represent individual information. To balance the exploration and development of the algorithm, intuitionistic fuzzy multiattribute decision-making is used to select the best and most uncertain individuals from the population for updating the surrogate models. Experiments were carried out on CEC2015 and CEC2017, where RESAPSO was compared with several well-known and recently proposed SAEAs and verified for its effectiveness and advancement in terms of accuracy, convergence speed, and stability, ranking first in the Friedman test. For the landslide displacement prediction problem, the RESAPSO-LSTM model is established, which effectively solves the feature selection and LSTM hyperparameter optimization problems and uses less evaluation time while improving prediction accuracy. The experimental results show that the optimization time of RESAPSO is about one-fifth that of PSO. In predicting landslide displacement in the step-like stage, RESAPSO-LSTM achieves higher prediction accuracy than the comparison models, which can provide a more effective prediction method for the risk warning of a landslide in the severe deformation stage.

Introduction

Landslides are major geological disasters that cause great property losses and casualties every year [1]. The successful prediction of the displacement of landslides within a certain period will play a vital role in disaster prevention and mitigation. Therefore, how to efficiently predict landslide displacement has become an urgent problem to be solved. In the early stage of landslide displacement prediction, researchers usually adopted strict mathematical and mechanical analysis or statistical models such as the grey model GM(1,1). However, because the landslide displacement problem is a complex nonlinear system, this kind of method is only suitable for the prediction of an imminent landslide, and its reusability is not high.
With the rapid development of artificial intelligence technology, an increasing number of scholars are introducing machine learning methods to landslide displacement prediction problems [2,3].As a mature and effective machine learning method, the long short-term memory (LSTM) neural network can better deal with the situation where the input samples are correlated because of its special structural design.Therefore, it is widely used in various sequence problems, such as natural language processing and time series prediction [4,5].However, feature selection and hyperparameter selection have a great influence on the final prediction result of the LSTM neural network. Generally, there are three methods for feature selection: the filter, wrapper, and embedded methods.In the filter method, the correlation between each feature and the label is first calculated, and then the most appropriate feature is selected as the training input according to the correlation.In the wrapper method, a feature subset is selected, the feature subset is substituted into the machine learning algorithm for training, and the training error is regarded as the score of the subset to judge its quality.The embedded method is similar to the filter method.However, the correlation calculation method between features is replaced by a machine learning algorithm.Therefore, unlike the wrapper method, the filter method and the embedded method are not dependent on the specific network when features are selected.Although calculations are performed sufficiently fast in these methods, they are not as effective as the wrapper method.However, to evaluate the advantages and disadvantages of each feature subset, each feature subset must be substituted into specific network training to obtain the corresponding evaluation when using the wrapper method, and training consumes considerable time. In addition, to test the hyperparameters of a set of LSTM algorithms, it is often necessary to construct the corresponding LSTM model and train the model to obtain its prediction accuracy.Therefore, each evaluation of hyperparameters requires a large amount of evaluation time. Therefore, wrapper-based feature selection and hyperparameter optimization is a computationally expensive black box optimization problem, where the global optimum must be found.Moreover, there is no explicit objective function, and each evaluation is very time consuming. The evolutionary algorithm (EA) has been proposed due to its powerful global search ability and strong generality.It can be used to search for the global optimal solution to optimization problems with nonderivable functions or even without explicit objective functions.Therefore, compared with traditional mathematical optimization methods, EAs are used in a wider range of real-world engineering problems [6,7].However, EAs often require a large number of evaluations to adapt to the search space and find the global optimal solution [8,9].Therefore, it is often difficult to use general EAs for computationally expensive problems. The surrogate-assisted evolutionary algorithm (SAEA) is an important method for solving computationally expensive problems [10].Unlike in ordinary EA, in SAEA, a search is performed not for a real objective function but for a surrogate model that approximates a real objective function.Therefore, by reducing the number of fitness evaluations on the real objective function, SAEA can be used to effectively reduce the time cost and economic cost. 
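To make the cost of the wrapper approach described above concrete, the sketch below scores candidate feature subsets by retraining a model for each subset. The regressor is a simple stand-in (the paper itself wraps an LSTM), and the exhaustive enumeration is only meant to illustrate why each subset evaluation is expensive.

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor  # stand-in for the LSTM used in the paper

def wrapper_score(X, y, subset):
    """Score one feature subset by training a model on it (the expensive step)."""
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    # Negative MSE averaged over folds; higher is better
    return cross_val_score(model, X[:, subset], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

def exhaustive_wrapper(X, y, k):
    """Evaluate every subset of size k; each call to wrapper_score retrains a model."""
    best_subset, best_score = None, -np.inf
    for subset in combinations(range(X.shape[1]), k):
        score = wrapper_score(X, y, list(subset))
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```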
In recent years, a large number of studies on highdimensional computationally expensive problems have been carried out, and less attention has been given to lowdimensional problems [11][12][13].Although high-dimensional computationally expensive problems are more challenging, there are still many low-dimensional problems in real-world engineering applications, such as airfoil design problems, pressure vessel design problems, and welded beam design problems [14][15][16].In addition, SAEAs for high-dimensional computationally expensive problems are not necessarily suitable for low-dimensional problems.More powerful surrogate models can be used for low-dimensional problems.Particle swarm optimization (PSO), as a global optimization algorithm, has been widely considered by industry and academia because of its fast convergence rate and generality.Therefore, this paper focuses on the application of PSO to low-dimensional computationally expensive problems. Although the SAEA framework avoids PSO from performing a large number of fitness evaluations directly on real objective functions and makes it suitable for computationally expensive problems.But its performance is limited by several factors. Different models have different characteristics, which gives them different fitting abilities for different types of functions [29,30].Therefore, to adapt to different search spaces, the simultaneous utilization of multiple surrogate models becomes a more reliable option [30].However, multisurrogate model-based methods require a suitable strategy to integrate the predicted values of multiple surrogate models.2. The fitting ability of the surrogate model is different in different local regions.However, few studies have considered the local confidence of surrogate models.3.In SAEA, it is often necessary to select suitable individuals from the population to be evaluated in the real objective function to update the surrogate model.Good evaluation points can effectively improve the accuracy of the surrogate model.In the promising-uncertain strategy, an optimal and uncertain point from the population is selected; thus, both exploration and development are considered [17,18].However, in the selection process, only the fitness of the individual is taken as the basis for selection, i.e., the fitness maximum and minimum are selected without considering more individual information or the influence of the error between the surrogate model and the real objective function on the fitness.As a result, the selected points have little improvement on the accuracy of the surrogate model. 
Based on the above analysis, we propose reliabilityenhanced surrogate-assisted particle swarm optimization (RESAPSO).In RESAPSO, the weights of each region of the surrogate model are adaptively adjusted using BES to more accurately predict the fitness of each individual.In this paper, we use intuitionistic fuzzy sets to extract fuzzy attribute information from individuals and use IFMADM to select new evaluated points so that the surrogate model can better approximate the real objective function.In addition, the comprehensive analysis of RESAPSO's search accuracy, convergence speed, stability, and other aspects further proves its optimization ability.In the landslide displacement prediction experiment, the accuracy of RESAPSO with 100 iterations was better than that of PSO with 500 iterations, providing higher prediction accuracy while keeping the number of assessments low.Furthermore, compared with the other optimization algorithms, the LSTM model based on RESAPSO can better predict the severe landslide displacement during the strong rainy season. The rest of this paper is organized as follows.The "Related work" section introduces SAEAs and surrogate models."The proposed RESAPSO" section briefly describes RESAPSO, including the Bayesian evaluation strategy (BES), the promising-uncertain strategy based on intuitionistic fuzzy multiattribute decision-making (IFMADM), and the method of extracting individual information and transforming it into intuitionistic fuzzy sets.The "Experimental analysis" section includes a comparison with other SAEAs and applications in feature selection and hyperparameter selection.The "Conclusion" section summarizes this research. Feature selection As a global optimization algorithm, evolutionary algorithm can search for the global optimal solution in the case of underivable and no explicit objective function, which has stronger generalization [20].Therefore, various evolutionary algorithms have been applied to feature selection [21][22][23]. In these methods, the relationship between features or new operators is used to accelerate the convergence rate of the population, thereby reducing the computational cost.However, a large number of evaluations are still required.In order to reduce the number of evaluations, some recent studies have proposed SAEA-based feature selection methods [24]. Hyperparameter optimization In addition to manually tuning hyperparameters, other commonly used tuning methods include the random search, grid search, and Bayesian optimization algorithms [25].In a grid search, the computational complexity and width of the grid need to be balanced; too large a width leads to low accuracy, and too small a width leads to too much computation.The stability of a random search is too low.In the Bayesian optimization algorithm, the number of evaluations is reduced through the kriging model, and exploration and development points are selected using Bayesian probability estimation.Therefore, Bayesian optimization can be regarded as a heuristic optimization algorithm based on a single surrogate model.The Bayesian optimization algorithm differs from RESAPSO, which utilizes more surrogate models.In this paper, the Bayes theorem is used to estimate fitness, and the development and exploration points are selected through IFMADM.In addition to the methods mentioned above, a variety of evolutionary algorithms have been applied to the hyperparameter optimization of machine learning algorithms to improve their optimization ability [26][27][28]. 
However, these methods still require a large number of evaluations or reduce the number of evaluations at the cost of accuracy [29]. Surrogate-assisted evolutionary algorithm The SAEA constructs a surrogate model with a small number of individuals evaluated in the real objective function, and the population searches on the surrogate model instead of the real objective function, thus significantly reducing the number of evaluations on the computationally expensive real objective function [30,31].Common surrogate models include the Kriging model [32,33], artificial neural network (ANN) [34,35], polynomial regression (PR) [36], radial basis function neural network (RBFNN) [37][38][39], etc.The SAEA framework effectively reduces the number of evaluations on the real objective function so that EAs can be applied to computationally expensive problems.However, it is difficult to adapt the single surrogate model to complex real-world applications. Different surrogate models have different characteristics [40,41]; thus, they have different fitting abilities for different types of functions.The experimental results of Sun et al. [37] showed that by relying on multiple surrogate models, the prediction accuracy of the surrogate model can be higher and more stable [42].As an interpolation model, the kriging model [32] can fit complex search space, but it requires a lot of calculation.An artificial neural network [34,35] also has a strong fitting ability but requires a large amount of calculation and a large number of training samples.Polynomial regression [43] has a small amount of computation and a relatively low ability to fit a complex local space, but it can better simulate the trend of the global environment.The radial basis function neural network [37,38,44] is relatively balanced in all aspects and is the most widely used surrogate model.Therefore, when the accuracy of a single model cannot be improved, using multiple models at the same time becomes the optimal choice.However, the multimodel integration strategy requires adaptive adjustment of the weight of each surrogate model.Common weight updating strategies include the root of mean square error (RMSE) [17], prediction variance [45], and prediction residual error sum of square (PRESS) [41].In these methods, the weight of the corresponding surrogate model is calculated through the error of the selected individual in the surrogate model and the real objective function.Since each selected individual may be in any region of the search space, the accuracy of each surrogate model is different in different regions of the search space.Therefore, such methods cannot reflect the accuracy of surrogate models within local regions. In addition to using multisurrogate models to indirectly improve the prediction accuracy of fitness, there are also studies that directly improve the prediction accuracy of surrogate models through a reasonable selection of evaluation points [38,46]. 
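The three surrogate families named above can be fitted with off-the-shelf tools. The paper itself uses the MATLAB SURROGATES toolbox; the following Python sketch is only an analogue, with default kernels and a fixed polynomial degree chosen for illustration rather than taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF as RBFKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.interpolate import RBFInterpolator

def fit_surrogates(X, y):
    """Fit the three surrogate families discussed above on the evaluated points."""
    kriging = GaussianProcessRegressor(kernel=RBFKernel(), normalize_y=True).fit(X, y)
    poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
    rbf = RBFInterpolator(X, y)  # radial basis function interpolation
    return {
        "KRG": lambda x: kriging.predict(np.atleast_2d(x)),
        "PR":  lambda x: poly.predict(np.atleast_2d(x)),
        "RBF": lambda x: rbf(np.atleast_2d(x)),
    }
```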
The above methods focus computational resources on regions where global optimal solutions may exist to improve the accuracy of the surrogate model in the corresponding regions.These methods can be used effectively in environments where the local optimal solution and the global optimal solution are close.However, the exploration ability of the algorithm is reduced, so it is difficult to cope with more complex multimodal environments.To enable the algorithm to explore the search space while developing the local optimal region, most studies adopt the promising-uncertain strategy [17,18].In this strategy, the best individual, called the promising point, is selected to improve the local accuracy of the model, thereby improving the local development ability of the algorithm, and the worst individual, called the uncertain point, is selected to explore the search space, thereby providing diversity for the algorithm.However, as mentioned above, it is difficult to select ideal promising solutions and uncertain solutions from the population when an algorithm is run due to the influence of fitness fuzziness.Therefore, such methods usually use the kriging model [47,48].The kriging model provides a confidence value with the predicted results to help the algorithm determine whether it is worth investing computational resources at an individual's position.Other than the kriging model, some researchers have also used the multisurrogate model voting mechanism to select the potential best individuals and uncertain individuals [17]. The proposed RESAPSO The selected individual represents the individual selected by IFMADM, and the evaluated point represents the individual evaluated in the real objective function.When an individual is selected by IFMADM, it is transformed into a selected individual, and when a selected individual is substituted into the real objective function, it can be transformed into an evaluated point. Overall framework Figure 1 shows the overall framework of RESAPSO.The workflow in the figure represents the running process of the algorithm, and the data flow represents the data exchange process in the work of the algorithm.Initially set database and archive to be empty.First, Latin hypercube sampling is used to select the initial points, which are evaluated on the real objective function and stored in the database.The kriging model, RBFNN model, and PR model are constructed according to the evaluated points in the database.Use the optimizer (PSO) to search for the surrogate model, and the individual fitness is evaluated using the BES.All individuals in each iteration will be stored in the archive.When an epoch is over, all the individuals in the Archive are transformed into intuitionistic fuzzy sets, and IFMADM is used to select the promising point and uncertainty point.The promising point and uncertainty point are then evaluated by the real objective function and stored in the Database.Finally, empty the Achieve and update the surrogate model.The epochs continue until the maximum number of fitness evaluations is reached. Particle swarm optimization Due to its simple implementation and strong versatility, PSO has received extensive attention from researchers and engineers, and various variants have been developed and applied to various practical problems.In this paper, the PSO proposed by Shi [49] is adopted.The particle update equation is as follows. 
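For reference, the standard inertia-weight PSO update, written so that it is consistent with the symbol definitions that follow, reads:

```latex
v_i^{t+1} = w^t v_i^t + c_1 r_1 \left( p_i^t - x_i^t \right) + c_2 r_2 \left( g^t - x_i^t \right), \qquad
x_i^{t+1} = x_i^t + v_i^{t+1}
```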
where x_i^t and v_i^t represent the position and velocity of particle i at the t-th iteration, respectively, g^t represents the position of the optimal particle in the population at the t-th iteration, p_i^t represents the historical optimal position of particle i up to the t-th iteration, c_1 and c_2 are two preset constants, and r_1 and r_2 are two random numbers in [0, 1]. w^t represents the inertia weight in the t-th iteration. In this paper, the commonly used linearly decreasing inertia weight is adopted.

Bayesian evaluation strategy

To synthesize the predicted fitness of multiple models and bring it closer to the real objective function, a Bayesian evaluation strategy based on Bayes' theorem is proposed, in which the posterior probability represents the weight of each surrogate model in a local region. Specifically, whenever an individual's fitness needs to be predicted, the weight of all evaluated points in the individual's neighborhood is calculated using Eq. (4). This weight represents the accuracy of the surrogate model: the larger the weight, the more accurate the surrogate model. The final fitness prediction is then obtained as a distance-weighted sum over the evaluated points in the neighborhood, namely Eq. (3). When an individual is a selected individual, its true fitness is recorded at the evaluated points in its neighborhood. When the fitness of individuals in subsequent epochs needs to be predicted, the error between the previously predicted fitness and the true fitness is calculated at each evaluated point to obtain the current accuracy of each model at that point, that is, Eq. (6).

The calculation of the synthetic fitness of each individual is as follows, where fit_i represents the synthetic fitness of individual i, prefit_{i,k} represents the fitness of individual i predicted by surrogate model k, and l represents the number of surrogate models. Since the kriging model, radial basis function neural network, and polynomial regression model are used in this paper, l = 3. In Eq. (3), ne(i) represents the number of evaluated points in the neighborhood of individual i, wg_j represents the weight of the j-th evaluated point in the neighborhood of individual i, and the calculation of wg_j is described in Eq. (7). P(θ_k | x_j) represents the posterior approximation of the k-th surrogate model at the evaluated point j, and its calculation is as follows, where P(x_j | θ_k) represents the likelihood function of surrogate model k at evaluated point j, and P(θ_r) represents the prior approximation of the r-th surrogate model at evaluated point j. In this paper, we assume that all surrogate models have the same prior approximation, i.e., the prior approximation of each surrogate model at any evaluated point is equal (1/l). In the likelihood, ε is used to prevent the denominator from being 0; in this paper, ε = 1. abs(·) calculates the absolute value, sep denotes the selected individual, and n(j) represents the number of times the surrogate models have predicted the fitness of selected points at the evaluated point j. In Eq. (6), prefit_{j,k}^q represents the fitness of the selected individual predicted by surrogate model k at evaluated point j in the q-th iteration, and reafit_{sep}^q represents the true fitness of the selected individual. In essence, Eq. (6) treats the error between each evaluated point's predictions and the selected individual in its neighborhood as one sample in Bayes' theorem.
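A minimal sketch of the evaluation step described above, assuming a simplified likelihood of the form 1/(error + ε) with equal priors and inverse-distance weights over the neighborhood; the exact normalizations of Eqs. (3)-(7) are not reproduced here.

```python
import numpy as np

def surrogate_weights(errors, eps=1.0):
    """Per-surrogate weights at one evaluated point from accumulated absolute errors.
    Smaller error -> larger likelihood -> larger posterior weight (equal priors)."""
    likelihood = 1.0 / (np.asarray(errors, dtype=float) + eps)
    return likelihood / likelihood.sum()

def synthetic_fitness(x, neighbors, predictions):
    """Combine the surrogates' predictions for individual x using its neighborhood.

    neighbors   : list of (position, errors) for evaluated points near x
    predictions : array of shape (n_surrogates,) with each model's prediction at x
    """
    predictions = np.asarray(predictions, dtype=float)
    if not neighbors:                      # no evaluated point nearby: plain average
        return predictions.mean()
    dist = np.array([np.linalg.norm(np.asarray(x) - np.asarray(p)) for p, _ in neighbors])
    dw = 1.0 / (dist + 1e-12)
    dw /= dw.sum()                         # distance weights over evaluated points
    fit = 0.0
    for (pos, errs), w in zip(neighbors, dw):
        fit += w * np.dot(surrogate_weights(errs), predictions)
    return fit
```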
In addition, when the neighborhood of an individual contains multiple evaluated points, it is necessary to consider not only the credibility of each surrogate model within the neighborhood but also the distance between these evaluated points and the individual: the closer an evaluated point is to the individual, the higher its credibility. Based on these considerations, this paper takes the Euclidean distance between the individual and the evaluated points as the basis of the weight, which is calculated as follows, where wg_j represents the weight of evaluated point j, d_{i,j} represents the Euclidean distance between i and j, and ne(i) represents the number of evaluated points in the neighborhood of individual i. Note that j ranges over the evaluated points in the neighborhood of i. Figure 2 shows a schematic diagram of the calculation of the synthetic fitness.

In Fig. 2, the green squares represent individuals in the population, where solid green squares represent selected individuals and hollow squares represent ordinary individuals. I1 and I2 are the selected individuals, and I3 is an ordinary individual. The fitness of the selected individuals is calculated on the real objective function after the population iteration. The yellow circles represent evaluated points, and the blue curve represents the real objective function. The left and right brackets indicate the neighborhood range of an individual. The black solid arrows indicate that an evaluated point transmits the confidence of each surrogate model at its location to the individual, whose fitness is then calculated using Eq. (3). The red dashed arrows indicate that after a selected individual is evaluated on the real objective function, the real fitness is fed back to the evaluated point, which adjusts the confidence of each surrogate model at its location through Eq. (6). It can be seen from the figure that the surrogate models at each evaluated point undergo multiple confidence adjustments. In this paper, each error at the same evaluated point is expressed in the form of a probability and treated as one sample in Bayes' theorem, so that the true distribution is approximated through the posterior probability.

Intuitionistic fuzzy multiattribute decision making

Owing to the special advantage of IFSs in dealing with uncertain and fuzzy problems, IFMADM can be used to select the plan that best meets the decision maker's expectations in an uncertain environment. A common IFMADM process is as follows [50].

Step 1. Transform the attributes of all decision-making plans into the form of intuitionistic fuzzy sets, and input the prior weight of each attribute in the form of an intuitionistic fuzzy set.

Step 2. Combine all the decision attribute weights expressed in the form of intuitionistic fuzzy sets into certainty weights. The transformation method is as follows. In the above equation, μ_w(a_i) and π_w(a_i) represent the degree of membership and hesitation of attribute a_i, respectively, and m represents the number of attributes. α ∈ [0, 1] and β ∈ [0, 1] represent the importance of membership and hesitation, respectively, in IFMADM. Since there is no prior information, the weights of all decision attributes in this paper are equal, and α = 1, β = 0.5.

Step 3. According to the weights, all attributes of the decision-making plan are combined into a comprehensive evaluation. The specific synthesis equation is as follows.
where A_i is the intuitionistic fuzzy set of individual i, which represents the comprehensive evaluation of decision-making plan i, μ_{ij} represents the degree of membership of decision-making plan i on attribute j, and γ_{ij} represents the degree of nonmembership of decision-making plan i on attribute j.

Step 4. Calculate the intuitionistic fuzzy distance between the comprehensive evaluation of each decision-making plan and the ideal plan. The ideal plan G is defined as G = {⟨g, 1, 0⟩}, that is, the degree of membership is 1 and the degree of nonmembership is 0. The negative ideal plan B is defined as B = {⟨b, 0, 1⟩}, that is, the degree of membership is 0 and the degree of nonmembership is 1. Here ξ_i represents the degree of similarity to the ideal solution: the closer it is to 1, the closer the decision plan is to the ideal solution, and the closer it is to 0, the closer the decision plan is to the negative ideal solution. D(X, Y) denotes the distance between any two intuitionistic fuzzy sets X and Y. This paper uses the Hamming distance, which is given as follows, where μ_X(a_i), π_X(a_i), γ_X(a_i), μ_Y(a_i), π_Y(a_i) and γ_Y(a_i) represent the membership, hesitation, and nonmembership of the intuitionistic fuzzy sets X and Y on the attribute a_i, respectively, and m represents the number of attributes in the intuitionistic fuzzy set X. Since all attributes in the IFMADM introduced in this paper have been combined into a comprehensive evaluation by Eq. (10), m = 1.

Step 5. Sort all ξ from largest to smallest; the first one corresponds to the optimal decision-making plan. Therefore, the larger ξ_i is, the more likely it is that individual i is a promising individual or an uncertain individual.

The method of intuitionistic fuzzification of individual information

It is difficult to rely on fitness information alone to determine whether an individual's region is the most promising or the most uncertain. Therefore, information about each individual is extracted from multiple perspectives as decision-making indicators for IFMADM to select individuals, thereby improving the accuracy of decision-making. Specifically, these indicators include individual fitness, regional reliability, maximum fitness deviation, and regional unreliability. The first two are used to select promising individuals, and the latter two are used to select uncertain individuals.

Promising individual selection

In this paper, the fitness pa_1 and regional reliability pa_2 of each individual are selected as the decision attributes for promising individuals in IFMADM. In addition to using fitness to assess whether an individual is promising, the regional reliability of the individual is also measured. This is because the fitness of each individual on the surrogate model is not the real fitness of the objective function, so the distance between the individual and the evaluated points is taken as a decision attribute and used to measure the reliability of individual i. The fitness attribute pa_1 of each individual is described by IFSs, and the conversion equation is as follows, where prefit is a matrix, prefit_{i,k} represents the fitness of individual i predicted by surrogate model k, minprefit represents the smallest element in prefit, fit is a vector, fit_i represents the synthetic fitness (calculated by Eq. (3)) of individual i, minfit represents the smallest element in fit, and e is the natural constant.
In this paper, two types of individual fitness information are extracted as individual fitness attributes, the average fitness of each surrogate model and the synthetic fitness. where μ i ( pa 1 ), π i ( pa 1 ), γ i ( pa 1 ) represent the degree of membership, hesitation, and nonmembership of the individual i on the attribute pa 1 , respectively, maxlfit and minlfit represent the maximum and minimum values in lfit, respectively, maxlprefit represents the maximum value in lprefit, and 1/2 is to limit π i and γ i to [0,0.5] to ensure that μ i + π i + γ i 1 holds true.The 1/2 below has the same meaning. Equation ( 15) represents the degree of support that individual i is the promising point.The higher the value is, the greater the degree of support that individual i is a promising individual, and vice versa.Equation ( 16) is a measure of the degree of hesitation that individual i is the promising point.As shown in Eq. ( 15), the larger π i is, the smaller μ i .Therefore, the larger Eq. ( 16) is, the less likely it is that individual i is the promising point, and vice versa.Equation (17) represents the degree of nonmembership.The higher the value is, the higher the opposition that individual i is the promising point, and vice versa. The regional reliability attribute pa 2 of each individual is described by IFSs, and the conversion equation is as follows. where μ i ( pa 2 ), π i ( pa 2 ), and γ i ( pa 2 ) represent the degree of membership, hesitation, and nonmembership of the reliability attribute of individual i, respectively, d is a vector, d i represents the Euclidean distance between individual i and its nearest evaluated point, d max represents the maximum value of all d, ne is a vector, ne(i) represents the number of evaluated points in the neighborhood of individual i, ne max represents the number of evaluated points in the neighborhood with the most evaluated points, and max{} in the outer layer is used to prevent the denominator from being zero. Equation (18) indicates that the more evaluated points in the neighborhood of the individual there are, the more reliable the individual and the higher the reliability of supporting the individual as a promising individual.Equation (20) indicates that the farther the individual is from the nearest evaluated point, the less reliable the individual is, and the higher the reliability of opposing the individual as a promising individual. Uncertain individual selection In this paper, the maximum fitness deviation ua 1 and regional unreliability ua 2 are selected as intuitionistic fuzzy decision attributes.These attributes are chosen for two reasons.First, the larger the fitness difference of an individual in each surrogate model is, the higher the uncertainty of the individual.Second, the farther from the evaluated point is individual is, the lower the degree of reliability of the individual.The maximum fitness deviation ua 1 of each individual on the surrogate model is described by an IFS, and the conversion equation is as follows. where maxprefit i and minprefit i represent the maximum and minimum fitness of individual i on all surrogate models, respectively.Equation ( 21) is used to calculate the maximum difference of individual i in the surrogate model. 
where μ_i(ua_1), π_i(ua_1), and γ_i(ua_1) represent the degree of membership, hesitation, and nonmembership of individual i on attribute ua_1, respectively, and maxprefit and minprefit represent the maximum and minimum fitness over all surrogate models and all individuals, respectively. Equation (22) uses the maximum difference of individual i across the surrogate models to describe the degree of membership of individual i as an uncertain individual: the greater the difference is, the more uncertain the individual, and vice versa. Equation (23) uses the magnitude of the fitness difference to express the hesitation that individual i is an uncertain individual. Equation (24) expresses the degree of opposition to the unreliability of the individual: the larger γ_i(ua_1) is, the higher the reliability of individual i, and vice versa.

The regional unreliability ua_2 of each individual is described by an IFS, and the conversion equation is as follows. The definitions of the parameters in Eq. (25) are consistent with those in Eq. (20), while the definitions of the parameters in Eq. (27) are consistent with those in Eq. (18). Both the regional reliability attribute and the regional unreliability attribute use two pieces of information: the distance to the nearest evaluated point and the number of evaluated points in the neighborhood. The more evaluated points there are in the individual's neighborhood and the closer the individual is to them, the higher the regional reliability, which is the opposite of the regional unreliability. Therefore, the membership and nonmembership calculations of reliability and unreliability are opposite to each other.

Through the above conversion equations, the information contained in the individuals is expressed by intuitionistic fuzzy sets, and the promising and uncertain individuals are determined. The corresponding pseudocode is given in Algorithm 1.

Algorithm 1
Input: all individuals, weights of all decision attributes
1: for i = 1 to all individuals do
2:   use Eqs. (15)-(17) to form the membership, hesitation, and nonmembership degrees of attribute pa1 of individual i
3:   use Eqs. (18)-(20) to form the membership, hesitation, and nonmembership degrees of attribute pa2 of individual i
4: end for
5: execute Eqs. (8)-(11) to select the individual with the largest ξ as the promising individual promising_point
6: for i = 1 to all individuals do
7:   use Eqs. (22)-(24) to form the membership, hesitation, and nonmembership degrees of attribute ua1 of individual i
8:   use Eqs. (25)-(27) to form the membership, hesitation, and nonmembership degrees of attribute ua2 of individual i
9: end for
10: execute Eqs. (8)-(11) to select the individual with the largest ξ as the uncertain individual uncertain_point

RESAPSO process

The PSO, BES, IFMADM, and intuitionistic fuzzification methods are described in detail above. The remaining details of RESAPSO are as follows. The initial model is constructed using Latin hypercube sampling (LHS), which is commonly used in SAEAs. The kriging model, radial basis function neural network, and polynomial regression are used as surrogate models. As mentioned above, the algorithmic pseudocode is given in Algorithm 2.
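As a compact illustration of Steps 3-5 and of the selection performed in Algorithm 1, the sketch below ranks candidates by their relative closeness to the ideal plan. The Hamming distance and the closeness formula follow common IFS definitions and merely stand in for the paper's own distance and closeness equations, whose exact form is not reproduced here.

```python
import numpy as np

def hamming(a, b):
    """Intuitionistic fuzzy Hamming distance between two IFS elements (mu, pi, gamma)."""
    return 0.5 * (abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2]))

def closeness(ifs):
    """Relative closeness xi to the ideal plan G = (1, 0, 0) vs. the negative ideal B = (0, 0, 1)."""
    G, B = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    d_pos, d_neg = hamming(ifs, G), hamming(ifs, B)
    return d_neg / (d_pos + d_neg + 1e-12)

def select_point(candidates):
    """candidates: list of (individual, aggregated_ifs); returns the highest-ranked individual."""
    scores = [closeness(ifs) for _, ifs in candidates]
    return candidates[int(np.argmax(scores))][0]
```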
Neighborhood selection

The neighborhood has two main functions. The first is to select the evaluated points used to estimate fitness. In this case, if the neighborhood is too large, evaluated points that are too far away provide useless surrogate-model fitting information, while if the neighborhood is too small, there will be no evaluated points in the neighborhood to provide fitting information. The second is to calculate the fitting accuracy of the surrogate models at the evaluated points. In this case, a neighborhood that is too large will cause wrong fitting information to be learned, while a neighborhood that is too small will cause the evaluated points to remain unupdated for a long time. Therefore, the neighborhood range should change with the dimension, and the following equation should be satisfied, where dim represents the dimension, ub_i represents the upper bound of the i-th dimension of the search space, lb_i represents the lower bound of the i-th dimension, and neig_i represents the neighborhood size in the i-th dimension. The left part of Eq. (28) represents the hypervolume of the search space, the cumulative product on the right represents the hypervolume of the neighborhood, and 5 × dim on the right represents the number of initial sampling points. If the initial sampling points are evenly distributed, the above equation allows each individual to have an evaluated point in its neighborhood. In the case of a nonuniform distribution, some individuals can be affected by multiple evaluated points, thereby improving the estimation accuracy.

The following equation can be obtained by transforming Eq. (28). In Eq. (29), x_i represents the ratio of the maximum length of the i-th dimension to the neighborhood size in that dimension. Assuming that the shape of the neighborhood is a proportional reduction of the search space, that is, x_1 = x_2 = ... = x_dim = x, the following equation can be used to calculate the neighborhood radius. The 1/2 in Eq. (30) expresses the neighborhood in the form of a radius.

BES and IFMADM strategy performance tests

The experiment in this section verifies the effectiveness of the proposed BES and IFMADM by executing RESAPSO on different benchmark functions. The benchmark functions are shown in Table 2. RESAPSO is executed independently 20 times on each benchmark function. Each execution of RESAPSO outputs the fitting results of the kriging model, polynomial regression model, radial basis function neural network model, and BES, and the average of the 20 fitting results is taken as the final result. In addition, the 20 test results are sorted from smallest to largest according to the best fitness found, and all the evaluated points of the run corresponding to the median are taken as the experimental results for IFMADM.

The parameter settings of RESAPSO are as follows. The optimizer (PSO) parameters are: population size popsize = 100, maximum number of iterations maxiter = 500, c_1 = c_2 = 2, and w = 1.2 - 0.8 × (t/maxiter), where t represents the current iteration number. The initial number of LHS samples is 5 × dim, where dim represents the dimension of the problem. The neighborhood parameters required by BES can be calculated using Eq. (30). The surrogate models used in this paper include the kriging model, radial basis function neural network, and polynomial regression model, which are implemented using the SURROGATES toolbox [51]. In Fig.
3, predict represents the model obtained using BES, while real represents the real objective function.KRG represents the kriging model, and PR represents the polynomial regression model.In addition, RBF represents the radial basis function neural network, and the red asterisk represents the evaluated point.From the analysis of the above four functions, it can be found that for different objective functions, different surrogate models have their advantages and disadvantages.The best model on Ackley is the RBFNN model, and the worst is the kriging model.The worst model on Rastrigin is the RBFNN model, and the best is the PR model.On Griewank, because there are few evaluated points, it is impossible to fit such a complicated model, so the fitting effect is generally poor.The best model on Schwefel is the RBF model, and the worst is the PR model.In particular, the best fitting effect is the kriging model in the interval [40,60] of the Schwefel function.Therefore, to obtain better fitness prediction results, the algorithm needs to adaptively adjust the approximation of each surrogate model to different objective functions or even in different regions and use the approximation to synthesize the estimated fitness. From the above 4 functions, it can be seen that the various surrogate models are better balanced when BES is implemented.On Ackley and Griewank, although there is a large deviation from the real objective function using the kriging model, with the help of BES, RESAPSO can correctly identify the degree of approximation between each surrogate model and the real objective function.Thus, the kriging model does not have a significant impact on the estimated results.RESAPSO has been completely approximated to the real objective function in the interval [− 100,60] of the Rastrigin function.A better approximation degree is obtained using RESAPSO compared with any surrogate model in the intervals [− 100, 70], [− 40, 20], and [65, 90] of the Schwefel function. In addition, Fig. 3 shows that different surrogate models have different characteristics.The kriging model is easy to overfit.For example, the curve amplitude of the kriging model on the Ackley function is larger; thus, this model is more suitable for a complex multimodal function.The PR model has difficulty fitting curves with large changes, such as the Griewank and Schwefel functions.This model is suitable for a unimodal function with little change.The RBF model is more balanced, and good performance is obtained, which is one of the reasons why most SAEAs use RBF as the surrogate model. In addition, from the evaluated points depicted in the above four functions, it can be found that the distribution of evaluated points is relatively uniform, and clustering occurs only at extreme points.This is because a promising individual and an uncertain individual are selected in each decision.This measure not only ensures the exploratory ability of RESAPSO but also allows the regions where the global optimum may exist to be fully exploited. 
Experimental design The experimental platform is MATLAB 2018a based on the Windows 10 64-bit system, and the CPU, with 16 GB memory, is AMD Ryzen 7 5800X.The comparative experiment comprehensively examines the performance of RESAPSO from three aspects, accuracy, convergence speed, and stability.The benchmark functions are CEC2015 10D and CEC2017 10D, and their search spaces are both [− 100,100] D [52,53].The maximum number of real function evaluations (FEs) is 11 × dim, and each algorithm runs independently 20 times.In this paper, the following SAEAs are selected as comparison algorithms: CAL-SAPSO [17], which also uses PSO as the optimizer and RBFNN, PR, and the kriging model as surrogate models; SA-COSO [12] for high-dimensional and computationally expensive problems; [40], which mainly improves the search strategy; and ESAO [54] and SA-MPSO [55], which have been proposed in recent years.The parameter settings of the comparison algorithm follow those of the original paper.The RESAPSO parameter settings are described in the "BES and IFMADM strategy performance test" section. In addition, to compare the performance of each algorithm more comprehensively, Friedman ranking is introduced in the experiment.Friedman ranking is used to obtain the average ranking of each algorithm on all functions according to the ranking of each algorithm in each function.Therefore, the smaller the score is, the better the overall performance of the algorithm.The Wilcoxon signed rank test with a significance level of 0.05 is used.The statistical results on CEC2015 and CEC2017 are shown in Tables 2 and 3, respectively. Table 3 shows the mean and standard deviation of each algorithm on the CEC2015 10D.In Table 3, Friedman represents the Friedman ranking, and U/M/H/C represents the number of functions ranked first for each algorithm for the unimodal, multimodal, hybrid, and composition functions, respectively.+ , −, and indicate that RESAPSO is significantly better than the comparison algorithm, significantly inferior to the comparison algorithm, and has no significant difference from the comparison algorithm, respectively.Table 3 shows that RESAPSO ranks first among the 13 functions with respect to the mean value.SHPSO, CAL-SAPSO, SA-COSO, ESAO, and SA-MPSO rank first for 3, 2, 2, 0, and 1 functions, respectively.RESAPSO is significantly better than the other algorithms for unimodal functions, approximately 10 5 better than the other algorithms in terms of F1, and approximately 10 6 better than the other algorithms in terms of F2.There are two reasons for these results.First, for each IFMADM, one promising individual and one uncertain individual will be selected.When the objective function is a unimodal function, half of the computing resources are used to exploit the global optimum.Second, IFMADM can be used to select promising individuals more efficiently to improve the accuracy of local exploitation.For multimodal functions (F3-F9), RESAPSO outperforms the other algorithms for a total of 4 multimodal functions.This is because IFMADM can comprehensively evaluate all individuals based on the search history information of the PSO and obtain the most uncertain individual, thereby improving the fitting ability of each surrogate model and then improving the global optimization ability of RESAPSO.For the hybrid function, RESAPSO, SHPSO, and SA-MPSO each rank first for one function.SHPSO, CAL-SAPSO, and SA-COSO each rank first for a composition function.In addition, RESAPSO's Friedman rank is 2.4, which proves that 
RESAPSO has a better overall performance. Table 4 shows the mean and standard deviation of each algorithm on CEC2017 10D.It can be seen from the table that RESAPSO ranks first for 12 functions.SHPSO, CAL-SAPSO, SA-COSO, ESAO, and SA-MPSO rank first for 4, 4, 0, 1, and 9 functions, respectively.For the unimodal functions (F1-F3), the mean value using RESAPSO is more than 10 5 better than that of the other algorithms for F1.Benefiting from the expression and discernibility of intuitionistic fuzzy sets on uncertain problems, RESAPSO can be used to analyze the region where the extrema point is most likely to exist.Therefore, better results can be achieved using RESAPSO for a unimodal function with only one extreme point.For the multimodal functions (F4-F10), RESAPSO ranks first among the three functions and performed better than the other algorithms.This proves that IFMADM can be used to accurately select unexplored regions.Thus, the surrogate model can have a better global fit, improving the exploration ability of RESAPSO.RESAPSO has the best overall performance on the hybrid functions (F11-F20), ranking first for 6 functions.This is because the hybrid function is more complicated.Better fitting results cannot be obtained on a unimodal function and a multimodal function at the same time using a single surrogate model.SAEA based on a multimodel strategy cannot be used to effectively distinguish the fitting degree from different surrogate models in different neighborhoods.Therefore, better performance cannot be achieved.RESAPSO uses BES to dynamically and adaptively analyze the fit of each model in the region, thereby improving the reliability of RESAPSO's estimation of individual fitness.Therefore, RESAPSO has a greater advantage with respect to hybrid functions.RESAPSO is slightly inferior to SA-MPSO with respect to the combination function and ranks second.In summary, RESAPSO has more advantages for most functions. In addition, RESAPSO ranks first in terms of the Friedman rankings on CEC2015 and CEC2017, with scores of 2.4 and 1.8, respectively, which shows that RESAPSO has better solution accuracy and optimization efficiency than the latest SAEAs and has stronger comprehensive optimization capabilities. Convergence curve comparison Although the convergence accuracy can reflect the searching ability of each algorithm on each benchmark function, the It can be seen from Fig. 4 that for functions F1, F4-F9, F12-F15, and F18, there is a relatively obvious rapid decline process in the later stage of RESAPSO operation.There are two reasons for this situation.First, in the early stage of RESAPSO, the uncertain individuals selected by IFMADM explore the search space better, so the selected promising individuals are closer to the global optimum.Second, the fitness of each individual can be more accurately estimated using BES, providing PSO with more reliable accuracy so that it can find the global optimum for a model that is more similar to the real objective function.Therefore, RESAPSO performs better on unimodal functions, multimodal functions, and hybrid functions that can be decomposed into unimodal functions and multimodal functions.On F27, F28, and F30, RESAPSO is better than the other algorithms.For functions F22 and F25, RESAPSO also has a faster search speed in the later stage.Hence, the results of the algorithms for functions F23 and F29 are similar. 
The simulation experiment of the above convergence curve shows that RESAPSO can maintain high convergence accuracy for all 30 functions.Compared with the comparison algorithm for the unimodal, multimodal, and hybrid functions, RESAPSO has a rapid convergence ability.Even for more complex combined functions, RESAPSO can still maintain a continuous and stable convergence rate. Convergence stability comparison In multiple independent tests, the minimum, median, maximum, upper quantile, and lower quantile are simultaneously considered in a boxplot, and outliers are identified quickly.Thus, compared with the standard deviation, a boxplot can describe the overall stability of an algorithm in more detail and display it more intuitively in the form of a graph.Therefore, a boxplot based on the CEC2017 10D data is drawn to analyze the stability of each algorithm. Figure 5 shows the boxplots of the experimental results of all algorithms on CEC2017 10D.For the unimodal functions, the stability and accuracy of RESAPSO on function F1 are significantly better than those of the other algorithms.On F2, SHPSO performs best, and the accuracy of RESAPSO on F3 is similar to that of SHPSO.This proves the effectiveness of RESAPSO in terms of the local search.The reason is that the proposed fuzzy information extraction method enables IFMADM to effectively distinguish promising individuals in the population, thereby improving the local development capabilities of RESAPSO.For multimodal functions, RESAPSO has better accuracy and stability for F8-F10.Although the accuracy values for F6 and F7 using RESAPSO are not as good as those using SA-MPSO and CAL-SAPSO, the equal accuracy is better.For hybrid functions, the stability and accuracy values of RESAPSO for functions F12-F15, F18, and F19 are better than those of the other algorithms.This is because a hybrid function is more complicated.RESAPSO can better identify the fit of each surrogate model in the neighborhood through BES and provide a more reliable estimate for IFMADM.IFMADM is used to analyze the fuzzy information of individuals to select promising individuals and uncertain individuals for local development and global exploration of the search space to achieve better optimization results.Among all the composition functions, RESAPSO has better stability in general.Although the optimal results for F29 using RESAPSO are not as good as those using SHPSO, RESAPSO has better stability.Although the stability of RESAPSO for F30 is slightly worse than the other algorithms, its accuracy is better. Combining the boxplot results of the 30 benchmark functions, better stability can be maintained for the multimodal function and the composition function using RESAPSO.For the unimodal function and the hybrid function, better stability and higher accuracy are obtained using RESAPSO. 
RESAPSO for feature selection and hyperparameter optimization in landslide displacement prediction

China is a region with frequent geological disasters. In 2021, there were 4772 geological disasters in China, of which 2335 were landslides, and the economic loss reached 3 billion yuan. The Three Gorges area is an important economic hub in the middle and lower reaches of the Yangtze River in China. The abundant rainfall and high reservoir water level also make this area landslide-prone. Along with landslides, a large amount of loess, sediment, and vegetation is poured into the river, blocking the river's course, raising the water level, and eroding the slope, thus increasing the landslide displacement [56]. During the dry season, the landslide displacement rate is slow. The transition between rainy and dry seasons causes the formation of step-like landslides, which can easily cause landslide disasters.

Through memory units, LSTM neural networks can better maintain the feature dependence among long sequences and are thus suitable for modeling complex nonlinear systems such as landslides, which are strongly influenced by time. However, the influencing factors of landslides are complex and diverse, and both the selected influencing factors and the hyperparameters of the LSTM affect the prediction accuracy of the model. Therefore, this paper uses RESAPSO for feature selection and LSTM hyperparameter optimization to improve the effectiveness of landslide displacement prediction.

The deep learning environment configuration in the experiment is as follows: Python 3.8 and Keras 2.0.8; the LSTM neural network is designed based on the Keras library. The Baijiabao landslide [57] is located on the right bank of the Xiangxi River, a first-class tributary of the Yangtze River in the Three Gorges reservoir area (110°45′33.4″E, 30°58′59.9″N), 2.5 km from the mouth of the Xiangxi River. The landslide has an average width of about 400 m, a longitudinal length of about 550 m, an average thickness of 45 m, an area of 2.2 × 10^5 m^2, and a volume of 9.9 × 10^6 m^3. The engineering geological plan of the Baijiabao landslide is shown in Fig. 6. Monitoring point ZG323 records the local monthly displacement data. At the same time, we obtained the daily rainfall and daily reservoir level from the nearby reservoir. In this paper, the monthly reservoir water level is obtained by taking the average of the daily reservoir level. The monthly rainfall, reservoir level, and displacement data of ZG323 are plotted in Fig. 7.
Figure 7 shows the rainfall, reservoir water level, and displacement curves from March 2007 to September 2018, where the histogram represents rainfall, the blue dashed line represents the reservoir water level, and the red solid line represents the cumulative displacement. It can be seen from the figure that the water level of the reservoir decreases significantly during the rainy season and rises significantly when the rainy season ends. This is because the rainfall process has an obvious hysteresis effect: whenever the rainy season comes, the reservoir releases water in advance, reducing the reservoir water level to a lower level to prevent it from breaking through the warning level. In addition, during the rainy season the displacement curve grows over a large range, which shows that rainfall has a large impact on landslide displacement. The opposite is true for the reservoir level: landslide displacement accelerates when the reservoir level drops. However, rainfall and reservoir level usually have hysteresis effects on landslide displacement, so it is necessary to select appropriate features to avoid the model learning useless and redundant features. Therefore, this paper uses the reservoir water level and rainfall as the training features of the model. The landslide data are available monthly, and the predicted value is the cumulative landslide displacement. In order to better train the model, this paper extracts a variety of features from the reservoir water level and rainfall. After extraction, the feature set contains a total of 10 features, and each feature and its description are shown in Table 5. The hyperparameters of the LSTM include the time step, the number of units, and the input gate activation function. For features, there are only two possibilities, selected and not selected. Thus, the features can be converted into a binary form, and the 10 features can be encoded as two binary strings of length 5 (a small decoding sketch is given below).
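A minimal sketch of this binary feature encoding (the helper name and the assumption that each variable is an integer in [0, 31] decoded most-significant bit first are ours, not the paper's):

```python
FEATURES = ["A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8", "A9", "A10"]

def decode_feature_mask(theta1: int, theta2: int) -> list[str]:
    """Decode two 5-bit integers (0-31) into the subset of selected features."""
    bits = f"{theta1:05b}" + f"{theta2:05b}"   # 10 bits, one per feature in Table 5
    return [name for name, bit in zip(FEATURES, bits) if bit == "1"]

# Example: select A1, A3, A5 from the first group and A8, A9, A10 from the second.
print(decode_feature_mask(0b10101, 0b00111))
# -> ['A1', 'A3', 'A5', 'A8', 'A9', 'A10']
```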
Since rainfall and reservoir levels are recorded on a daily basis, the interval size of the input data along the time sequence is adjusted by the time step to prevent the influence of random noise on the model. There are three optional activation functions, namely the sigmoid, tanh, and ReLU functions. Therefore, the search space is 5-dimensional and constitutes the objective function to be minimized (a sketch of such an evaluation is given below). To test our algorithm, we compare it with the current mainstream methods, including hand-tuned LSTM, PSO, and Bayesian optimization (BO). Since the grid method requires manual setting of the grid width, it is not included. For PSO, we test both 100 and 500 FEs to verify the performance of our algorithm. In PSO, the following common parameter settings are adopted [58]: c_1 = c_2 = 1.49445 and w = 0.9 − 0.5 × (t/maxiter), where maxiter represents the maximum number of iterations. The RESAPSO parameter settings are described in the "BES and IFMADM strategy performance test" section. There are 100 FEs for BO and RESAPSO. The LSTM model considers all training features, i.e., θ_1 = 32 and θ_2 = 32, and its hyperparameters are θ_3 = 5, θ_4 = 1, and θ_5 = 1. For training the LSTM, the maximum number of training epochs is 150. To prevent random interference from the heuristic algorithms, each algorithm except the hand-tuned LSTM uses the average of 10 independent tests as the statistical result. In addition, to prevent random interference during LSTM training, each optimal solution Θ_best constructs 10 networks that are trained independently, and the average of these 10 network losses is used as the fitness of the optimal solution Θ_best. The Wilcoxon signed-rank test with a significance level of 0.05 is used. The final statistical results are shown in Table 6. Note that the standard deviation of the LSTM model in Table 6 is the result of ten training sessions under one set of hyperparameters, while for the remaining models it is the result of ten sets of hyperparameters optimized by RESAPSO or PSO.

The "+" symbol in Table 6 indicates that RESAPSO is significantly better than the comparison algorithm, and the other symbol indicates that there is no significant difference between RESAPSO and the comparison algorithm. PSO-LSTM (100) and PSO-LSTM (500) indicate that the FEs of PSO are 100 and 500, respectively, and LSTM represents an LSTM model that does not use any optimization algorithm. Table 6 shows that the prediction accuracy of RESAPSO is better than that of PSO (100), PSO (500), Bayesian optimization, and manual tuning. In addition, the prediction accuracy of RESAPSO is significantly better than that of PSO (100), while there is no significant difference between the accuracy values obtained using PSO (500) and BO. This means that, in the feature selection and hyperparameter optimization of landslide displacement prediction, RESAPSO and 500 iterations of PSO have the same ability. In summary, RESAPSO only needs one-fifth of the time of PSO to achieve the same prediction accuracy, which greatly reduces the optimization time while maintaining the prediction accuracy. The landslide displacement prediction results are shown in Fig. 9.
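A hedged sketch of how such a 5-dimensional candidate solution could be evaluated is given below. The unit choices, the use of the final training loss as the score, the bit convention for θ_1 and θ_2, and the use of tensorflow.keras in place of standalone Keras are all assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

ACTIVATIONS = ["sigmoid", "tanh", "relu"]   # theta5 indexes this list (assumed order)
UNIT_CHOICES = [16, 32, 64, 128]            # assumed candidate unit counts for theta4

def objective(theta, X, y, n_repeats=10, epochs=150):
    """Average training loss of n_repeats LSTMs built from candidate theta.

    theta = (theta1, theta2, theta3, theta4, theta5):
      theta1, theta2 -- 5-bit integers encoding the selected features
      theta3         -- time step (length of the input window)
      theta4         -- index into UNIT_CHOICES
      theta5         -- index into ACTIVATIONS
    X is the (months x 10) feature matrix, y the cumulative displacement.
    """
    t1, t2, step, u_idx, a_idx = (int(round(v)) for v in theta)
    mask = [bit == "1" for bit in f"{t1:05b}" + f"{t2:05b}"]
    Xs = X[:, mask]

    # Sliding windows of length `step`, aligned with the next month's displacement.
    windows = np.stack([Xs[i:i + step] for i in range(len(Xs) - step)])
    targets = y[step:]

    losses = []
    for _ in range(n_repeats):
        model = Sequential([
            LSTM(UNIT_CHOICES[u_idx], activation=ACTIVATIONS[a_idx],
                 input_shape=(step, Xs.shape[1])),
            Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        history = model.fit(windows, targets, epochs=epochs, verbose=0)
        losses.append(history.history["loss"][-1])
    return float(np.mean(losses))
```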
In Fig. 9, in order to make the comparison between algorithms clearer, the regions with large gaps between the algorithms are enlarged. As shown in the figure, in the Three Gorges area, every year from August to February of the following year is the non-rainy season; the landslide displacement at this stage is in a gentle creep period, and all models perform well in predicting the landslide displacement at this stage. During the rainy season from March to July each year, the landslide displacement is more intense and presents a "step-like" pattern, which prediction models usually find difficult to predict accurately. The prediction of the step-like period is the most critical part of landslide displacement prediction, because the drastic displacement change is likely to cause a landslide disaster. As can be seen from the figure, in the maximum displacement interval [980-1100 mm] from 2016-3 to 2016-7, RESAPSO-LSTM is closest to the actual measured values compared with the other algorithms. From 2017-3 to 2017-7, it performed slightly worse. In the minimum displacement interval [1200-1250 mm] from 2018-3 to 2018-7, the RESAPSO-LSTM prediction is the closest to the actual displacement, while PSO500-LSTM and LSTM predicted a large fluctuation in displacement. The above shows that the RESAPSO-LSTM model can accurately predict both larger and smaller displacement changes during landslide step-like periods.

Conclusion

Aiming at computationally expensive problems, this paper proposes a reliability-enhanced surrogate-assisted particle swarm optimization algorithm, RESAPSO. To solve the weight-adaptation problem of the multi-surrogate model ensemble strategy, this paper considers the influence of the neighborhood on the accuracy of the surrogate model and proposes a Bayesian evaluation strategy that uses the posterior probability as the confidence of each surrogate model. By taking the confidences of the evaluated points in the neighborhood as the sample points, Bayes' theorem is used to calculate the posterior probability, which is used as the confidence of the surrogate model in the neighborhood, so as to reasonably integrate multiple surrogate models and thereby improve the fitness prediction accuracy. In addition, in order to more reasonably select promising points and uncertain points, this paper uses intuitionistic fuzzy sets to extract multi-dimensional decision attributes from individuals and uses IFMADM to select promising points and uncertain points.

SHPSO is an SAEA that mainly improves the search strategy of PSO, whereas this paper mainly improves the model update strategy. The experimental comparison between RESAPSO and SHPSO shows that, compared with improving the search strategy, the model update strategy proposed in this paper is more effective. Similar to CAL-SAPSO, RESAPSO also uses PSO and the Kriging model, but this paper uses the multi-surrogate model method. The results of the experiment with CAL-SAPSO show the advantages of the multi-model strategy. SA-COSO is an SAEA for high-dimensional problems, while this paper mainly focuses on low-dimensional problems. Comparative experiments confirm that an SAEA for high-dimensional problems is not necessarily applicable to low-dimensional problems. ESAO and SA-MPSO are SAEAs proposed in recent years, and the experimental results show the advanced performance of our algorithm.
Experimental results on CEC2015 and CEC2017 show that RESAPSO is superior in accuracy, convergence speed, and stability and ranks first in the Friedman test. In addition, on F1 and F2 of CEC2015, RESAPSO outperformed the other algorithms by about 10^5 and 10^6, respectively; on CEC2017 F1, RESAPSO outperformed the other algorithms by about 10^5. On these functions, RESAPSO almost reaches the optimal value. In order to predict landslide displacement, the RESAPSO-LSTM model is established, which effectively solves landslide feature selection and the optimization of the LSTM hyperparameters, and uses fewer evaluations while improving the prediction accuracy. The experimental results show that RESAPSO-LSTM has the highest prediction accuracy compared with the comparison models, and the optimization time of RESAPSO is about one-fifth that of PSO. The RESAPSO-LSTM model can follow both large and small displacement changes in time during the step-like abrupt deformation period of a landslide, providing a more effective prediction method for the risk warning of a landslide in a severe deformation period.

As a landslide is a very complex nonlinear dynamic system, in addition to rainfall and reservoir water level, landslides are also affected by different internal and external factors, such as soil water content, groundwater, vegetation coverage, clay properties, and geological structure. In addition, landslides are affected by many complex random factors, such as human engineering activities and extreme weather, showing very complex nonlinear evolution characteristics. At present, due to the lack of detailed monitoring data for these factors, this paper only studies the influence of rainfall and reservoir water level, as the main trigger factors, on landslide displacement. As the number of monitorable factors increases, the landslide displacement prediction problem will become an expensive computational problem with high dimensionality, which will limit the application of Kriging models. Therefore, in future work, the RBFNN and PR models, which are fast to compute, will be considered for fitting the global trend, while the Kriging model, which is slower but more accurate, will be used for accurate local fitting, so that landslide disaster prediction in complex environments can be addressed with massive, high-dimensional, multi-feature data fusion.

[Figure captions: Fig. 4 (continued); Fig. 5 (continued); Fig. 6 Engineering geological plan of the Baijiabao landslide; Fig. 7 Monthly rainfall, reservoir level, and displacement of ZG323; Fig. 9 Landslide displacement prediction results. In Fig. 6, the red line represents the landslide boundary and the yellow triangle represents the installation location of the detector; the sensing data from the location of ZG323 are used.]
[Table 5 features, continued: A5 Average reservoir water level in the month; A6 Average reservoir water level in the previous month; A7 Average reservoir water level in the first two months; A8 Reservoir change in the month; A9 Reservoir change in the previous month; A10 Reservoir changes in the first two months.]
Fig. 9 Landslide displacement prediction results.

The main contributions of this paper are as follows.

a. Based on the surrogate model, a new surrogate-assisted evolutionary algorithm, RESAPSO, is designed to meet the needs of computationally expensive problems. Using RESAPSO, we search for the best subset of landslide features while optimizing the hyperparameters of the LSTM network and apply this model to the landslide displacement prediction problem.

b. A Bayesian evaluation strategy (BES) is designed based on Bayes' theorem. BES is an individual fitness evaluation strategy suitable for multi-surrogate models. It uses the error between each surrogate model and the real function in the local region to estimate the confidence of each model in that region and adjusts the weight of each surrogate model adaptively. Specifically, the error between the predicted fitness of each surrogate model and the true fitness is calculated and represented in the form of probabilities that sum to 1. Then, the multiple surrogate models are integrated using these probabilities as their local weights.

c. Intuitionistic fuzzy multi-attribute decision-making (IFMADM) is used to select appropriate individuals from the population to update the surrogate model. Due to the error between the surrogate model and the true objective function, the fitness of each individual is imprecise. Therefore, intuitionistic fuzzy sets [19] (IFSs) are used to deal with the fuzzy information carried by individuals. In addition to individual fitness, the local accuracy of the surrogate model is considered as one of the decision attributes. Therefore, based on IFMADM, both individual fitness and credibility are considered when selecting promising and uncertain points. By using IFSs to address the problem of imprecise individual information on the surrogate model and using IFMADM to consider the fitness and local credibility of the individual at the same time, the promising individuals and the uncertain individuals are selected more accurately to improve the accuracy of the surrogate model.

d. RESAPSO is compared with a variety of SAEAs on well-known benchmark functions to prove the advantages of RESAPSO in terms of its optimization capabilities.

Table 1 Characteristics of each algorithm: [27] Satyazdi et al. used the differential evolution algorithm to search the hyperparameters of the LSTM neural network, including the batch size, drop rate, the number of neurons per layer, etc., and the optimized model was used to predict the stock market. SEODP (2020) [27]: Bai et al. designed a random mutation method to prevent the algorithm from falling into a local optimum while speeding up the search, and used it for the optimization of multi-layer networks and convolutional neural networks. Ji-Hoon Han (2020) [28]: Han et al. reduced the time to evaluate hyperparameters by adjusting the population size of the GA, and used the optimized model to diagnose motor faults. MadDE (2021) [29]: Biswas et al. designed a multi-adaptive strategy that can automatically adjust the control parameters and search mechanism of differential evolution (DE), thereby improving the global search capability of the algorithm.

Table 2 Benchmark functions. Table 3 The statistical results of the comparison on CEC2015 10D; the best values are highlighted in bold.
Table 5 Description of each characteristic.
Table 6 The statistical results of the algorithm comparison for predicting landslide displacement.
2023-03-24T15:32:41.480Z
2023-03-22T00:00:00.000
{ "year": 2023, "sha1": "3a6fbed85a5867b1f2a4d4078555b6582024cc16", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40747-023-01010-w.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "3d5bcddcfa7e017bbf131d2cc8071e6ee84580ab", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
256855677
pes2o/s2orc
v3-fos-license
“Sedentarisation” of transhumant pastoralists results in privatization of resources and soil fertility decline in West Africa's cotton belt Transhumant pastoralism is an ancient natural resource management system traditionally connecting ecosystems across north-south precipitation gradients in West Africa. As rural population grew, several governments in the region have promoted their settlement, i.e., the “sedentarisation” of nomadic pastoral peoples to avoid conflict over land use and access to resources with local sedentary populations. Former transhumant pastoralists settled down and started growing crops using the manure of their livestock. This led to the dwindling of traditional agreements and exchanges (manure against crop residues) between pastoralists and agriculturalists, that resulted in less nutrients flowing between livestock, food crops and the main cash crop in the region: cotton. As a consequence, soil fertility declined, grazing areas are overexploited, and crop production is increasingly dependent on mineral fertilizers, which are produced outside the region, exposing the livelihood of local farmers to the volatility of international (oil) markets. How do local farmers perceive the effect of this virtual “privatization” of natural resources? Is the production of cotton, a main agricultural export of west African countries, a viable option in this new situation? What does this imply for the research and policy agendas to support agricultural development? We explored these questions through engaging in discussion with farmers, herders and extension agents in three cotton growing zones of Benin. Introduction In the sub-humid savannahs of West and Central Africa, traditional pastoral systems are extensive, and based on the seasonal utilization of large grazing areas by traditional, nomadic pastoral peoples (Dongmo et al., 2012). Livestock systems in the region are still relying on transhumance during variable periods of the year, following north-south rainfall patterns (Lesse et al., 2015). As population increases and grazing resources become scarce, transhumance induces low livestock productivity due to the lack of nutritional quality of fodder and its spatiotemporal variability. This drives animal movements over long distances in search for better fodder quality and water resources (Eboh et al., 2008). Crop residue biomass is essential to complement livestock diets, covering up to 80% of dietary needs during the dry season (Delgado et al., 1998;Diogo et al., 2018). Traditionally, pastoralists exchanged crop residues for animal manure by corralling their animals on agricultural fields. This resulted in important biomass . transfers (fecal excretions) over large areas, which contributed to maintain soil fertility of cropping fields. This system has been however disrupted nowadays due to population growth, conflict over natural resources, and new settlement policies in several countries of the region. The expansion of cultivated areas has taken on considerable importance, driven by the demographic growth of indigenous populations, the arrival of migrants from other regions (including agro-pastoralists) and the promotion of cash crops, notably cotton (Ayantunde et al., 2011;Diarisso et al., 2015). Cotton production started in West Africa since the French colonial period (Bassett, 2006;Haas, 2021). Since then, cotton production increased quickly for over two decades beginning in the early 1960's (Speirs, 1991;ICAC, 2019). 
Cotton production plays an important role in the economy of West Africa (Soumaré et al., 2021). It represents nearly 30% of exports which contributes, in terms of value added, to 7% Gross Domestic Product (World Bank, 2016). Nowadays, Benin is the first cotton producing country in west Africa. During the last years, Benin promoted the use of external inputs such as mineral fertilizers and pesticides, which resulted in a 65% increase in production and an average 31% yield increase (DSA, 2021). This success correlated however with a generalized decline in soil fertility (Amonmide et al., 2019). As part of this process, Benin adopted a law to promote the settlement (or "sedentarisation, " as it is locally known) of transhumant herders, which contemplates development projects that included granting them access to land for farming. As pastoralists became also crop farmers, they started using the manure of their animals to fertilize their own fields, while still partly maintaining animals on crop residues from other farmers during the dry season. Strong competition for organic resources emerged as a consequence of pastoralists' sedentarisation . There is an urgency to transition toward more sustainable soil management practices in Sub-Saharan Africa, and particularly in Benin, where soils are extremely degraded ( Figure 1D). How do local farmers perceive soil fertility decline and its main causes? What importance do they ascribe to the virtual "privatization" of natural resources that took place after sedentarisation? Is cotton production, the main agricultural export of West African countries, a viable option in this new situation? What does this imply for research and policy programs aiming at supporting agricultural development? We explored these questions by engaging in discussions with farmers and extension agents in three cotton-growing areas of Benin (Kandi, Pehunco and Savalou- Figures 1A, B). Participatory workshops using a semi-structured questionnaire and graphical support allowed us to collect perceptions on the causes and consequences of soil fertility decline according to each type of actor ( Figure 2A). We delineated fuzzy cognitive maps together with them to describe the problem from a system perspective, and assigned weights to the relative importance they ascribed to the different factors, their causes and consequences ( Figure 2C) (cf. Aravindakshan et al., 2021). Multi-dimensional scaling followed by an ascending hierarchical classification based on relative importance scores allowed for the grouping of different perceptions according to actors and zones. A total of 126 actors participated in three participatory workshops at each of the three study areas. The workshops involved 14, 12, and 17 farmers; 13, 16, and 10 herders; and 17, 16, and 11 extension agents respectively in Kandi, Pehunco, and Savalou ( Figure 1C). Context: A new configuration of rural West Africa In the cotton-growing areas of West Africa, agriculture and livestock were typically practiced by different ethnic groups (Vall and Diallo, 2009). The adoption of livestock by other ethnic groups than the pastoralist Fulani and the mutation of pastoral systems began with the inter-ethnic peace restored during the colonial period (late 19 th and early 20 th century). In the 1950's, the introduction of livestock by cotton companies encouraged the first experiments in livestock rearing among traditionally crop farmers, who very quickly turned it into a secondary activity (Dongmo et al., 2012). 
In Benin, the practice of transhumance began with the southward migration of the Fulani people and continues to be practiced by them (Lesse et al., 2015). The farming system was characterized by the practice of fallow as a method of regenerating soil fertility. The nomadic pastoralists progressively settled down in order to, if not own land for living and agricultural activities, at least secure access to them. They have adopted cereal-based agriculture that relies on animal manure, hired labor, and herbicides, allowing them to diversify income and compensate for the decapitalization of livestock that is often experienced as a tragedy (Dongmo et al., 2012). For the past few decades, west African agriculture, and more specifically agro-pastoral systems, have been experiencing recurrent shocks that are increasingly strong and diverse under the effect of major global changes (Yapi-Gnaore et al., 2014;Diogo et al., 2021). New cropping systems have gradually been introduced, based on mechanized tillage for land preparation and crop maintenance, mechanized sowing, use of herbicides, use of fertilizers and organic manure for soil fertility maintenance, and the application of insecticide treatments for cotton pest control (Blanchard, 2010;Yemadje et al., 2022). These new systems expanded to the detriment of traditional livestock and slash-and-burn agricultural systems, which relied on the restoration of soil fertility through fallowing (Kintché et al., 2015). Gradual "sedentarisation" of herders means that they divided their herds into two or three smaller herds that were moved to different grazing sites by different family members. Herders return each year after transhumance to their former territories, now occupied by growing numbers of cotton farmers. Herders' movement toward ecosystems that have not yet been cleared from woody vegetation alternate with transhumance movements toward territories formerly occupied by them (Sounon et al., 2019). The coercive policies on land use, privatization, and sedentarisation, plus social exclusion and marginalization led to a progressive erosion of indigenous social and economic structures. The new sedentary pastoralists were able to diversify their activity portfolio by growing crops for food and the market, capitalizing on crop-livestock interactions on their own farm (Diarisso, 2015). But this sedendarisation was also perceived as a deprivation of resources by some herders (Diogo et al., 2021), because it implied the reduction of the mobility of the herd, which formerly gave access to natural national and international grazing areas. Multi-dimensional scaling of these factors followed by a hierarchical classification based on the relative importance assigned to them allowed grouping the different perceptions according to actors and zones. This classification resulted in three classes ( Figure 2B). Class 1 includes herders of Kandi and Pehunco and is characterized by the factor "Lack of organic matter input to soil" as the main factor in the decline of soil fertility. Class 2 includes farmers from the three zones, extension agents from Pehunco and Savalou, and herders from Savalou. This group identified "Tillage" as the main factor in the decline in soil fertility. The third class includes extension agents from Kandi who identified "Soil erosion" as the main factor in the decline of soil fertility. 
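A minimal, hypothetical sketch of this grouping step is given below; the group labels and scores are placeholders, not the study's data, and the exact distance and linkage choices used by the authors are not stated in the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

groups = ["Herders-Kandi", "Herders-Pehunco", "Herders-Savalou",
          "Farmers-Kandi", "Farmers-Pehunco", "Farmers-Savalou",
          "Agents-Kandi", "Agents-Pehunco", "Agents-Savalou"]
factors = ["Lack of organic matter input", "Tillage", "Soil erosion"]
scores = np.random.default_rng(2).random((len(groups), len(factors)))  # placeholder weights

# Project the actor-zone groups into two dimensions from their score dissimilarities.
coords = MDS(n_components=2, random_state=0).fit_transform(scores)

# Ascending (agglomerative) hierarchical classification into three classes.
classes = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
for group, cls in zip(groups, classes):
    print(f"{group}: class {cls}")
```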
Extension agents in Kandi and Pehunco produced a more complex description of the processes leading to soil fertility decline than crop or livestock farmers did ( Figure 2D). These results indicate different perceptions of soil fertility decline between crop and livestock farmers. While the former identified lack of organic matter inputs to soil (due to removal of crop residues, no or insufficient manure applications) as the main cause of soil fertility decline, livestock farmers identified soil tillage as the main factor. We did not observe any difference in perception between the different zones, which indicates that the problem is similar in the cotton zones of Benin. However, as indicated by the fuzzy cognitive maps (Figure 2D), the description of the process of soil fertility decline by all actors tended to be more complex in Kandi, where conflict over land is more acute. The decline in soil fertility has led to an increase in cultivated areas, a drastic decrease in the size of individual livestock herds ( Figure 1E) due to difficulties in accessing feed. Sedentarisation led to the disappearance of farms that specialize in a single agricultural activity and the emergence of mixed farms. Nowadays, livestock farmers increasingly produce crops and crop farmers keep livestock, exerting greater pressure on the natural . /fsufs. . resource base. Although none of the three types of participating actors pointed to pastoralists' sedentarisation as a main cause for soil fertility decline, the growing preponderance of mixed farms leads to high competition for manure, grazing areas and feed resources, which leads to less organic matter inputs to soil. Discussion The experience of farmers and their knowledge of the environment represent benchmarks for assessing soil fertility (Kambiré et al., 2022). Our study reveals a decrease in the size of cattle herds per individual due to the scarcity of fodder quality and quantity, the disappearance of specialized farms, a concentration of fertility on former pastoralists fields due to the mobility of their animals, and soil management factors as the main causes of soil fertility decline according to farmers, herders and extension agents. Favoring participatory approaches that integrate farmers' opinions in the assessment of alternatives to soil fertility management is a widely shared approach in the research and development community today. Tittonell and Giller (2013) compiled ample evidence on drivers of soil fertility decline in Sub-Saharan Africa, as the main threat to the food and nutritional security of smallholder farmers. As reported by these authors, farmers in our study zones have strongly linked soil fertility decline with frequent tillage and the lack of organic matter input to soils. Tillage changes soil structure, its porosity and the distribution of fresh organic matter restored or supplied (Bouthier et al., 2014). Also, tillage exposes soil to oxidation, increasing greenhouse gas emissions and loss of organic matter (Smith et al., 2011;Lognoul et al., 2017), and in Ferruginous soils it may lead to soil degradation through compaction and erosion, affecting water availability and long term crop yields (Smith et al., 2011;Lognoul et al., 2017;Yemadje et al., 2022). Failure to return organic matter to soil decreases soil fertility (Warren et al., 2015). Organic matter inputs to soil can be achieved by parking animals on the crop fields after collecting the crop residues, or through storing the residues in-situ by keeping the livestock away. 
But in the West African context, due to the development of animal traction and the diversification of farms, crop residues are increasingly used to feed livestock. For example, up to 90% of farmers' crop residues may be consumed by the livestock of (agro) pastoralists . Compensating for this removal by the return of animal dung is a trade-off that is struggling to take off because the manure available in the village is mainly used to fertilize the pastoralists' fields. As a result, the croplands of farmers with few livestock are continuously tapped for nutrients (Baudron et al., 2015;Diarisso, 2015). Livestock-mediated transfers of organic matter and nutrients from non-cultivated to cultivated areas has been historically one of the main determinants of soil fertility maintenance in semi-arid and sub-humid regions of West Africa, at least for farms with large . /fsufs. . numbers of cattle (Harris, 2002;Achard and Banoin, 2003;Schlecht et al., 2004;Diarisso et al., 2015). The sedentarisation process initiated by herders as an adaptation to global change, and legalized by policies, allowed peaceful agreements between crop and livestock farmers, which prevented long standing conflicts; but it does not necessarily guaranty the sustainable use of resources. By increasing the number of mixed crop-livestock farms in a territory, sedentarisation of herders led to strong competition for grazing and feeding resources, and to a shortage of animal manure to fertilize an ever expanding agricultural area. Such a result may be counterintuitive, as crop-livestock integration on farm is often proposed as a sustainable pathway for agricultural intensification (e.g., Tittonell et al., 2015;Martin et al., 2016;Paul et al., 2020). Manure contracts made by farmers with foreign pastoralists on transhumance are no longer possible, while sedentary livestock keepers carrying out a mixed activity on their farms will not necessarily be open to a manure contract. When a livestock keeper engages in crop production, the substitution rate between livestock and crops is low, that is, the cropping area increases without a concomitant proportional decline in livestock. This is because crop production is more intensive than livestock production, which relies mainly on natural grazing (Ayantunde et al., 2011). The result is higher pressure on the natural resource base at community level. Cotton, being a locomotive for the rural economy of West Africa, is not spared from the impacts of these changes and is cited by some authors as the cause of soil degradation due to its rapid expansion and nutrient requirements (Njomaha, 2003;Da et al., 2019). However, mineral fertilization of cotton has significant effects on the yields of subsequent food crops in the rotation, due to the residual effects of fertilizers, particularly on savanna soils that have been exploited and have low levels of organic matter (Ripoche et al., 2015). Fertilizers are provided by the cotton industry on credit, and these are often the only nutrient input to these farming systems annually. Several alternative sustainable management techniques have been tested in Sub-Saharan Africa to improve soil fertility, yields and enhance soil organic matter (Vanlauwe et al., 2010;Chivenge et al., 2022;Thierfelder and Mhlanga, 2022;Yemadje et al., 2022), yet their adoption by farmers remains limited. This situation calls for researchers and policy makers to find a solution that allows both agriculture and livestock to coexist in a sustainable manner. 
In the current socio-political context of West Africa, a return to cross-border transhumance may be more inconvenient than sedentarisation. Avoiding a new tragedy of the commons By threatening cotton cultivation, sedentarisation and the consequent unorganized expansion of mixed crop-livestock farms may have thus both direct and indirect negative impacts on local food security. In such a challenging climatic, socioeconomic and institutional context (uncertain and changing), cropping and livestock systems must be transformed to adapt and ensure their viability. Sedentarisation policies evolved as a response to conflict but without a parallel redesign of the agroecosystems, which resulted in a "tragedy of the commons" in terms of organic matter resources. Research and policy must find new compromises between sedentary pastoralism and agriculture, as livestock and crop production cannot be disconnected or conducted individually. Data availability statement The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
2023-02-15T16:17:15.175Z
2023-02-13T00:00:00.000
{ "year": 2023, "sha1": "a1cbd33e7dc47f4f0ad531ffb3406c3357ce19cd", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fsufs.2023.1120315/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "db21563f0efea906c4a341396e00ee7dddaba900", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
30391294
pes2o/s2orc
v3-fos-license
QCD thermodynamics with Wilson quarks at large kappa

We have extended our study of the high temperature transition with two flavors of Wilson quarks on 12^3 x 6 lattices to kappa=0.19. We have also performed spectrum calculations on 12^3 x 24 lattices at kappa=0.19 to find the physical lattice spacing and quark mass. At this value of kappa the transition is remarkable in that the plaquette and psi-bar-psi show a large discontinuity while the Polyakov loop changes very little. This and several other features of the transition are more suggestive of a bulk transition than a transition to a quark-gluon plasma. However, if the temperature is estimated using the rho mass as a standard, the result is about 150 MeV, in agreement with the value found for the thermal transition with Kogut-Susskind quarks.

I. INTRODUCTION

The study of high temperature QCD has proven to be much more difficult with the Wilson formulation of lattice quarks than with the Kogut-Susskind (K-S) formulation. Progress with Wilson quarks has been slow and frustrating. Nevertheless, it is important to continue these studies in order to determine whether the results of lattice simulations are independent of the regularization scheme used. The Wilson formulation provides a way to simulate two flavor QCD without step size errors in the algorithm and without the need to take the square root of the fermionic determinant. The price to be paid is that there is no remnant of the chiral symmetry left to protect the quark mass from additive renormalization, which leads to cumbersome and expensive fine tuning in the search for the chiral limit. In the first simulations of high temperature QCD with two flavors of Wilson quarks it was found that at the values of 6/g^2 for which most low temperature calculations were done, 4.5 <= 6/g^2 <= 5.7, the high temperature transition occurs at a value of the quark hopping parameter for which the pion mass measured at zero temperature is quite large [1,2]. In other words, it is difficult to find a set of parameters for which the temperature is the critical temperature and the quark mass is small. Further work confirmed that the pion mass is large at the deconfinement transition for this range of 6/g^2 [3,4]. (A recent study has concluded that for four time slices the chiral limit is reached at a very small value of 6/g^2 of about 3.9 [5].) Previous simulations with Wilson fermions have located kappa_t, the value of the hopping parameter at which the high temperature crossover or phase transition occurs, as a function of 6/g^2 for N_t = 4 and 6. The critical value of the hopping parameter, kappa_c, for which the pion mass vanishes at zero temperature has been located with somewhat less precision [1,2,6,7,3,4]. Some measurements of hadron masses have been carried out on zero temperature lattices for values of kappa and 6/g^2 close to the kappa_t curve, allowing one to set a scale for the temperature, and to estimate kappa_c in the vicinity of the thermal transition [3,4,9]. In a recent work with four time slices we have found that the transition or crossover is steepest for kappa near 0.19, becoming more gradual for larger or smaller kappa [10]. In our previous work at N_t = 6, we observed coexistence of the low and high temperature phases over long simulation times at kappa = 0.17 and 0.18. The change in the plaquette across the transition is much larger than for the high temperature transition with Kogut-Susskind quarks [4]. We have extended these observations in the present project.
First, we carried out a series of runs on 12^3 x 6 lattices at kappa = 0.19. Here our hope was to explore the high temperature transition at a smaller physical quark mass. This would mean a smaller pion to rho mass ratio at the crossover. (For both N_t = 4 and 6 the pi to rho mass ratio at the thermal crossover decreases as kappa increases.) We also made hadron spectrum measurements on 12^3 x 24 lattices at kappa = 0.19 so that we could find the physical lattice spacing at the transition. We have made a short set of runs at kappa = 0.20 in order to determine the location of the thermal transition for this value of kappa, and have carried out a series of runs on the high temperature side of the transition in the region 0.18 <= kappa <= 0.19 in order to obtain more information on the nature of the transition. Simulations were carried out using the hybrid Monte Carlo algorithm with two flavors of dynamical Wilson quarks [8]. The parameters of our new runs are listed in Table I. For the 12^3 x 6 lattices we used trajectories with a length of one unit of simulation time in the normalization of Ref. [3]. For the 12^3 x 24 lattices we used trajectories one half time unit in length. For reference we show a phase diagram for the relevant range of kappa and 6/g^2 in Fig. 1.

In our previous runs with six time slices we found strong metastability at kappa = 0.17 and 0.18. All of the thermodynamic quantities had large discontinuities at these points. However, at kappa = 0.19 the Polyakov loop does not change sharply at the transition, while the plaquette and psi-bar-psi do. This is shown in Fig. The different nature of the transition at kappa = 0.19 and 0.18 led us to ask whether there is a sharp change in behavior between these two points, i.e., an intersection of two phase transition lines in the (kappa, 6/g^2) plane. We therefore performed runs at kappa = 0.1825, 0.1850 and 0.1875 with 6/g^2 approximately 0.01 above the transition value.

FIG. 1. Squares represent the high temperature transition or crossover for N_t = 6 and diamonds the zero temperature kappa_c. Octagons are the high temperature crossover for N_t = 4. Previous N_t = 6 work included in this figure is from Refs. [7] and [4], and the N_t = 4 results are from Refs. [7], [3] and [6]. The smaller symbols for N_t = 6 and kappa_c are from older simulations with spatial size eight, and the darker square and diamond are the new results of this work. We show error bars where they are known. For series of runs done at fixed kappa the error bars are vertical, while for series done at fixed 6/g^2 the bars are horizontal. The plusses are a set of simulations just on the high temperature side of the N_t = 6 thermal transition, which are discussed later (Fig. 5).

The results of these runs, which were started from a hot start, are shown in Fig. 5. One sees that the value of the Polyakov loop decreases rather smoothly as kappa increases. Following the program of Ref. [4], we made zero temperature runs on 12^3 x 24 lattices at kappa = 0.19 to determine the physical quark mass and lattice spacing. It was not practical to run exactly at the transition, 6/g^2 = 4.80, because the quark matrix was extremely ill-conditioned. Instead, we ran at 6/g^2 = 4.77 and 4.79, values for which the simulation was far less costly in cpu time. In addition, a run on an 8^3 x 16 lattice at 6/g^2 = 4.76 was reported in Ref. [3]. The pion and rho masses, and the quark mass from the axial current, are tabulated in Table II. From extrapolating the axial current quark mass or the squared pion mass to zero we can estimate the zero temperature gauge coupling, (6/g^2)_c, at which kappa_c = 0.19.
Unfortunately, the results from these two quantities do not agree as well as they did at smaller kappa. From extrapolating the quark mass we find (6/g^2)_c = 4.916(16), while from extrapolating the squared pion mass we find (6/g^2)_c = 4.886(5), with chi^2 = 0.6 for one degree of freedom. Table III contains the meson masses at the thermal transition. Values for kappa = 0.16, 0.17 and 0.18 are from Ref. [4]. The errors in this table include the effects of uncertainty in the masses at a particular 6/g^2 and the effects of an uncertainty of 0.01 in the crossover value of 6/g^2 at each kappa. The uncertainty in (6/g^2)_t is the larger of these effects. At kappa = 0.16 we did not observe metastability, so the crossover value 5.41 is an estimate of where the physical quantities are changing most rapidly. Because m_pi and m_rho both decrease as kappa increases, the effect of the uncertainty in (6/g^2)_t on the error in m_pi/m_rho is smaller than a naive combination of the errors.

Fig. 1. The point at kappa = 0.18, taken right on the transition 6/g^2, is from the hot start run (Fig. 3).

Because the plaquette and psi-bar-psi are bulk quantities, while the Polyakov loop explicitly tests ordering in the imaginary time, or temperature, direction, it is tempting to speculate that we are seeing a zero temperature transition here. To test this hypothesis by running at larger N_t would be an expensive undertaking. In Fig. 1 it can be seen that at kappa = 0.17 the N_t = 6 thermal crossover is well separated from the N_t = 4 thermal crossover, and quite close to the estimated kappa_c. However, at kappa = 0.19 the N_t = 4 and 6 crossovers are relatively close together, and well separated from kappa_c. This can be dramatized by plotting the zero temperature m_pi^2 at the thermal crossover 6/g^2, shown in Fig. 6a. An intersection of the N_t = 4 and 6 lines on this graph would be equivalent to an intersection of the N_t = 4 and 6 lines in Fig. 1. Of course, we could equally well plot m_pi^2/m_rho^2, shown in Fig. 6b, where an intersection looks much less inevitable. An intersection of the N_t = 4 and 6 lines would be characteristic of a bulk transition, as would the vanishing of the gap in the Polyakov loop noted above. It is probably significant that the N_t = 4 transition is steepest for kappa near 0.19 [10].

FIG. 6. (a) Pion mass squared as a function of kappa at the thermal transition line. The squares are for N_t = 6 and octagons for N_t = 4. (b) The ratio (m_pi/m_rho)^2 at kappa_t as a function of kappa. Most of the error comes from the effect of our uncertainty in the location of the thermal crossover, here estimated at Delta(6/g^2) = 0.01.

In Ref. [4] we noted that the change in the plaquette across the N_t = 6 thermal transition was surprisingly large. This can be made quantitative from the non-perturbative free energy (or minus one times the pressure) obtained by integrating the plaquette over a range of couplings [13]. With our plaquette normalization,

∂(p a^4)/∂(6/g^2) = 2 (P_hot - P_cold),   (1)

where p is the pressure and P_hot and P_cold are the average plaquettes on the 12^3 x 6 and extrapolated 12^3 x 24 lattices, respectively.

FIG. 7. (a) The N_t = 4 (crosses), N_t = 6 (squares) and cold (octagons) values of the plaquette as a function of 6/g^2 at kappa = 0.19. The cold values are extrapolated to larger 6/g^2 (diamonds) to allow the estimation of the non-perturbative pressure. (b) Non-perturbative p/T^4 for N_t = 6 as a function of 6/g^2 at kappa = 0.19, assuming the linear extrapolation of the cold plaquettes. The dotted line gives the expected free quark-gluon gas value for two flavors of zero mass quarks.
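As a rough numerical illustration of how Eq. (1) would be integrated to obtain the pressure (the coupling and plaquette values below are invented placeholders, not the measured data of Fig. 7):

```python
import numpy as np

beta = np.array([4.70, 4.74, 4.78, 4.82, 4.86])       # 6/g^2 values
plaq_hot = np.array([0.52, 0.53, 0.57, 0.60, 0.61])   # 12^3 x 6 plaquettes (placeholder)
plaq_cold = np.array([0.51, 0.52, 0.53, 0.54, 0.55])  # extrapolated cold plaquettes (placeholder)

N_t = 6
# Eq. (1): d(p a^4)/d(6/g^2) = 2 (P_hot - P_cold); integrate from the lowest coupling.
integrand = 2.0 * (plaq_hot - plaq_cold)
p_a4 = np.concatenate(([0.0],
                       np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(beta))))

# Convert to p/T^4 with T = 1/(N_t a), i.e. p/T^4 = (p a^4) * N_t^4.
p_over_T4 = p_a4 * N_t**4
for b, p in zip(beta, p_over_T4):
    print(f"6/g^2 = {b:.2f}: p/T^4 = {p:.3f}")
```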
The vacuum subtraction is done in the usual way by subtracting the result of the zero temperature simulation. In Fig. 7 we show the plaquettes used in the integration. As discussed above, it was impractical to perform cold runs for 6/g^2 > 4.79. If we assume that the curve can be linearly extrapolated to somewhat larger 6/g^2 values, we can still make a crude approximation for the pressure across the transition. The extrapolated values of the symmetric plaquettes are shown in Fig. 7 with diamonds. The pressure is then the integral of the difference of the hot and cold plaquettes. Because of the large jump in the plaquette, the integral grows very fast. The Wilson pressure overshoots the ideal gas value by nearly an order of magnitude. Even taking into account the uncertainty in the extrapolation of our cold data points, this clearly demonstrates the huge change of the action in Wilson thermodynamics at large kappa. In the case of Kogut-Susskind fermions, or for smaller kappa with Wilson fermions, this procedure leads to a pressure comparable to the ideal gas result. In contrast, we note that the change in the plaquette across the transition in Fig. 7a is almost the same for N_t = 4 and 6, although there is a clear shift in the position of the transition. Again, this is what one would expect for a bulk transition, while for a thermal transition naive scaling would predict that the gap should decrease by a factor of (4/6)^4 in going from N_t = 4 to N_t = 6.

From the previous considerations it is clear that the thermal transition with Wilson quarks is still quite far from giving a trustworthy description of continuum physics. Remarkably, the critical temperature T_c in units of m_rho is consistent with the K-S quark simulation results when the temporal size is increased to N_t = 6. In Fig. 8 we display the current status from K-S studies and our Wilson simulations. While the Wilson results with N_t = 4 disagree with the K-S estimates, and offer little help in extrapolating to the physical regime, the N_t = 6 results nicely line up with those using K-S quarks.

IV. CONCLUSIONS

We have extended our work on Wilson thermodynamics for kappa = 0.19, finding disturbing and unexpected properties: the lack of a jump in the Polyakov loop while the other indicators of the transition have a clear signal, and the overshooting of the non-perturbative pressure. The lack of a jump in the Polyakov loop, the similarity in the jump in the plaquette at N_t = 4 and N_t = 6, and perhaps even our inability to perform spectrum calculations at 6/g^2 > 4.79 are more characteristic of a bulk transition than a transition to a quark-gluon plasma. Still, when the temperature of the crossover is estimated in a standard way the results are consistent with N_t = 6 work at smaller kappa and with results using Kogut-Susskind quarks. Much work remains to be done. At present, the Wilson and Kogut-Susskind formulations do not give us a concise and consistent picture of many aspects of the transition. We hope that future simulations will provide the answers to these questions.
2018-04-03T01:39:28.255Z
1994-04-12T00:00:00.000
{ "year": 1994, "sha1": "092ef6c4db0b8431ffdac39fbc9bc357e9948ccd", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "092ef6c4db0b8431ffdac39fbc9bc357e9948ccd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
189862342
pes2o/s2orc
v3-fos-license
Double direct hernia, triple indirect hernia, double Pantaloon hernia (Jammu, Kashmir and Ladakh Hernia) with anomalous inferior epigastric artery: Case report

Highlights
• First case report of its kind in the literature.
• A unique case with double direct and triple indirect type of inguinal hernia.
• Named the Jammu, Kashmir and Ladakh Hernia.
• The inferior epigastric artery may traverse an anomalous course in the inguinal canal.
• The presence of multiple hernia sacs, if undetected, is a risk for recurrence.

Introduction

The exact diagnosis of an inguinal hernia is usually made during its repair. A careful exploration of the groin is diagnostic in identifying multiple unilateral hernias [1,2]. An indirect inguinal hernia is considered to be of congenital origin. Knowing the peculiar type of hernia reduces the risk of inferior epigastric vessel injury and lowers the rate of recurrence [3]. Normally the sac with its contents lies anterior or antero-lateral to the cord. Direct inguinal hernias are acquired, and the sac protrudes through the posterior wall. A rare type of hernia exists, called the Pantaloon, saddle-bag, dual, or Romberg's hernia. This hernia is a combination of indirect and direct sacs on both sides of the IEA. In a Pantaloon hernia there is an obvious direct inguinal hernia and a small indirect type. A missed small indirect sac in a saddle-bag hernia is a common cause of recurrence [4]. The IEA, along with its vein, normally follows the usual course in the subperitoneal tissue; an aberrant course lying outside the peritoneum, deviating from the normal path, is rare. The congenital origin of inguinal hernia contributes to the occurrence of multiple hernia types in an inguinal hernia. The work has been reported in line with the SCARE 2018 criteria [5].

* Corresponding author at: Doctors Lane, Amira Kadal, Srinagar, Kashmir, 190001, India. E-mail address: imtazwani@gmail.com

Case presentation

A 46 year old male evaluated for a right groin swelling was diagnosed with an inguinal hernia. There was no history of chronic cough, LUTS or any debilitating disease. Laboratory parameters were normal. Ultrasonography of the abdomen was normal, and that of the scrotum confirmed the inguinal hernia. The patient was planned for mesh hernioplasty. On exploration of the groin, no inferior epigastric artery was located at the medial margin of the deep inguinal ring. The inferior epigastric artery was pursuing an anomalous pathway, having a normal anatomical origin from the external iliac artery and piercing the rectus sheath. Two direct hernia sacs were present, one on either side of this vessel, each having individual cough expansibility and reducibility [S1 & S2, Fig. 1]. Direct Sac 2 was larger and had a wider neck than Sac 1. On exploring the cord, three indirect types of hernia were found on a single cord, and a few small preperitoneal fatty herniations were seen. There was one bubonocele and two funicular types of indirect hernia present [Fig. 2]. Sac A, a bubonocele, had no content; Sac B and Sac C, both funicular, had omentum as the content. All three indirect types had individual sacs with their own peritoneal openings. Sac C, the third sac with omentum as content, was mobilised from the cord and was abutting the posterior wall of the second sac, Sac B [Fig. 3]. Transfixation with ligation of each sac was done individually, and each was released back into the peritoneal cavity [Fig. 4]. The IEA was buttressed in the normal posterior wall of the canal to prevent entrapment of the vessel in the mesh. Lichtenstein tension-free repair with a single mesh was done for this double direct hernia. A follow-up period of 23 months was normal.

Discussion

An indirect inguinal hernia is usually congenital.
A patent processus vaginalis and increased cumulative mechanical exposure are risk factors for the occurrence of an indirect inguinal hernia [6]. Aberrant hernias have been suggested to occur due to a defective regulatory mechanism of hormones, peptides from the genitofemoral nerve, and insufficient release of calcitonin gene-related peptide, which have an effect on testicular descent [7]. The landmark for demarcation of a hernia into direct or indirect type is the IEA. A hernia occurring on the medial side of the IEA is a direct type, whereas one lateral to it is an indirect type. The IEA is usually located in the area between 4 and 8 cm from the midline [8]. The IEA normally originates from the external iliac artery about 5 mm above the inguinal ligament, traverses the inguinal canal subperitoneally under the transversalis fascia, passes back to the interfoveolar ligament along the medial edge of the deep inguinal ring coursing upwards, pierces the transversalis fascia and, passing in front of the linea semicircularis, ascends between the rectus abdominis and the posterior lamella of its sheath. A great variation with regard to the course of the IEA is observed [9,10]. Clarifying the site of the sac appearance decreases the chance of inferior epigastric vessel injury [3]. There is no documented case where the IEA was lying outside the transversalis fascia, superficially on the posterior wall of the inguinal canal. Placing an onlay mesh with this aberrant superficial course of the IEA carries the risk of the vessel becoming entrapped in the mesh with subsequent torrential hemorrhage [11]. Normally, this vessel is located at the medial margin of the deep ring, but location at the mid-inguinal point is a rarity. This aberrant superficial path traversed by the IEA produced two individual sacs of direct hernia, one on either side of it. Both hernia sacs had individual cough expansibility and reducibility with no cross fluctuation. Lloyd et al. reported two cases of a 'third kind of hernia' with the defect lying between the deep ring and the inferior epigastric vessels [12]. In contrast to the present case, they had two sacs in one case and one direct hernia (third type) in the second case. Their findings were demonstrated via a TAPP approach, whereas the present case had open surgery. This third kind of hernia existing with a direct hernia, reported by Lloyd et al. [12], was corrected to be a supravesical hernia coexisting with a direct hernia [13]. The present case had an anomalous course pursued by the IEA, which was lying superficially, with two individual direct sacs on either side of that vessel. The clinical significance in this scenario is that the deep ring can sometimes be widened on the medial side whenever required; otherwise this approach is contraindicated. Two sacs in a unilateral indirect hernia have to date been reported in one case only, and that was a pediatric case [14]. This case was unique: three indirect types of hernia were present on a unilateral side, all opening individually, with variability in the contents of each sac. Congenital anatomical variation during development may lead to this anomaly.

Conclusion

A double direct and triple indirect type of hernia in a unilateral inguinal hernia is unique. The presence of multiple hernia sacs in an inguinal hernia is a risk for recurrence if not detected. The inferior epigastric artery in the inguinal canal may traverse an anomalous course. Careful exploration of the groin is mandatory in the diagnosis of such unique inguinal hernias.

Conflicts of interest
None.

Sources of funding
None.

Ethical approval
The publication of this article is exempt from ethical approval in my institution.
Consent Written and signed consent to publish this case report was obtained from the patient. Author contribution IW contributed the study concept and design, data collection, data analysis and interpretation, and writing of the paper. Provenance and peer review Not commissioned, externally peer-reviewed. Appendix A. Supplementary data Supplementary material related to this article can be found, in the online version, at doi: https://doi.org/10.1016/j.ijscr.2019.05.035.
2019-06-16T13:13:00.661Z
2019-06-04T00:00:00.000
{ "year": 2019, "sha1": "4f2b17a663378ca1a54d17e86f9bb82c49bae5c0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ijscr.2019.05.035", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dfcb253aa1292cb55adbd20f0282a6d503d765cc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244647214
pes2o/s2orc
v3-fos-license
HPK Parenting Promotion Program in Kepulauan Seribu 2018-2020: Reviewed from The Theory of Hafied Cangara Communication Strategy This study aims to determine about 1000 HPK Parenting Promotion Program In Kepulauan Seribu 2018-2020: Reviewed from the Theory of Hafied Cangara Communication Strategy. This type of research is descriptive research using qualitative methods. Collecting data by conducting observations, interviews, and documentation. The communication strategy carried out by the Thousand Islands PPAPP Sub-Department refers to Hafied Cangara's theory, where in this case the communication strategy in the KIE promotion program for the Care of 1000 HPK is carried out through the first stage and then continues with planning, implementation, evaluation and reporting. The results showed that the communication strategy carried out in the 1000 HPK KIE Promotion Program was carried out through a 1000 HPK parenting training program directly face to face while practicing using the tools provided and online through webinars. In the process of implementing the program, referring to the 2018 National Priority Promotion Projection Technical Guidelines and KIE for the First 1000 Days of Life (HPK). According to Hafied Cangara's theory, the failure of the communication strategy of the 1000 HPK KIE Parenting Promotion Program for 20182020 is the absence of a research process first carried out Kepulauan Seribu Sub-Department PPAPP. 1000 HPK 2018-2020. Thus, to realize the 1000 HPK Care KIE program 2018-2020 to the fullest, a process is needed before the program is implemented so that it is right on target and the community is enthusiastic about participating in the 1000 HPK Care KIE Promotion program. Introduction Indonesia has succeeded in reducing the stunting problem. In 2013, stunting cases in Indonesia reached 37.2%. Then, the Basic Health Research (Riskesdas) stated that in 2018 this stunting case managed to decrease compared to the previous year, which was 30.81% (Kemkes.go.id, 2019). In 2019 it decreased again to 27. 67% (Kabar24.bisnis.com, 2019). Although Indonesia has succeeded in reducing the stunting problem, its achievements still need to be continuously improved because the World Health Organization (WHO) standard sets a 20% stunting case limit standard (CNN Indonesia, 2017). Stunting is also a serious public health problem in developing countries (Indriani & Retno, 2018) Stunting refers to the condition of a child's height that is shorter than his age, caused by a lack of nutritional intake for a long time in the first 1000 days of life (HPK). As adults, children are vulnerable to attacks from non-communicable diseases such as heart disease, stroke, diabetes, or kidney failure; hampering Indonesia's demographic bonus where the ratio of the non-working age population to the working age population decreases; threat of reducing intelligence level by 5-11 points. In addition to nutritional factors, stunting is caused by a lack of public knowledge, especially pregnant women, mothers of toddlers and posyandu cadres about stunting (Astuti, 2018) The data from the 2018 basic health research on the proportion of nutritional status in the 2013-2018 provinces explains that the province in Indonesia with the highest stunting problem is Nusa Tenggara Timur. Nusa Tenggara Timur Province has a high prevalence of stunting in children aged 0-23 months of 29.8%. 
According to the NTT Health Office, the percentage of stunting in children aged 0-23 months in Kupang, the capital of NTT province, is 25% (Ilma et al., 2019), while the lowest proportion of stunting cases is in DKI Jakarta Province. Although DKI Jakarta has the lowest proportion of stunting cases, there are still areas within DKI Jakarta with a high stunting rate, namely the Thousand Islands (Kepulauan Seribu). All areas in DKI Jakarta should have lower stunting cases; however, the Kepulauan Seribu still has a high rate of stunting even though DKI Jakarta is one of the provinces with the lowest stunting cases in Indonesia. This is what we want to explore: why does the Kepulauan Seribu still have a high rate of stunting cases? In addition, the Kepulauan Seribu is also one of the priority loci of stunting reduction set by Badan Perencanaan Pembangunan Nasional (BAPPENAS). Methods Judging from the type of data, the research approach used in this study is a qualitative approach. Qualitative research examines a particular research object that cannot be examined statistically or quantified, and seeks to understand it by describing it in the form of words and language, according to Basrowi and Kelvin (Suwandi, 2008). This study uses a descriptive design. The descriptive method is research that aims to describe as accurately as possible a particular object, symptom or group, and to answer questions about phenomena and events that are currently happening (Sugiyono, 2009). The data collection techniques used in this study were interviews and observation. Results and Discussion The KIE promotion program for the care of 1000 HPK started with the Presidential The first indicator in the communication strategy according to Hafied Cangara is research. The importance of this research is to see the problems in the field so that the program made is right on target. In addition, this research strategy is carried out to find the problems that will be faced, so as to produce good strategy formulation materials, to plan what needs to be provided for the 1000 HPK KIE promotion program, and to determine how to distribute it (Khoeroh et al., 2017). However, this research strategy was not carried out by the Thousand Islands PPAPP Sub-Department when formulating the 1000 HPK KIE promotion program. The implementation of this program is very beneficial for pregnant women and mothers of two-year-old toddlers (baduta) because, with this program, the community understands that the care of 1000 HPK is very important to prevent stunting, and the pattern of parenting for children changes for the better. From this research, the researcher sees that the material provided in this program is quite complete and can add to the knowledge of 1000 HPK parenting, not only about the care of 1000 HPK but also about the impact of HIV and AIDS on 1000 HPK.
The response from the community with this program is also very good regarding the delivery of messages that are easy to understand, the material is very useful, even the community provides input for the implementation of the program that can be carried out every month so that the community or those who need knowledge of caring for 1000 HPK get it and there is also input for targets not only to pregnant women and mothers of twoyear-old toddlers (baduta), but broaden the target to young newly married mothers and mothers who are preparing to have children. Changes after being given counseling or parenting training for 1000 HPK. At the evaluation stage based on the strategic management process, evaluation is a process of assessing the results of the performance of program implementers by looking at the implementation of the strategy and comparing it with the expected performance so that it can be seen whether the current strategy is going well or contains the number of targets that have been achieved and the realization of activities. This report is carried out to see whether the activities are running according to the plans that have been made or less than optimal in their implementation. In addition to conducting an evaluation, this report can also be taken into consideration by the Kepulauan Seribu Sub-Department PPAPP in subsequent activities. Currently the government is encouraged to develop and implement E-government based on Although previously also using online media, but not entirely done online. So, in 2020 Kepulauan Seribu Sub-Department PPAPP starts holding webinars through zoom meetings. In the webinar, many obstacles are encountered because the two-way communication webinar is not fully implemented. The spirit of mothers is more inclined to face-to-face training or counseling. The community mostly only listens to the material presented by the resource person regarding the care of 1000 HPK, cannot directly submit complaints or questions they want to ask. According to the results of the study, the obstacles faced when offline or face-to-face were regarding the presence of fewer BKB mothers than their students and the number of face-to-face implementations was more limited than the online implementation which could cover more targets in one meeting. With the training webinar not being able to directly practice the materials that have been given, it is difficult to see if the materials provided are also carried out by the community. In addition, if there are people who still do not understand technology, the implementer must provide a place to be able to view the webinar together. In the Thousand Islands, it is also undeniable that the signal is sometimes a bit difficult, thus hampering webinar activities. The webinar is held for more than 2 hours causing boredom so that people don't focus on receiving webinar material. Basically, every program or policy implementation has obstacles that must be faced in its implementation, both internal and external constraints. To add enthusiasm and reduce existing obstacles, participation and cooperation from the Family Mobilization and Resilience Sector (PK2), PKB in Kepulauan Seribu, Poktan, Dasawisma, PKK, PKKBB, and PKBRW cadres are needed as managers and implementers in the field with pregnant women or children under two. which is the target in the implementation of the program so that the implementation can be well coordinated and in accordance with the objectives. 
Conclusion Based on the results of the study, it was shown that the communication strategy of Kepulauan Seribu Sub-Department PPAPP in preventing stunting through the KIE promotion program for the care of 1000 HPK 2018-2020 seen by the five stages of the communication strategy according to Hafied Cangara, did not fulfill the five stages. The first stage that must be carried out by the Thousand Islands PPAPP Sub-Department is research. However, the Thousand Islands PPAPP Sub-Department did not conduct previous research before implementing the KIE promotion program for the care of 1000 HPK. This first stage affects the next stage of communication strategy. Kepulauan Seribu Sub-Department PPAPP continues to rely on technical guidelines for national priority promotion projects and KIE for the First 1000 Days of Life (HPK). This can be seen at the evaluation and reporting stages. Kepulauan Seribu Sub-Department PPAPP does not have a special application or platform for the KIE promotion program for the care of 1000 HPK, but is only obliged to submit reporting to the BKKBN and reporting through the BAPPENAS monitoring and evaluation application. So, the communication strategy carried out by the Kepulauan Seribu Sub-Department PPAPP can be considered less than optimal. Kepulauan Seribu Sub-Department PPAPP faces several obstacles in the implementation of the KIE promotion program for the care of 1000 HPK in the context of stunting prevention, namely the Kepulauan Seribu PPAPP Sub-Department needs to change the implementation of the program directly to online media due to the covid-19 pandemic, during the webinar implementation there is no communication in a two-way manner, which sees the spirit of the target mothers more inclined to direct implementation because they can practice the material provided, the community cannot directly submit complaints or questions regarding the care of 1000 HPK, it is difficult to see whether the training or materials provided through webinars carried out into their daily activities, and the most basic obstacle faced was the signal in Kepulauan Seribu which was rather difficult to obtain.
2021-11-26T16:33:08.546Z
2021-11-18T00:00:00.000
{ "year": 2021, "sha1": "ffd841642cf25b7825d9b3380e0b75b01679b26d", "oa_license": null, "oa_url": "https://journal.iapa.or.id/proceedings/article/download/513/288", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2b4b0a2bc21dcfa2cefe84b8351b4823c7d49583", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [] }
224823454
pes2o/s2orc
v3-fos-license
The Eye as a Non-Invasive Window to the Microcirculation in Liver Cirrhosis: A Prospective Pilot Study Microcirculatory dysfunction is associated with organ failure, poor response to vasoactive drugs and increased mortality in cirrhosis, but monitoring techniques are not established. We hypothesized that the chorioretinal structures of the eye could be visualized as a non-invasive proxy of the systemic microvasculature in cirrhosis and would correlate with renal dysfunction. Optical Coherence Tomography (OCT) was performed to image the retina in n = 55 cirrhosis patients being assessed for liver transplantation. OCT parameters were compared with established cohorts of age- and sex-matched healthy volunteers (HV) and patients with chronic kidney disease (CKD). Retinal thickness, macular volume and choroidal thickness were significantly reduced relative to HV and comparable to CKD patients (macular volume: HV vs. cirrhosis mean difference 0.44 mm3 (95% CI 0.26–0.61), p ≤ 0.0001). Reduced retinal thickness and macular volume correlated with renal dysfunction in cirrhosis (macular volume vs. MDRD-6 eGFR r = 0.40, p = 0.006). Retinal changes had resolved substantially 6 weeks following transplantation. There was an inverse association between choroidal thickness and circulating markers of endothelial dysfunction (endothelin-1 r = −0.49, p ≤ 0.001; von Willebrand factor r = −0.32, p ≤ 0.05). Retinal OCT may represent a non-invasive window to the microcirculation in cirrhosis and a dynamic measure of renal and endothelial dysfunction. Validation in different cirrhosis populations is now required. Introduction Decompensation and organ dysfunction in liver cirrhosis are characterised by systemic inflammation, regional microcirculatory alterations and profound systemic haemodynamic adaptations [1,2]. In patients with cirrhosis, splanchnic vasodilatation causes arterial 'steal' from the systemic circulation into the splanchnic bed [3], which decreases the effective blood volume and in turn triggers a variety of compensatory mechanisms. Marked changes occur in the renal circulation secondary to neurohormonal activation (renin-angiotensin-aldosterone system, sympathetic nervous system, vasopressin), a loss of renal autoregulation and an imbalance of intra-renal vasoconstrictors and vasodilators. Cardiac dysfunction (including cirrhotic or alcoholic cardiomyopathy) compounds circulatory derangements and kidney hypoperfusion. Accordingly, haemodynamic changes have been demonstrated in a range of extrahepatic vascular beds using modern vascular imaging techniques [4,5]. However, although the macrocirculation has been extensively characterised in cirrhosis, the microcirculation has been relatively understudied [6]. Emerging data suggest that, as in patients with severe sepsis [7], dysregulated systemic inflammation and microcirculatory alterations in different cirrhosis phenotypes may correlate with poor clinical outcomes [8]. Furthermore, despite normalisation of systemic haemodynamic variables in cirrhosis using fluids and vasoactive drugs, there is not necessarily a parallel improvement in microcirculatory perfusion and restoration of tissue oxygenation. This loss of haemodynamic coherence could explain the variability in response to terlipressin in patients with hepatorenal syndrome, illustrated by the drug's heterogeneous effect on renal perfusion indices [9]. 
Assessment of the microcirculation could therefore play a potentially critical role in understanding the complex pathophysiology in an individual patient, monitoring of treatment interventions, and prognostication across different clinical states of cirrhosis. Although there are no techniques to monitor the microcirculation in widespread clinical use, a number of modalities have recently been examined. In particular, novel handheld microscopes [10] have been used to visualise the sublingual microcirculation in critical illness (e.g., sepsis, high-risk surgery) [11] and also to study the effects of pharmacological therapies targeting the microcirculation [12]. The retinal vasculature is an established non-invasive proxy of systemic microvascular health. Optical coherence tomography (OCT) allows direct visualisation of chorioretinal microvascular structures. We recently used OCT to show that chorioretinal thinning in chronic kidney disease (CKD) is associated with lower eGFR and correlates with circulating markers of inflammation and endothelial function [13]. As renal (and other organ) dysfunction in decompensated cirrhosis is common, associated with a high mortality, and characterised by a systemic proinflammatory and pro-oxidant milieu, we hypothesised that OCT could be used to detect and monitor chorioretinal changes within the eye, providing a surrogate measure of the renal and extrahepatic microcirculations. Here we report an initial pilot study in a cohort of patients with liver cirrhosis undergoing assessment for liver transplantation and show significant chorioretinal alterations that correlated with renal function and markers of endothelial dysfunction. Furthermore, these OCT features were dynamic and resolved substantially following liver transplantation. Ethics This observational study was conducted according to the ethical principles of the Declaration of Helsinki 2013 and following approval from the North West-Haydock Research Ethics Committee (REC Reference: 17/NW/0692) and the National Health Service (NHS) Lothian Research and Development department (Reference: 2017/0326). All patients gave written informed consent to participate in the study. Participants Consecutive male and female adult patients with liver cirrhosis admitted to the Edinburgh Transplant Centre (Royal Infirmary of Edinburgh, Edinburgh, UK) over a 6-month period were invited to join this study. Inclusion criteria were: male or female subjects over 18 years of age; patients with cirrhosis being assessed for liver transplantation; able to give informed consent and able to understand and willing to comply with the requirements of the study. Exclusion criteria were: lack of capacity to give informed consent; patients with acute liver failure being assessed for liver transplantation. Permission was obtained to record the results of all investigations performed routinely as part of the NHS transplant assessment process. These data included: routine blood tests (full blood count, urea and electrolytes, liver function tests and coagulation); urinary sodium and creatinine clearance; anthropometric assessments. Estimated glomerular filtration rate (eGFR) was calculated using the Modification of Diet in Renal Disease-6 (MDRD-6) equation. The MDRD-6 equation has greater accuracy in patients with cirrhosis (compared to the traditional MDRD-4 equation) and the Organ Procurement and Transplantation Network consensus supports the use of MDRD-6 when assessing renal function in transplant assessment patients [14]. 
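Since the MDRD-6 estimate is central to how renal function was graded in this cohort, a minimal sketch of the 6-variable calculation is given below. The paper does not reproduce the equation itself, so the coefficients shown are those of the widely published MDRD study formula, and the function and variable names are illustrative assumptions rather than part of the study; a validated calculator should be used clinically.

```python
def egfr_mdrd6(scr_mg_dl, age_years, bun_mg_dl, albumin_g_dl, female, black):
    """Illustrative 6-variable MDRD eGFR (mL/min/1.73 m^2).

    Coefficients follow the widely cited MDRD study equation; they are
    reproduced here for illustration only and are not taken from this paper.
    """
    egfr = (170.0
            * scr_mg_dl ** -0.999      # serum creatinine, mg/dL
            * age_years ** -0.176
            * bun_mg_dl ** -0.170      # blood urea nitrogen, mg/dL
            * albumin_g_dl ** 0.318)   # serum albumin, g/dL
    if female:
        egfr *= 0.762
    if black:
        egfr *= 1.180
    return egfr

# Hypothetical example (not a study participant): 58-year-old male,
# creatinine 1.0 mg/dL, BUN 14 mg/dL, albumin 3.5 g/dL.
print(round(egfr_mdrd6(1.0, 58, 14, 3.5, female=False, black=False), 1))
```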
Results of additional tests including pulmonary function tests, cardio-pulmonary exercise testing, ECG and echocardiogram were recorded, but were not included in this analysis. Optical Coherence Tomography (OCT) Retinal assessment included retinal thickness, retinal nerve fiber layer (RNFL) thickness, macular volume and choroidal thickness as previously described [13], using the Heidelberg Spectralis OCT imaging platform that yields images with an axial or depth resolution of 3 µm/pixel and lateral resolution of 10 µm/pixel enabling identification of the retinal layers and choroid for quantification. The OCT imaging and analysis methodology is shown in Figure 1. Each procedure was performed under the same degree of lighting (i.e., a dimmed room so as to avoid the need for pupillary dilatation) and took approximately 5-10 min to complete. Where possible, both eyes were scanned; however, images obtained from the right eye were preferentially used for analysis. In order to minimize bias, all OCT image analysis was performed by an expert assessor (Kirstie Hetherington) who was blinded to clinical status. Imaging metrics (retinal thickness, RNFL thickness, macular volume, and choroidal thickness) in cirrhosis patients were compared with two pre-existing cohorts, of age- and sex-matched healthy volunteers (HV, n = 50) and patients with chronic kidney disease (CKD, n = 50), who had previously undergone OCT assessment on the same high-resolution Heidelberg SPECTRALIS® platform. The ETDRS map (shown in the left panel) was automatically measured and then all areas combined to give the macular volume.
The retinal layer is defined as the area between the internal limiting membrane (ILM) and the hypo-reflective line between the retinal pigment epithelium (RPE) and the choriocapillaries (CC) (depicted in the en face view of the macula shown in the right panel). The ETDRS map subdivides the macula, and retinal thickness was measured in eight zones (IS, inner-superior; IN, inner-nasal; II, inner-inferior; IT, inner-temporal; OS, outer-superior; ON, outer-nasal; OI, outer-inferior; OT, outer-temporal). All measurements were made by a trained technician who was blinded to all participant details. Sample Collection and Analysis Blood was collected for routine serum biochemistry tests and plasma biomarker analysis. Validated ELISA kits were used to measure circulating levels of von Willebrand factor (Human von Willebrand Factor ELISA kit, #ab108918; Abcam, Cambridge, UK) and Endothelin-1 (Endothelin-1 Quantikine' ELISA kit from, #DET100; R&D Systems, Abingdon, UK). A urine sample was collected for urinary protein to creatinine ratio (uPCR) and biomarker analysis. Follow-Up Participants who were listed and received a liver transplant during the timeframe of the study were invited for a follow-up study visit at the Royal Infirmary of Edinburgh Clinical Research Facility, approximately 6 weeks after their transplant date. At this visit all study assessments were repeated. Morbidity data were collected for all transplanted patients including the warm ischaemic time, graft function at 6 weeks, the development of AKI or need for renal replacement therapy at the time of transplantation, length of Intensive Care Unit stay, and overall hospital stay. Sample Size This was a pilot study and, as such, the sample size was pragmatic, based upon the anticipated recruitment rate and study duration. Approximately 4-5 patients per week are admitted for liver transplant assessment, therefore based on a refusal rate of 50%, we anticipated recruitment of 54 patients over a 6-month period. Statistical analysis Summary statistics (n, mean, standard deviation (SD), median, min, max) are presented for all recruited patients, and also for the subgroup who received a liver transplant during the period of the study to allow comparison. All data were assessed for normality, and log transformed if appropriate, before parametric tests were used. Two-tailed independent sample t-tests were used to compare continuous pre-transplant data according to AKI at transplantation, graft loss and survival ('yes' × 'no'). Chi squared tests were used to examine relationships between liver disease severity scores and categorical outcomes. One-way analysis of variance (ANOVA) was used to compare continuous post-transplant outcomes according to liver disease severity (≥3 categories, e.g., Child-Pugh Class A/B/C). Pearson's correlations were used to assess relationships between continuous pre-transplant and post-transplant data. A p-value < 0.05 was considered statistically significant. All statistics were calculated using IBM SPSS ® Statistics, version 24 (IBM, Armonk, NY, USA). Participant Disposition A total of 55 patients with cirrhosis were recruited. Of these, two participants were too unwell to undergo OCT scanning, one participant was unwilling to attend, and three participants were unable to comply with the examination process. Results of the remaining 49 participants were used for analysis; 29 (59%) were male, mean age 58 ± 9 years and mean eGFR 100 ± 24 mL/min/1.73 m 2 . 
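Referring back to the statistical analysis subsection above (Pearson correlations between OCT metrics and renal function, and paired pre/post-transplant comparisons), a minimal sketch of how such analyses could be reproduced outside SPSS is shown below. The arrays are hypothetical placeholders rather than study data, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder values, NOT data from the study.
macular_volume_mm3 = np.array([7.6, 7.9, 8.0, 7.4, 8.2, 7.8, 7.7, 8.1])
egfr_ml_min = np.array([62.0, 85.0, 95.0, 55.0, 110.0, 80.0, 74.0, 101.0])

# Pearson correlation between an OCT metric and (log-transformed) eGFR,
# mirroring the log transformation described for skewed variables.
r, p = stats.pearsonr(macular_volume_mm3, np.log(egfr_ml_min))
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Paired comparison of an OCT metric before and after transplantation.
pre_olt = np.array([7.9, 7.6, 8.0, 7.7, 7.8])
post_olt = np.array([8.1, 7.9, 8.2, 8.0, 8.0])
t, p_paired = stats.ttest_rel(post_olt, pre_olt)
print(f"paired t = {t:.2f}, p = {p_paired:.3f}")
```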
The mean Model for End-Stage Liver Disease (MELD) score was 14 (range 6-27) and the mean United Kingdom Model for End-Stage Liver Disease (UKELD) score was 53 (range 45-62). Seven (14%) participants had Child-Pugh (C-P) class A disease, 22 (45%) C-P class B, and 20 (41%) C-P class C. OCT imaging metrics in cirrhosis patients were compared with pre-existing cohorts of age- and sex-matched HV and CKD patients. Baseline patient characteristics are summarised in Table 1. Retinal thickness was significantly reduced in participants with cirrhosis relative to HV (Supplementary Table S1). These abnormalities were comparable to, or more severe than, those shown in CKD patients, despite marked disparity in eGFR (eGFR cirrhosis, mean ± SD: 100 ± 24 mL/min/1.73 m2; CKD: 37 ± 23 mL/min/1.73 m2) (Figure 2A). In keeping with a thinner retina, participants with cirrhosis had a significant reduction in macular volume (HV vs. cirrhosis mean difference 0.44 mm3, 95% CI 0.26-0.61, p < 0.0001) (Figure 2B; Supplementary Table S2). Moreover, in all three macular locations significant choroidal thinning was recorded (Figure 2C; Supplementary Table S3). This was most marked in locations II and III, where the choroid was found to be ~30% thinner in cirrhosis relative to HV. No significant difference was found in either retinal thickness, macular volume or choroidal thickness between patients when grouped by aetiology of liver disease (alcohol-related liver disease, chronic viral hepatitis, non-alcoholic fatty liver disease, primary biliary cholangitis, primary sclerosing cholangitis, or cryptogenic cirrhosis). Correlation of Chorioretinal Parameters with Renal Function and Liver Disease Severity Retinal thickness and macular volume were shown to correlate significantly with (log-transformed) creatinine and eGFR. As renal function declined, so too did retinal thickness and macular volume (Figure 3). No significant association was found between MELD score and either retinal thickness or macular volume; however, the data suggested a non-significant trend towards lower retinal thickness and macular volume with increasing severity of liver disease, as defined by cirrhotic prognostic subgroup and variceal severity (Figures S2 and S3). Choroidal thickness did not correlate with renal function, MELD, or severity of liver disease. Alterations in Chorioretinal Parameters with Liver Transplantation A total of 14 participants underwent liver transplantation over the duration of the study and were invited back for repeat OCT. Three participants were lost to follow up (one participant died and two did not attend), and two were unable to comply with the examination process; therefore, comparison of chorioretinal parameters before and after transplantation was possible in nine participants.
Retinal thickness (Figure 4; Supplementary Table S4) and macular volume measurements (Supplementary Table S5) had increased significantly 6 weeks after liver transplant (retinal thickness: F = 9.5, p = 0.003 (two-way mixed design ANOVA); macular volume pre-OLT (mean ± SD) 7.9 ± 0.3 mm3 vs. post-OLT 8.1 ± 0.3 mm3, p = 0.0007). No significant change was seen in choroidal thickness when re-measured after OLT. The data also suggested that choroidal thinning in location I, when measured at liver transplant assessment, may predict post-transplant acute kidney injury (AKI 170 ± 9 µm vs. no-AKI 231 ± 11 µm, p < 0.05) (Figure S1), although patient numbers were small. A similar pattern was seen at locations II and III; however, these data did not reach statistical significance. Chorioretinal Parameters and Markers of Inflammation and Endothelial Dysfunction Based on these data, further work was performed to explore the mechanistic roles of inflammation and endothelial dysfunction in mediating chorioretinal changes. Plasma von Willebrand factor (vWF; an endothelial activation marker) [15] and endothelin-1 (ET-1; an endogenous vasoconstrictor strongly linked with endothelial dysfunction) [16] were measured. Both vWF and ET-1 were markedly elevated before OLT, correlating significantly with severity of liver disease (MELD and variceal staging; data not shown) and decreased substantially when rechecked 6 weeks after transplantation (Table 2).
Moreover, there was a statistically significant, negative association between both plasma vWF and ET-1 level and choroidal thickness (Figure 5). No significant association was found between vWF or ET-1 and measures of retinal thickness and macular volume. Table 2. Serum levels of endothelin-1 (ET-1) and von Willebrand factor (vWF) pre and post orthotopic liver transplant (OLT). Discussion Recent advances in multimodal retinal imaging devices enable non-invasive visualisation of the chorioretinal microvascular structures at high resolution. Examination of the microvasculature in this way has been used extensively in the research of both retinal and neurological disorders [17,18]. Moreover, retinal microvascular changes have been linked to increased cardiovascular risk [19,20], including the incidence of stroke [21] and coronary heart disease [22], suggesting that these abnormalities may reflect systemic microcirculatory dysfunction, and could represent an early, non-invasive technique to detect subclinical vascular pathology [22]. Cirrhosis is associated with widespread microcirculatory dysfunction and haemodynamic abnormalities.
Correction of macrocirculatory derangement (fluid resuscitation and vasoactive drugs) does not always lead to microcirculatory improvement, and haemodynamic coherence is lost [23]. Such microcirculatory dysfunction is independently associated with adverse outcomes, even after normalisation of systemic haemodynamic parameters [24,25]. We have shown, to our knowledge for the first time, significant chorioretinal abnormalities in patients with cirrhosis of diverse aetiology, attending for liver transplant assessment. Compared to an age-and sex-matched cohort of healthy volunteers, participants with cirrhosis exhibited significant retinal thinning and reduced macular volume, with changes comparable to or more severe than those seen in CKD. Moreover, as in CKD, retinal thickness and macular volume were found to correlate significantly with eGFR. It is widely recognised that serum creatinine based estimating equations overestimate GFR by >20% in patients with cirrhosis [26]. It is possible that OCT scanning may represent a more effective indicator of renal risk (both acute kidney injury at transplantation and progressive renal dysfunction thereafter) when compared to serum creatinine or eGFR. Further work is required to understand the causality of these chorioretinal abnormalities and ascertain their ability to predict risk. Intriguingly, these chorioretinal abnormalities were dynamic, and reversed substantially following liver transplantation. Furthermore, the choroid (a dense microvascular network receiving >80% retinal blood flow) was~30% thinner in cirrhosis compared with HV, representing significant vascular rarefaction [27]. Importantly, choroidal thinning was positively associated with markers of endothelial dysfunction (ET-1) and systemic inflammation (vWF). This is consistent with the theory that choroidal thinning may reflect systemic microvascular dysfunction. In a similar fashion, video microscopy (VM) has been used to facilitate in vivo visualisation of the sublingual microcirculation. Using this technique, Sakr et al. showed an association between the degree of microcirculatory dysfunction and progression to multiorgan failure and death in patients with septic shock [7]. Using the same technology, Sheikh et al. demonstrated a significant reduction in sublingual microvascular blood flow in patients with decompensated cirrhosis, compared to those with compensated disease [28]. Moreover, small but significant alterations in the sublingual microcirculation were shown in patients with cirrhosis compared to HV matched for age, sex, and cardiovascular risk factors [29]. However, a recent study using VM in combination with Near Infrared Spectroscopy did not show any association between peripheral microcirculatory parameters and the severity of liver disease [30]. A key observation in this pilot study was that chorioretinal abnormalities in cirrhosis patients resolved substantially following liver transplantation. Further work is required to validate our observations and to elucidate the cause(s) of these OCT changes in cirrhosis, such as the potential role of increased sympathetic tone. Indeed, while the choroidal circulation has autonomic innervation, the retinal circulation does not. Thus, the thinning of the outer retina and choroid would be consistent with increased sympathetic tone affecting the choroidal vasculature. 
We did not investigate measures of sympathetic activity (e.g., serum norepinephrine) in the current pilot study, but these would be an interesting area for future research in different cirrhosis settings, such as acute decompensation of cirrhosis. A limitation of this study was the small sub-group (n = 9) of patients transplanted within the study period. A more prolonged period of follow-up would increase the number of participants with chorioretinal data before and after OLT, improving the statistical power. Future studies could also use data linkage to explore whether these OCT metrics are predictive of renal, liver, and cardiovascular outcomes in cirrhosis populations. It is conceivable that chorioretinal microvascular changes may also represent a dynamic and accessible non-invasive response marker for guiding the use of vasoactive pharmacological agents in cirrhosis such as non-selective β-blockers for variceal prophylaxis or vasoconstrictors for hepatorenal syndrome. The recent development of portable OCT machines will permit evaluation in different clinical settings, including in patients who are too unwell to transfer to a research facility. Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0383/9/10/3332/s1: Table S1. Two-way ANOVA results of retinal thickness in healthy volunteers (HV), chronic kidney disease (CKD) and cirrhosis patients; Table S2. Ordinary one-way ANOVA results of macular volume in healthy volunteers (HV), chronic kidney disease (CKD) and cirrhosis patients; Table S3. Two-way ANOVA results of choroidal thickness in healthy volunteers, chronic kidney disease and cirrhosis patients; Table S4. Results of retinal thickness before and after liver transplantation; Table S5. Paired t-test results of macular volume before and after liver transplantation; Figure S1. OCT variables and the development of acute kidney injury at liver transplant; Figure S2. Retinal thickness in different cirrhosis prognostic subgroups and variceal grades. Figure S3. Macular volume in different cirrhosis prognostic groups and variceal grades. . The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
2020-10-22T18:55:02.351Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "58a6c4eac9ac82a34bd87ed4c4b6220ed834efc6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/9/10/3332/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "972d5d4b830a879977bab5a6730f79b35e91eb21", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265024217
pes2o/s2orc
v3-fos-license
Evaluation of Cardiac Substructures Dose Sparing in Single and Dual Isocenter RapidArc™ Radiotherapy Planning for Synchronous Bilateral Breast Cancer Purpose This study compares the dosimetry and dose sparing of cardiac substructures in single isocenter and dual isocenter RapidArc™ (Varian Medical Systems, Palo Alto, California, United States) radiotherapy planning for synchronous bilateral breast cancer. Methodology Six synchronous bilateral breast cancer (SBBC) patients received adjuvant radiation with the prescribed dose of 40.05 Gy in 15 fractions to the planning target volume (PTV) without local lymph nodal regions. PTVs and organs at risk (OARs), including both lungs, esophagus, spinal cord, heart, and left anterior descending coronary artery (LAD), both atria and ventricles were contoured. Single isocentric RapidArc (SIRA) and dual isocentric RapidArc (DIRA) plans were made for each patient and dosimetric differences between these two techniques were evaluated. Results There was no statistically significant difference in conformity index (CI) values between SIRA and DIRA plans, with 0.9681±0.01 and 0.9721±0.01 (p=0.505), respectively. SIRA planning showed superior homogeneity with homogeneity Index (HI) values of 0.0999±0.01 compared to DIRA planning with HI values of 0.1640±0.12 (p=0.230). The mean LAD dose of SIRA was valued higher than that of DIRA planning. Lower mean doses were obtained for both lungs in SIRA plans compared to DIRA plans. Meanwhile, doses to the right atrium, left atrium, left ventricle, right ventricle, and esophagus showed no statistical significance between these two techniques, except in the spinal cord. Conclusion Both SIRA and DIRA plans have satisfactory outcomes in sparing OARs. Meanwhile, SIRA techniques have less setup time and overall machine time. Introduction Breast cancer is now one of the most commonly diagnosed cancers and the fifth leading cause of cancer mortality.According to Global Cancer Observatory (GLOBOCAN) 2020, female breast cancer accounts for 2.3 million (6.9%) cancer deaths [1].Bilateral breast cancer (BBC) is becoming more common due to rising breast cancer incidence rates, better treatment options, and longer life expectancies.About 2-11% of all breast cancers are bilateral.For patients with unilateral breast cancer (UBC), the cumulative incidence rate of developing contralateral breast cancer at 10 years is approximately 3.4%.For women with a BRCA mutation, it is 13-40% [2].BBC can be classified as synchronous BCC (SBBC) or metachronous BCC (MBBC), depending on how long it took for the first and second tumour diagnoses to occur.While MBBC demonstrated nonsuperior survival compared to UBC, BBC and SBBC demonstrated a worse prognosis than UBC.SBBC is described as two malignant tumours in each breast within six months [3]. There are no specific recommendations for managing BBCs [4].There are numerous issues with radiation therapy in BBCs, including increased dose at the chest wall's core, increased total lung dose, and increased heart dose [5]. 
The three-dimensional (3D) conformal radiotherapy technique (CRT) approach with two tangential fields is frequently utilised in the treatment planning of unilateral breast cancer.While in SBBC, to minimise the overlapping region and maintain target coverage, a traditional tangential field with dual isocentres is laborious [4].Inhomogeneity of the target coverage and poor cosmetic effects can be observed in 3DCRT.Compared to 3DCRT, intensity-modulated radiation therapy (IMRT)/volumetric modulated arc therapy (VMAT)/RapidArc™ (Varian Medical Systems, Inc., Palo Alto, California, United States) (RA) can improve the target dose coverage and achieve acceptable cosmetic effects of cardiopulmonary sparing.Using modern radiation therapy techniques, such as IMRT/ VMAT/RA, to treat both breasts simultaneously in SBBC patients is feasible, tolerable, and safe [6].Using synchronous bilateral hypofractionated radiation (SBHRT) in SBBC patients with dual versus monocentric VMAT technique is rare, with few papers demonstrating its usefulness. Cardiovascular disease following radiation therapy has surpassed breast cancer as the major cause of nonbreast cancer death [5].As a result, radiotherapy-induced cardiotoxicity is a significant issue that requires extensive research.Darbey et al. discovered a linear association between mean heart dose (MHD) and the incidence of ischemic heart disease, which rose by 7.4% for each Gy of MHD [7].As a result, lowering the MHD is crucial for lowering long-term cardiotoxicity.Most studies have discovered that radiotherapyinduced cardiotoxicity is also closely related to the dose of key heart substructures, such as the left anterior descending artery (LAD) and the left ventricle because studies have shown that high-grade coronary stenosis in the LAD is increased in women receiving radiation for the breasts. This observational retrospective study aimed to evaluate the clinicopathologic and dosimetric analysis of a single isocentric versus dual isocentric RA technique in SBBC patients treated with hypofractionated radiotherapy with heart and substructure sparing in a tertiary cancer centre in North East India. Patient selection Six patients diagnosed with SBBC who underwent treatment in BBCI with external radiotherapy between September 2020 and January 2023 were retrospectively studied.Table 1 shows the demographic and tumour characteristics of the six patients included in the study.The median age of the patients was 57 years. Simulation and contouring All patients were immobilised in the breast board in a supine position with a 15˚ inclination angle.A planning computed tomography (CT) scan was done in a Philips Brilliance Big Bore CT simulator (Koninklijke Philips N.V., Amsterdam, Netherlands) without intravenous contrast.The acquisition was done with a slice thickness of 3 mm.The scan was imported into the Eclipse™ treatment planning system version 15.6 (Varian Medical Systems). 
The clinical target volume (CTV) included the bilateral chest wall/bilateral breasts and post-modified radical mastectomy/breast conservative surgery.Planning target volume (PTV) breast/chest wall was generated by 5 mm isotropic expansion of the CTV.The PTV thus obtained was cropped by 3 mm to exclude the regions extending outside the body.The organs at risk (OARs) included the heart and its chambers, left anterior descending coronary artery (LAD), bilateral lungs, esophagus, and spinal cord, delineated following the Radiation Therapy Oncology Group (RTOG) recommendations.The average PTV was 1169±615 cc, ranging from 564.3 cc to 1939.3 cc. RA planning In this study, the same physicist created all the plans using the Varian Eclipse treatment planning system with 6 MV photon beams, with the prescribed radiation dose of 40.05 Gy in 15 fractions (for all cases, 95% of PTV should receive 95% of the prescribed dose).For each patient, two RA plans were created, one using the single isocentre (Single Isocentric RA (SIRA) technique) and the other using two isocentres (Dual Isocentric RA (DIRA) approach).In the SIRA technique, the isocentre was placed in the center of the body midway between the two breasts/chest walls.In the DIRA technique, one isocentre was placed close to the left-sided PTV, and the other isocentre was placed near the right-sided PTV.In the SIRA technique, four arcs were utilised for treatment planning.Out of the four arcs, in two arcs, the avoidance sector was placed across the lung region on both sides.In the DIRA technique, two arcs were utilised without any sectoral avoidance.The arc start and stop angles were selected as per the posterior extension of the target and it was not identical in all six patients.The typical value of the arc start angle was 150-210° in clockwise rotation and 210-150° in anti-clockwise rotation.However, for the same patient, the arc start angle for the SIRA plan was identical to the arc start angle for the DIRA plan.The plans were optimized using the photon optimizer (PO) algorithm version 15.6 (Varian Medical Systems).In both plans, the optimization objectives were kept the same.An anisotropic analytical algorithm (AAA) was used for the final dose calculation with a grid size of 2.5mm in all plans. Plan evaluation and statistical analysis All the plans were evaluated for the homogeneity index (HI) and conformity index (CI) of the PTV and the dosimetric parameters of OARs.CI [8] and HI [9] were calculated using the formulae below. OAR dose constraints of plans were based on department protocol obtained from the Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC) study [10]. Quantitative Results Comparison of PTV and OAR doses using dosimetric parameters, including CI and HI of SIRA and DIRA plans of all patients, are presented in Table 2. Figure 1a illustrates a SIRA plan, and Figure 1b depicts a DIRA plan.The green dose colour wash represents 95% of the prescribed dose to the PTV. 
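The CI and HI formulae referred to in the plan-evaluation paragraph above did not survive extraction. Definitions that are in common use and are numerically consistent with the values reported below (CI close to the fractional V95% of the PTV, HI of the order of 0.1) are CI = V95%/VPTV and HI = (D2% − D98%)/D50%; the sketch below shows how these could be computed from a cumulative PTV DVH. It is an illustration under those assumed definitions, not a reproduction of references [8] and [9].

```python
import numpy as np

def conformity_and_homogeneity(dose_gy, cum_volume_cc, prescription_gy):
    """Illustrative CI and HI from a cumulative PTV dose-volume histogram.

    Assumed definitions (not necessarily those of refs [8] and [9]):
      CI = fraction of the PTV receiving >= 95% of the prescription dose
      HI = (D2% - D98%) / D50%
    dose_gy must be ascending; cum_volume_cc is the PTV volume receiving
    at least the corresponding dose (so it decreases as dose increases).
    """
    total_volume = cum_volume_cc[0]  # volume receiving >= the lowest dose bin
    v95 = np.interp(0.95 * prescription_gy, dose_gy, cum_volume_cc)
    ci = v95 / total_volume

    def dose_at_volume_pct(pct):
        # Dx% = minimum dose received by the "hottest" x% of the PTV volume.
        target = total_volume * pct / 100.0
        return np.interp(target, cum_volume_cc[::-1], dose_gy[::-1])

    hi = (dose_at_volume_pct(2) - dose_at_volume_pct(98)) / dose_at_volume_pct(50)
    return ci, hi

# Toy cumulative DVH around a 40.05 Gy prescription (hypothetical numbers).
dose = np.linspace(0.0, 45.0, 200)
volume = 1000.0 / (1.0 + np.exp((dose - 41.0) / 0.8))
print(conformity_and_homogeneity(dose, volume, 40.05))
```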
Figure 2 There was no statistically significant difference in CI values between SIRA and DIRA plans.The CI of the SIRA plan was 0.9681±0.01,while that of the DIRA plan was 0.9721±0.01 with a p-value of 0.505.However, the SIRA planning produced a better homogeneity for bilateral whole breasts/bilateral chest wall irradiation than the DIRA planning.SIRA planning showed superior homogeneity with HI values of 0.0999±0.01compared to DIRA planning with HI values of 0.1640±0.12(p=0.230).The V95% of SIRA and DIRA plans were not significantly different, with mean values of 96.81±0.6% and 97.21±1.34%,respectively.The volume receiving 107% of the prescribed dose was less in both groups. Discussion SBBC is a rare condition.There are no standard guidelines for its treatment till now, and due to the increased need for breast-preserving treatment, bilateral irradiation is required.The enormous and complex target volume and the necessity to minimise the dose to the heart and lungs make synchronous bilateral breast/chest irradiation difficult.BBCs have a large C-shaped target volume that can vary substantially in shape and volume.Furthermore, the target is closer to the skin, and OARs with large volumes are irradiated. The 3DCRT-based radiation therapy strategy for BBCs has significant shortcomings, including insufficient target coverage and inhomogeneous dose distributions.Yusoff and colleagues compared 3DCRT and IMRT treatment strategies for BBC patients.They found that whereas both treatment strategies provided equivalent PTV coverage, IMRT outperformed OAR dose distribution to the lungs and heart [11].Modern radiation therapy advancements like IMRT, VMAT, and helical tomotherapy enable us to treat SBBC patients with improved target dose uniformity and OAR sparing.Several studies have discussed which radiation technique is the best for SBBC irradiation according to the dosimetric characteristics [12]. In the current investigation, Arc treatment was employed.According to the QUANTEC study, the incidence of radiation pneumonitis is less than 20% if V20Gy<30% and V30Gy< 15% with conventional fractionation [12].In this study, both the right and left lungs achieved the dose constraints, with the mean dose received by the right lung in SIRA and DIRA plans being 11.22±2.19Gy and 12.04±2.96Gy, respectively.The mean dose received by the left lung is 11.24±1.96Gy and 12.12±2.06Gy, respectively.The V5Gy, V10Gy and Dmean of the heart and its chambers and LAD doses were also compared between the two plans.The mean LAD dose of SIRA was higher than that of DIRA planning. In an earlier study on SBBC, Banaei et al., who compared the mono isocentric (MIT) and dual isocentric techniques (DIT) in chest wall radiation therapy of mastectomy patients, showed that there was no significant difference between the two radiotherapy planning techniques regarding dose distribution in the OARs and the 95% of the prescribed dose coverage of the target tissue.However, the maximum dose delivered by 107% of the prescribed dose coverage was higher in DIT.Therefore, they recommended MIT instead of DIT for better conformal radiotherapy [12]. The number of arcs in RA planning is acknowledged to influence the results of treatment plans significantly. According to a 2009 study by Nicolini et al. [13], two half-arc RA plans provide better dose distribution than one or two full-arc therapy plans. Bilateral lungs are invariably exposed to more radiation in arc technique when both breasts are treated simultaneously.Subramanian et al. 
suggested the hybrid VMAT technique to address this issue.However, this approach is only used for SBBC and does not include regional lymph node irradiation [14]. Motion control and breath control were not used in this investigation.Breath control for 3DCRT is now available in our institute, but not for RA.Previous studies have established the mid-ventilation phase as a suitable substitute for breath control because it is the phase during which targets can be "seen" by static beams for the most extended duration when proper margins are defined [15]. When planning RT for SBBC, Kim et al. noted that it was challenging to determine the precise dose from two different isocentres in some treatment planning systems [5].Apart from the dosimetric aspects, the SIRA technique requires less setup time as it involves setting up the patient at the isocentre, setup verification using imaging, implementing setup correction (if any) and treatment.For the DIRA plans, these steps must be performed sequentially for each isocentre.Hence, the overall patient-in to patient-out time will be more in the DIRA approach. The table below summarises studies on SBBC with regard to lung and heart doses (Table 3).The study's limitations include the small number of patients, which is due to the rarity of SBBC.The clinical follow-up of the cases, as well as the daily setup variations, have not been documented. Conclusions This study reports the cardiac dose sparing (heart and its chambers and left anterior descending artery) and compares the SIRA technique with the DIRA technique.The SIRA planning produced a better homogeneity for bilateral whole breast/bilateral chest wall irradiation than the DIRA planning.In SIRA planning, the setup time and the overall machine time are less.Advanced long-term follow-up studies with a large number of patients are required to determine the clinical efficacy of the RA programmes in terms of oncologic outcomes and treatment toxicities. FIGURE 2 : FIGURE 2: The dose volume histogram showing both the target volumes and organ at risks of SIRA plan (triangle) and DIRA plan (square) techniques DIRA: dual Isocentric RapidArc; SIRA: single isocentric RapidArc TABLE 1 : Patient characteristics analysis of plans was done by dose volume histogram (DVH) analysis.IBM SPSS Statistics for Windows, Version 25.0 (Released 2017; IBM Corp., Armonk, New York, United States) was used to conduct the statistical research, with statistical significance defined as a p-value of 0.05. Parameters Single Isocentre Rapid Arc (SIRA) Dual Isocentre Rapid Arc (DIRA) P-value depicts the DVH of target volumes and OARs of both SIRA and DIRA. TABLE 2 : Comparison of PTV and OAR doses of SIRA and DIRA plans using dosimetric parameters structure (in percentage) receiving at least 30Gy; V107: Volume of structure in cc receiving at least 107% of the prescribed dose; CI: conformity index; HI: homogeneity index; OAR: organs at risk; DIRA: dual Isocentric RapidArc; SIRA: single isocentric RapidArc TABLE 3 : List of previous studies on synchronous bilateral breast cancer demonstrating lung and heart doses DIRA: dual Isocentric RapidArc; SIRA: single isocentric RapidArc
2023-11-06T16:10:06.842Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "b2eac0a23d5966618237f37dfe74a685d360f5f1", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/189253/20231104-32457-1xinopk.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c8e7903b4f07c689f00a3b608500b898f4b431a", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [] }
260450483
pes2o/s2orc
v3-fos-license
Suspected clinical toxoplasmosis in a 12-week-old puppy in Singapore Background Toxoplasma gondii is traditionally known as a parasite of felids, with possible infection in intermediate hosts such as dogs and humans, and thus a disease of public health significance. Published data on the prevalence of toxoplasmosis in dogs and cats in Singapore is scanty, and this paper documents a suspect clinical case of toxoplasmosis in a free-roaming puppy trapped from an offshore island of Singapore. Case presentation A 12-week-old puppy presented with hindlimb weakness and sarcopenia, with rapidly progressing ascending paralysis and respiratory distress, one week after trapping. Toxoplasmosis was suspected after indirect fluorescence antibody testing (IFAT) revealed anti-T. gondii antibodies. The puppy responded quickly to clindamycin treatment and was discharged from hospital after 10 days. Conclusion While rare and undocumented, veterinary clinicians in Singapore are advised to also include toxoplasmosis infection as a differential diagnosis in dogs presenting with similar clinical signs. This is especially so for dogs which have access to the outdoors. Background Toxoplasma gondii (which causes toxoplasmosis) is traditionally known as a parasite of felids, which are recognized as the only known definitive hosts.Humans and a wide range of animals, including dogs, can also become infected as intermediate hosts.There are numerous records on the prevalence of toxoplasmosis in dogs worldwide [1,2].A recent review found that the seroprevalence in dogs varied greatly between regions [3].In this review, risk factors such as age, sex, and housing were reported as positive associations [3].Primary infection with Toxoplasma in dogs is thought to be rare, and most reported cases are in dogs with co-infections or known immunosuppression [3]. At the time of this report, there is no known published evidence of toxoplasmosis in dogs in Singapore, to the knowledge of the authors.However, due to the presence of significant numbers of free-roaming cats in the community, the risk of exposure of T. gondii oocysts in cat faeces to dogs is not insignificant.Rats have also been suggested as a possible reservoir of infection and transmission to dogs [9,10].Transplacental infection of toxoplasmosis in dogs has been reported to have occurred naturally in one case in Australia [11], and it has also been documented in experimentally inoculated animals [12]. Case presentation Two free-roaming dogs (FRD), one male (Dog 1) and one female (Dog 2), likely to be littermates and estimated to be about 12-weeks of age, were trapped on an island off Singapore and sent to the Animal Management Centre (AMC) 1 .At triage, both puppies were observed to be quiet and alert, and no abnormalities were detected on physical examination.Both puppies received the first dose of core canine vaccine (Boehringer Ingelheim Recombitek© C6/CV), a topical ectoparasiticide containing fipronil and (S)-methoprene (Frontline© Plus), and oral treatment containing pyrantel, oxantel and praziquantel (Ilium© Pyraquantal) for intestinal parasites.As part of an ongoing national biosurveillance programme for FRDs, plain and EDTA blood samples were obtained from both dogs to screen for Leptospira sp.(PCR), Leishmania sp.(SNAP Leishmania; IDEXX), Dirofilaria immitis, Borrelia burgdorferi, Anaplasma sp. and Ehrlichia sp.(SNAP 4Dx Plus; IDEXX).None of the above pathogens were detected in the two dogs.Blood parasites (e.g., Babesia sp.) 
were also not observed on the peripheral blood films of the two dogs.Both dogs were fed a commercial diet consisting of mixed dry kibble and canned wet food, with no access to uncooked animal product. One week after admission into AMC, the puppies developed mild diarrhoea (Purina© faecal score 5-6).The clinical signs resolved within 3 days following treatment with oral crospovidone (100 mg/kg, BID).The following week, a small non-pruritic focal area of alopecia was noticed along the dorsum of Dog 1. Woods lamp examination of the two puppies was negative and dermatophyte culture (KRUUSE© Dermatophyte test) of hair plucking from around the alopecic area of Dog 1 was performed.Both puppies were given oral afoxolaner (Nexgard©) to empirically treat for potential demodicosis and daily application of a quaternary-ammonium compound based antiseptic ointment on the alopecic area (F10© Germicidal Barrier Ointment) was initiated.The two pups were shifted to a new kennel for isolation on a precautionary basis.The dermatophyte culture was negative after 10 days, and both dogs were given the second dose of core canine vaccine (about three weeks after the first dose).Both dogs were adopted into their new homes a few days later (3 weeks after intake). One week after adoption, the owner reported that the male puppy (Dog 1) was inappetent, dull and constipated.It also appeared to have progressive paresis in the hind legs (Fig. 1), and an occurrence of falling off a low chair, after onset of the paresis.The female puppy (Dog 2) was clinically healthy throughout.Dog 1 was examined at a veterinary clinic where slow general proprioception, bilateral sarcopenia in the hindlimbs, and mild lameness in the left hindlimb were observed by the attending veterinarian.Spine and hip radiographs were normal.A complete blood count showed mild non-regenerative anaemia (haematocrit 0.286 L/L; reference interval (RI) 0.373-0.617L/L) that was microcytic (MCV 59.0 fL; RI 61.6-73.5fL)and non-regenerative (reticulocytes 75.2 K/ µL; RI 10-110 K/µL), mild monocytosis (1.62 × 10 9 /L; RI 0.16-1.12× 10 9 /L) and biochemistry results showed a mild elevation of globulins 4.0 g/dL (RI 2.3-3.8g/dL).Screening for vector-borne diseases (SNAP 4Dx Plus; IDEXX) was repeated by the attending veterinarian, revealing that the dog was seropositive for B. burgdorferi.Subsequent PCR performed by the Centre for Animal and Veterinary Sciences (CAVS) did not detect B. burgdorferi DNA in the blood.The dog was empirically started on oral doxycycline (5 mg/kg BID), tramadol (2.5 mg/kg BID) and iron supplements.Differential diagnoses include spinal trauma due to the fall or a congenital orthopedic condition. As the neurological signs continued to deteriorate, Dog 1 was referred to an emergency veterinary hospital after one day and was noted to be tachycardic (170 bpm), tachypnoeic with an increased effort and short shallow breaths, and pyrexic (39.5 °C).Neurological examination indicated moderate tetraparesis with the right limbs affected worse than the left, inability to support head or weight on limbs, and reduced patella and withdrawal reflexes.Cranial nerve assessment was normal with no evidence of spinal pain.Thoracic radiographs indicated a mild generalised bronchial pattern and a small radiodense circular structure in the cranial abdomen, suspected within the stomach. 
The following morning, the dog was anaesthetised with methadone (0.2 mg/kg IV) as pre-medication and alfaxalone (1 mg/kg IV) as the induction agent, intubated, and maintained on oxygen and isoflurane. A cerebellomedullary cistern puncture was attempted, but no cerebrospinal fluid could be collected. The owner later reported that the dog had access to cycad trees and had been seen chewing on seeds within the last week. A gastroscopy was performed to investigate the stomach contents, which consisted only of food material (kibbles, pumpkin) and some sand. Repeat abdominal radiography confirmed the absence of the previously noted circular structure.

Further blood screening for acetylcholine receptor antibody was normal (0.13 nmol/L; RI < 0.6 nmol/L indicates a normal serum titre), and Neospora caninum antibodies tested negative by IFAT at an external laboratory (IDEXX). Serological screening for Toxoplasma gondii via IFAT was performed at CAVS. Two-fold serial dilutions of serum, starting at 1:16, were prepared using serum diluting buffer. 20 µl of each serum dilution was added to a well on a substrate slide coated with T. gondii tachyzoites of the RH strain (VMRD Inc, USA) and incubated for 30 min at 37°C in a humid chamber. The slide was then rinsed with Fluorescent Antibody (FA) rinse buffer, washed twice on an orbital shaker for 5 min, and blotted dry. After this, 20 µl of anti-canine Immunoglobulin G (IgG) or Immunoglobulin M (IgM) antibody was applied to each well and subjected to the same incubation and wash steps. The slide was then examined using a fluorescence microscope (Olympus BX41, Japan) at 200x to 400x magnification. This revealed endpoint titres of 1:512 for IgG and 1:64 for IgM in the dog. With a working diagnosis of toxoplasmosis, the dog was started on clindamycin (20 mg/kg IV q12). Eye lubrication and regular limb physiotherapy were instituted as general nursing care.

Two days after treatment for suspected toxoplasmosis was initiated, the animal's condition began to improve, with appetite returning, and it was weaned off oxygen support by day-4 after admission to the emergency hospital. Due to the lack of ambulation, the dog had developed firm stools and constipation, which were managed well with lactulose (0.4 ml/kg PO q12). Head control improved by day-5, and the dog was standing supported by day-7. Dog 1 was released from the hospital on day-10 for ongoing home care with a 28-day course of clindamycin (15 mg/kg PO q12). Although Dog 2 was clinically healthy throughout this period, the attending veterinarian put it on a prophylactic 2-week course of clindamycin (15 mg/kg PO q12). A serum sample was also collected from Dog 2 and screened similarly using IFAT. Though Dog 2 was non-clinical, its test indicated higher antibody titres of 1:32,768 for IgG and 1:256 for IgM. Thereafter, serum samples from both dogs were subjected to monthly IFAT to monitor the antibody titres over time. A consistent decrease in IgG and IgM levels was observed in both dogs every month, except for Dog 1 at the 2-month mark, where IgG and IgM titres remained stable. By the 3-month mark, IgM titres of both dogs were no longer detectable at the cut-off of 1:16, and IgG titres had dropped at least three-log. The antibody titres of both dogs over the three-month period are shown in Fig. 2. By the 3-month mark, Dog 1's neurological signs had fully resolved. A biopsy was taken from the biceps femoris muscle of Dog 1 when he was anaesthetised for routine castration. No T.
gondii DNA was detected in the muscle sample via PCR.

Further investigation in the free-roaming dog population

As an extension of the national biosurveillance programme for free-roaming dogs, serum from free-roaming dogs (FRD) that were trapped from the same offshore island and admitted into AMC was opportunistically obtained and screened for IgG and IgM antibodies against T. gondii. From January 2021 to February 2022, 20 dogs were screened and one dog (5%) was seropositive (IFAT endpoint titres of 1:256 for IgG and 1:256 for IgM). The seropositive dog did not display any clinical signs suggestive of active infection with T. gondii.

Discussion and conclusions

This is the first presumptive case of primary clinical canine toxoplasmosis documented in Singapore, to the best knowledge of the authors. Despite not detecting the presence of parasites in muscle, the clinical case was diagnosed through a combination of consistent clinical signs (ascending paresis and paralysis); serological screening for T. gondii and N. caninum (which ruled out the latter); ruling out of other factors including Lyme disease, trauma, and cycad poisoning; and the response to treatment with clindamycin.

Lyme disease, trauma, and cycad poisoning were important differentials which could have contributed to the acute clinical presentation. Lyme disease was ruled out as the test kit and laboratory PCR tests were negative. Lyme disease has also not been reported in dogs in Singapore, and Singapore is not known to be endemic for the disease.

Trauma from the puppy's fall from a low height was unlikely to be the causative factor, given the absence of any radiological findings, and was unrelated to the rapid clinical progression to respiratory failure. From the history, the fall was more likely to have been secondary to the paresis and general weakness that the puppy was already experiencing.

Cycad poisoning from consumption of cycad seeds was similarly ruled out due to the absence of clinical signs such as vomiting, diarrhoea, and hepatic involvement.

In this case report, the young age of the puppy at the onset of clinical signs, as well as the high IgM antibody titres in both dogs, suggested that the dogs were infected around the same time. The IgM antibody titres subsequently dropped to undetectable levels three months after the onset of clinical signs. Based on the kinetic profile of the IgM and IgG antibodies [13], it is postulated that the T. gondii infection was likely acute and probably occurred shortly before the onset of clinical signs. However, there were no pre-clinical serum samples to confirm this. It should also be noted that the interpretation of the IgG and IgM antibody titres is complicated by the paucity of studies relating antibody titres to the development of clinical signs and the chronicity of toxoplasmosis.

Given the anecdotal absence of cats in the area where the puppies were trapped, the involvement of other sources of infection such as wild felids could not be ruled out. Based on limited studies, the likelihood of detecting antibodies against T. gondii increases with age, due to the increased likelihood of exposure to oocysts disseminated into the environment from cat faeces. Due to their young age, the likelihood of concurrent exposure of both dogs to T. gondii oocysts from the environment was relatively low. A study by Arantes et al.
[14] showed that toxoplasmosis could be sexually transmitted, but it is worth highlighting that none of the offspring from the female dogs (n = 4) in that study that were artificially inseminated with semen from infected dogs survived past 18 days.

Most documented cases of primary clinical toxoplasmosis were either in puppies or were associated with immunosuppression [4,5,7]. In particular, infection with canine distemper virus has been highlighted as lowering resistance to pre-existing T. gondii infection, and even vaccination with modified live canine distemper virus could aggravate the infection [15]. While there was no clear indication of an immunosuppressive condition in Dog 1, including the cause of its alopecia, it is possible that the core canine vaccination, compounded by stress from the new shelter and home environments, could have predisposed the dog to clinical progression. Dog 2, while asymptomatic throughout, was noted to have had higher IgM and IgG antibody titres than Dog 1. This stronger immune response could explain the absence of clinical signs in Dog 2 and supports our postulate that the clinical signs were due to T. gondii infection.

While there is one limited study on the prevalence of toxoplasmosis in cats in Singapore, which documented the prevalence to be around 5% [16], there is no information on the disease prevalence in dogs. While dogs are not the definitive host of T. gondii, infection could still be of public health relevance, as humans could be exposed when in contact with dogs that have encountered contaminated cat faeces via the dogs' body surfaces, mouth, or feet. This is especially so in Singapore's context, where free-roaming cats are relatively common. The epidemiology of the disease in dogs, both on the mainland and on the offshore islands of Singapore, hence warrants further investigation.

Recommendation

Although primary clinical toxoplasmosis is considered rare, this case report indicates the clinical importance of T. gondii infection in dogs (particularly those with outdoor access) and highlights the need for clinicians to include T. gondii infection in the list of differentials for dogs presenting with ascending paralysis and muscle wasting.

Fig. 2: Toxoplasma gondii IgG and IgM antibody titres over a period of 3 months. The IFAT cut-off value of 1:16 is indicated by dashed lines. (A) IgG levels of Dog 1 dropped three-log to a titre of 1:64, (B) while IgG levels of Dog 2 decreased five-log to a titre of 1:1024. For both dogs, IgM titres dropped below the cut-off value at the 3-month mark.
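To make the serology easier to follow, the short sketch below reproduces the two-fold dilution series used in the IFAT and shows how the reported titre drops ("three-log" and "five-log" in Fig. 2) correspond to two-fold titre levels. The helper names and the representation of well readings are illustrative assumptions, not part of the laboratory protocol described above.

```python
import math

def dilution_series(start=16, steps=12):
    """Two-fold serial dilutions starting at 1:16 (1:16, 1:32, 1:64, ...)."""
    return [start * 2 ** i for i in range(steps)]

def endpoint_titre(well_readings):
    """Endpoint titre = highest dilution still showing specific fluorescence.

    well_readings: dict mapping dilution factor -> True/False reading
    (a hypothetical representation of the slide reading, not the actual records).
    """
    positives = [d for d, positive in well_readings.items() if positive]
    return max(positives) if positives else None

def titre_drop_in_logs(initial, final):
    """Number of two-fold titre levels ('logs') between two endpoint titres."""
    return int(round(math.log2(initial / final)))

print(dilution_series()[:6])              # [16, 32, 64, 128, 256, 512]
print(titre_drop_in_logs(512, 64))        # Dog 1 IgG: 3 levels ("three-log")
print(titre_drop_in_logs(32768, 1024))    # Dog 2 IgG: 5 levels ("five-log")
```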
2023-08-04T13:52:16.005Z
2023-08-04T00:00:00.000
{ "year": 2023, "sha1": "9c70f8ff3fb5cfe5f3173d762cd49c23d41238c0", "oa_license": "CCBY", "oa_url": "https://bmcvetres.biomedcentral.com/counter/pdf/10.1186/s12917-023-03674-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6aa08fa39a16ec143e897f5569f80fa68d479016", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
155754673
pes2o/s2orc
v3-fos-license
Regional Environment in the Context of Regional Differences in the Russian Society Background/Objectives: The way the problems of “catch-up” development of Russian regions is presented and fixed in the public mind often provokes inter-regional migration, which narrows to the limits of redistribution of labor resources in the raw areas or cities. Methods/Statistical analysis: This article is based on the methodological apparatus of sociological theories identifying the nature and content of the modernization process and social change in general. The methodological basis is constituted by works of Russian and foreign scientists developing the theory of complex systems, decision-making, raising the questions of regional management and management of large territorial environments. Findings: Changes in the geopolitical situation, the transition to high-performance economy make impossible to claim that if the Russian Federation is a self-sufficient country as for its territory and mineral resources, the development of the regions is provided by “leaders”. In our view, having major regional differences this scheme of “locomotive pulling the wagons” is not optimal, since there are serious battles in the distribution of social and economic resources, and recipient regions are in a position of eternal asylums. Noting that the colossal amount of regional differences relates to the fact that the scheme “the rich – poor” regions is reproduced, and 2/3 of the RF subjects having the proportion of people with incomes below the subsistence level exceeds the Russian average,12 it can be stated, that the regional environment can, if not radically, then substantially change this situation through the new redefining of movement of technologies and intensity of communications. Applications/Improvements: Regional environment should attract more scientists’ attraction as it has sufficient intellectual, social and human potential for social development. Regional Environment in the Context of Regional Differences in the Russian Society Yuriy Sergeyevich Bortsov1*, Sergey Aleksandrovich Dyuzhikov1, Aleksandr Vasilyevich Popov1, Andrey Valeriyevich Rachipa1 and Irina Anatoliyevna Yankina2 1Southern Federal University, Rostov-on-Don, Russia; monblan@yandex.ru 2Taganrog Institute of Management and Economics, Taganrog, Russia Introduction The way the problems of "catch-up" development of Russian regions is presented and fixed in the public mind often provokes inter-regional migration, which narrows to the limits of redistribution of labor resources in the raw areas or cities. A more optimistic picture is represented in the fact that it is important to believe in its own strength for the regional environment, to achieve the required level of self-minded and optimistic-minded people in the contingent of socially and economically self-regulating people. It may be noted that the development of Russian regions in the framework of all-national development strategy does not eliminate the variability of approaches to regional problems. Moreover, the implementation of federal target programs, inter-regional and regional projects shows which development priorities are primacy that depends on the independence and competence of the regional environment. 
Expectations of growth, especially associated with the arrival of foreign investors, often drives in a situation of unjustified expectations, and the position of those regional communities, where mechanisms of self-support and self-development are found at least at the local level (Novgorod, Pskov, Yaroslavl, Nizhny Novgorod, Rostov regions) seems winning. Literature Review and Research Methodology In the context of our study the following works directly related to the problems stated in the article are the most important: certain problems of modernization Verhofstadt,4 Cohn-Bendit, 5 Tabellini 6 consider theoretical and practical aspects of the region management, regional development; issues about modernization of large territorial societies are payed attention to by Beck and Grande, 7 special aspects of regional potential are considered by Posukhova and Zayats 8 ; Volkov 9 identifies and analyzes different actors of the regional administration and development. This article is based on the methodological apparatus of sociological theories identifying the nature and content of the modernization process and social change in general. The methodological basis is constituted by works of Russian and foreign scientists developing the theory of complex systems, decision-making, raising the questions of regional management and management of large territorial environments. When writing this article, the authors used a series of scientific and special methods designed to ensure a holistic paradigmatic understanding of regional issues in a regional differentiation of Russian regional environment. For this purpose, primarily a systemic approach was used which provides wide, multi-dimensional picture of the functional unity of regional individual elements. Also, the authors used such classical methods as analysis and synthesis, methodology of knowledge of general and special, primary and secondary, historical and logic, dialectics of complex phenomena contradictory development. Conceptual, attributive and functional approaches to research have also found the application of their own. Authors rely on the results of empirical studies, data of expertise and analytical reports of All-Russian sociological studies. Main Results Duality and ambiguity of globalization practices reinforces the need not only in the definition of national-cultural civilizational identity, 10 but also makes regional identity, the awareness of belonging and involvement in the life of the region as a social and social and cultural space an important component. At that it is an interesting fact that during this period, although the proportion of Russians who are highly dependent on the state and social policy carried by it remained almost unchanged (34%), 10 Growth in employment, self-employment, expansion of self-regulating legal practices commits to the fact that in the conditions of occurrence of large corporations or rigidity of public policy the involvement of citizens into the solution of regional current affairs is recognized, strong opinion about the benefits of social and political participation is formed. Based on the fact that none of the regional environment as a collective entity should have precedence over the other, it may be said that a sharp decline in political activity of citizens may be well offset by an interest in social development issues. It would be wrong to assume that indifference to the regional life has a total character: from 12% to 14% of Russians insistently show interest in regional issues. 
10 On the one hand, it seems a minor indicator, on the otherweighty enough to drive from the dead-lock life in regional environment. The allegation that the regional environment is passive, may acquire a character of some general sociological constants. But in general in real life regional environment deviates from the stated position, because the emphasis in social development can be differently arranged. A key problem of regional environment is the lack of consolidation and mobilization and non-inclusion of real resources, which the region has in addition to the real factors. What is seen as a way out of this situation and what should be done to ensure the involvement of the regional environment into the social development really, tangibly and effectively? Attention is drawn to the fact that in Russian regions the political activity discredited itself, because it is linked with interests of neither specific individuals, or specific groups of people or the region as a whole. The social activity is defined as desired, but negatively assessed according to the criterion of the impact on decisions in a regional management system. Recently it has become commonplace to appeal to the state of the regional authority, to increase political and public control over the activities of the regional administrations and to connect them with the voters' interests. Without disputing the desirability of democratic and accountable forms of regional management, we one should talk about the necessity in a sharp increase of educational potential, social potential of regional environment in order to the fact that the system of regional management requirements should, at least, correspond to the state of environment, that at the level of regional management not visitors from the capital or leaders with gifts to the people are expected, but really qualified personnel that can be co-opted from the regional environment, secondly, to formulate universally valid priorities for regional development; third, to create an atmosphere of mutual trust alongside with hopes for that with the increasing of social activity of people a qualitative change in a crisis situation can occur. Sociological analysis shows that changes in selfperception of people in Russian society over the past two decades are truly enormous. And they are manifested primarily in the substantial reduction of those who feel themselves as social outsiders, simultaneously in the increase of those considering themselves as members of the middle strata. 10 This certainly affects the stability in the Russian society, but, unfortunately, the consolidation of social complacency at the individual level is not modeled on the state of the activity at the level of regional environment. Feeling themselves quite socially-satisfied, people often consider position in the regional environment as unsatisfactory. This can be explained by what is called microscopisation of social relations and individualization of life strategies, narrowing the sphere of security and social belonging. One forgets that assessing the situation in the country as a crisis one (67%), 10 one should remember that Russia is regionally heterogeneous and, along with the negative depressive tendencies in the regional map of the country there are regional environments, fitting into the contemporary socio-economic context. 
Regional environment objectively becomes a subject of social development, because it is a subject of preservation of the state and, at the same time, solution to a question of regional management, heterogeneous naturally, economically, socially and culturally. The inefficiency of the economic and political hypercentralism allows broadening of powers of regional development. But the real shift from administrative to social practices is possible if regional environment involves into the processes of regional development by applying efforts at different levels. Economic considerations may play some role, but the main reason is a request for a life quality increase in regional environment. The most popular qualities are the concernment in regional interest, diligence, initiativity, respect to people and pursuit of long-term planning. Discussion Agreeing with the statement that the regional environment is not identical to the people, that certain time and provided that quite a dense layer of professional managers has been formed in the regional space. In contrast to the popular belief that people expect only positive distributive social policy from the regional management, the increase of such criteria as professionalism and competence of officials, their diligence and efficiency, initiative, good organizational skills is noted. 11 In other words, people's groups representing regional environment, in spite of constant social independence, realize that the electoral mechanisms are not a sufficient guarantee to improve the quality of regional management, as well as the effectiveness of regional management is determined by openness, receptivity and innovativeness of regional environment. As noted above, people clearly have prejudice to the role of regional environment, although the majority of Russians recognize that it is hardly possible to ensure sustainable development of the regions without the activity at the regional level: there is, apparently, a lack in bringing society in a positive and mobilization condition, confidence in the fact that the most important problems can be solved with the help of the initiative in the region without the participation of the center or in connection with the center. Accusing officials in indifference to the interests of the country (41% of people), and noting the low business skills, incompetence, there is a feeling of an outsider or considerations about the fact that the life in the region will straighten out. 10 The task of people is to assist in the work of the regional authorities. And this is a minimum requirement. It is a breakaway from the indifference position that laid as it seems the potential of regional environment consolidation. And the thing is as follows. For the regional development the formation of a new image of regional "we" in a complex controversial regional context is of significant importance, which would insert successfully into a new system of relationships, developed new social meanings and values 10 . Perceptions about the region as a dependent and passive community correspond only to the dependent control formula. Another negative consequence of existing things is the fact that, speaking about the problems of regional life, complaining about arbitrary and selfishness of power, the regional subjectivity is perceived as some unachievable ideal. 
Thus, it can be said that for bringing the regional environment in the required subjective state, firstly, it is necessary to identify and promote, facilitate and open up new opportunities for the groups of social growth; understand that regional authorities cannot solve these problems alone out of mobilizing resources of regional environment. This is not about a resource of patience or conformism. It is much more important to pay tribute to the fact that the regional environment can be considered as a collective self-esteem, self-awareness of the situation in the region and of those problems that are by no means similar for all Russian regions, and demonstrate that the social development of the regions becomes sustainable if starts out from the balance of regional and national interests. The phenomenon of chronic inefficiency of regional life, reflected both in the minds of officials and the majority of Russians, is closely related to the fact that the regional environment serves only a passive consumer of social goods. The fact that the breakthrough of separate regions (Belgorod, Kaluga regions) suggests otherwise, has a considerable reason. Using either agricultural or professional qualification, or logistical resources, we can achieve considerable results, not pretending to be a leader or a famous example of regional development. It is important to emphasize that, despite the criticism of the regional elites, or the center, the representatives of the regional environment are not less likely often evaluate their own state negatively as well. Respondents often say about the fact that they are faced with the inertia of regional life, the narrowness of the regional cooperation: they point out that the level of mutual trust and cooperation narrowed down to the scope of the family or friends. These estimating reactions fit well into the society disintegration and that is why the biggest confidence in the institution of the President and quite often indifferent or cautious attitude to the structures of regional government exist. Believing it is necessary to provide economic and other incentives and preferences to regions, there is a question about the effectiveness of preferential treatment (as evidenced by the unsuccessful yet experience of the Kaliningrad region and the Primorsky Territory), and that the regional environment has not developed criteria for the effectiveness of the management structures in specific situations in specific regional context. In varying degree, the social development of the Russian regions is uneven. However, it is possible to distinguish substantial enough possibilities for more efficient usage of the regions' capacity for sustainable dynamic development. One mentions the traditional fuel and energy potential, including energy efficiency and transformations are required, so as a working social community will appear, this period cannot be attributed to the distant future. Furthermore, assessment of the state of regional societies suggests that the improvement of situations in economic and social spheres is connected with the fact to what extent people feel that they belong to the regional community, at that differentiating in the assessment of their own socio-status positions and social complacency they can have a good opportunity to realize themselves as a representative of the regional environment, to start from regional identity as the current formula of social cooperation. 
The lack of consolidating intentions and the tendency to social self-isolation reproduce blaming subjectlessness. The urgency of the formation of regional environment subjectivity cannot be given whomsoever. The influence of regional management system, as well as institutions engaged in regional representation, is irreplaceable in this process. The fact that, in such a way, the regional environment tends to increase, that gradually, albeit slowly, is formed as a collective subject of social development, one can see prospects of shift of regions management from the administrative-centralized to participial, socio-programmed system. The target function of the management system is increasing which is designed to ensure regional authorities' action in coordination with the center as representatives and those who are interested in the social development of the regions. Until now, however, a negative impact of regional disparities remains obvious. The policy of redistribution of budgetary resources, mitigates inequalities, but it is mainly "patches" holes and is aimed at promoting social modernization. 12 The formation of regional environment as a subject of social development entails as an important consequence that, first, the region ceases to be a zone of borrowing resources, recipient deprived of desire to build up its own funds. Second, the stability is acquired by the fact that the best coordination mechanisms of the regional authorities and the various regional groups of the society are forming. Third, as for the regional environment it should be also noted that in Russian society fundamental changes have taken place which relate to the fact that the number of paternalistically-spirited and socially depended groups sharply reduced. In terms of the acquisition of the controlling by regions of a new social quality (transfer of social sphere within the competence of regional management), it is important to has sufficient intellectual, social and human potential in order to, in spite of the gaps between the quality of life and resource potential, set ambitious but achievable targets for sustainable social development and to determine in general the consolidation increase of Russia's space as an adequate response to the risks of destabilizing the economic and social separation of the country.
2019-05-17T14:39:58.489Z
2016-02-09T00:00:00.000
{ "year": 2016, "sha1": "56eab30c43a98b9025c60ea77b25617632158834", "oa_license": null, "oa_url": "https://doi.org/10.17485/ijst/2016/v9i5/87608", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d777e3cc751477a75ba1c86d7818184ca4a87a0f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Political Science" ] }
239729880
pes2o/s2orc
v3-fos-license
Intelligent IoT-Aided Early Sound Detection of Red Palm Weevils

Smart precision agriculture utilizes modern information and wireless communication technologies to achieve challenging agricultural processes. Therefore, Internet of Things (IoT) technology can be applied to monitor and detect harmful insect pests, such as red palm weevils (RPWs), in farms of date palm trees. In this paper, we propose a new IoT-based framework for early sound detection of RPWs using a fine-tuned transfer learning classifier, namely InceptionResNet-V2. The sound sensors, namely TreeVibes devices, are carefully mounted on each palm trunk to set up wireless sensor networks in the farm. Palm trees are labeled based on the sensor node number to identify infested cases. Then, the acquired audio signals are sent to a cloud server for further on-line analysis by our fine-tuned deep transfer learning model, i.e., InceptionResNet-V2. The proposed infestation classifier has been successfully validated on the public TreeVibes database, which includes a total of 2485 short recordings, of which 1754 are clean and 731 are infested signals. Compared to other deep learning models in the literature, our proposed InceptionResNet-V2 classifier achieved the best performance on the public database of TreeVibes audio recordings, with a classification accuracy score of 97.18%. Using 10-fold cross validation, the fine-tuned InceptionResNet-V2 achieved the best average accuracy score and standard deviation of 94.53% and ±1.69, respectively. Applying the proposed intelligent IoT-aided detection system of RPWs in date palm farms is the main prospect of this research work.

Introduction

Date palm trees are not only a food source but also one of the main sources of income in the Arab world, especially in Saudi Arabia [1]. Although the date palm is the oldest known cultivated tree, it has recently been confronted with several contemporary issues. The red palm weevil (RPW), or Rhynchophorus ferrugineus, which originated in Asia and was first discovered in the Gulf region in the 1980s, has caused enormous losses to date palm farmers [2]. Over the last four decades, the RPW has steadily extended its global reach. The species quickly spread across the Middle East's Gulf area, North Africa's Maghreb countries, and Europe's Mediterranean basin countries, as shown in Fig. 1. The elimination of highly infested trees alone costs the Gulf countries and the Middle East around 8 million dollars per year, as reported by the Food and Agriculture Organization (FAO) [3]. As a result, a cost-effective solution to the RPW dilemma would not only save the "blessed" palm tree but also assist farmers and governments in reversing the mounting financial losses that have arisen. Decision-makers, academics, and farmers generally believe that early recognition could save thousands or even millions of healthy trees through simple steps to quarantine infested trees and secure non-infested trees and offshoots [4]. Accordingly, early detection of the infestation could, in principle, be decisive in the fight against the RPW. Since obvious early symptoms of invasion appear only once it is too late to save the tree, the red palm weevil poses a significant threat to palm tree preservation. The weevil's cryptic behavior and inherent biological characteristics have made it very difficult to recognize and thereby manage. Multiple characteristics of the RPW characterize its growth stages, as depicted in Fig. 2 [5].
Nowadays, the elements of RPW management systems have many weaknesses and difficulties. Early recognition of the weevil infestation, limitations of biological control measures in field circumstances, and a lack of farmer engagement in control operations are some of these issues [6]. Therefore, the early detection of such predators does provide the best chance of eradicating them and minimizing the likelihood of palm tree damages. Fortunately, with the advancement in artificial neural networks (ANN) technology, the detection possibility of RPW in its early stages could be achieved [7]. Visual examination of the tree for symptoms appearance, identification of the sound made by nourishment larvae as well as chemical analysis of volatile signatures generated by infested date palms are considered the most known RPW early detection approaches [8]. Also, thermal imaging to monitor temperature changes in invaded palms is another classical method to detect RPW. The most expensive approach is sound detection and the most reliable one is based on the observation of symptoms [9]. So instead of focusing on conventional detection approaches, there is a need to develop computational solutions for finding and controlling RPW species that are both accurate and permanent. It is also important to figure out what main characteristics to look for when identifying plagued palm trees. Lately, a fusion of computer science, sensors, and advanced electronic technology is being used to construct reasonable and fast mechanisms for automatic recognition of RPW. Acoustic devices, X-ray imaging, remote sensing tools, and radio telemetry are among the most significant and promising new trends to control the RPW [10]. Smart precision agriculture exploits modern information and wireless communication technologies to achieve challenging agricultural processes and/or regular tasks automatically [11,12]. For instance, Internet of Things (IoT) has been applied in real-life applications of agriculture, such as precision management of water irrigation, crop diseases and insect pests [13][14][15]. IoT mainly relies on wireless sensor network (WSN) to measure and collect environmental parameters like soil moisture and humidity. Then, these collected data can be saved or analyzed to assist decisions of specialists or famers, and/or to operate water irrigation pumps [11]. Artificial intelligence (AI) techniques such as machine learning and deep learning models [16,17] have been recently employed to analyze acquired agricultural and environmental data. Crop health monitoring presents a major aspect of smart precision agriculture [18], specifically identifying the infectious status of insect pests in the farm. Traditional techniques and manual detection of insect pests are insufficient, time-consuming and relatively expensive. Therefore, early detection of the plant pests is a high priority for farmers to use suitable pesticides, avoiding the loss of crops [19]. Hence, the focus of this study is proposing a new solution for continuous health monitoring of date palm trees against RPWs by using IoT and deep learning models. Transfer learning approach overcomes the drawback of traditional deep learning methods in case of small dataset and limited resources for training phase. The main idea of this approach is transferring the knowledge from a similar task, and again using the pre-trained deep learning model for achieving another task with minimal computation power [20]. 
Advantages of transfer learning technique have been widely exploited in several applications, e.g., medical and healthcare systems [21,22], industrial manufacturing [23], and robotic systems [24]. Moreover, transfer learning models showed significant results of achieving smart water irrigation [11] and plant diseases and pest's classification [25] in the field of agriculture. Convolutional neural networks (CNNs) are still the most popular deep learning method. Residual neural networks (Resnet) [26], MobileNet [27] and Xception [28] are three of well-known pretrained models based on transfer learning approach. This paper presents a new IoT-based sound detection system of RPWs using deep transfer models of residual inception networks. The main contributions of this study are presented as follows. • Proposing a new intelligent detection system of RPW sounds at early stage of infectious date tree palms using IoT. • Developing a transfer learning-based classifier to accomplish accurate sound identification of RPWs. • Conducting extensive tests and comparative study of our proposed method with other methods in previous studies to validate the advanced capabilities of our early detection system of RPW. The reminder of this paper is divided into the following sections. Section 2 introduces a review of previous studies including different machine and deep learning models for identifying RPWs. The public sound dataset of RPWs, architecture of transfer learning models and description of our proposed RPW detection system are presented in Section 3. Evaluation results and discussions of deep learning classifiers to detect clean and infested palm trees are given in Sections 4 and 5, respectively. Section 6 presents conclusions and outlook of this study. Related Works Deep learning (DL) is a cutting-edge machine learning (ML) technology that does quite well in various tasks such as image classification, scene analysis, fruit detection, yield estimation, and many others [29]. DL can create new features from a restricted range of features in the testing dataset, which is considered one of the key advantages over other ML algorithms. Convolutional Neural Networks (CNN), Fully Convolutional Networks (FCN), Recursive Neural Networks (RNN), Deep Belief Networks (DBN) and Deep Neural Networks (DNN) are examples of DL architectures that have been widely deployed to a variety of research domains [30]. Recently, several DL techniques have been applied to different agricultural-based methods with increasing significance. Researchers in [31] performed a survey of several DL techniques applied to different agricultural issues. The authors examined the models employed, the data source, the hardware utilized, and the probability of real-time deployment to investigate future integration with autonomous robotic mechanisms. The authors in [9] tested the ability of ten cutting-edge data mining classification models to forecast RPW infections in its early phases, before major tree damage occurs, using plant-size and temperature measurements obtained from individual trees. The experimental results demonstrated that using data mining, RPW infestations could be expected with an accuracy of up to 93%, a precision of above 87%, a recall of 100%, and an F-measure of greater than 93%. To identify the presence of RPW using its own bioacoustics features, a new signal processing platform has been designed [32]. An analysis of the parameters for selecting the best time frame length as well as window feature is given. 
The findings indicate that the established method with the selected representative characteristics is more effective. The authors in [33] proposed a study to create algorithms that can classify the RPW and differentiate it from other insects present in palm tree habitats using image recognition and artificial neural network (ANN) techniques. It was discovered that an ANN of three-layer using the Conjugate Gradient with Powell/Beale Restarts method is the most effective for identifying the RPW. In [34], the normal and thermal images of palm trees have been used to detect RPW, blight spots and leaf spots diseases. CNN was used to distinguish between blight spots and leaf spots infection, and support vector machine (SVM) was used to distinguish between the leaf spots and RPW pests. The accuracy ratio success rates for the CNN and SVM algorithms were 97.9% and 92.8 percent, respectively. Based on remote images from the Alicante area in Spain, researchers in [35] introduced the first region-wide geographical collection of Phoenix dactylifera and Phoenix canariensis palm trees. The presented detection model, which was created using RetinaNet, offers a quick and easy way to map isolated and densely dispersed date and canary palms and other Phoenix palms as well. In order to monitor palms remotely, an IoT-based smart prototype has been also suggested for the early detection of red palm weevil invasion [36]. The data is collected using accelerometer sensors, then signal processing and statistical methods are applied to analyze this data and define the attack fingerprint. In [37], a solution for early identification of RPW in large farms is presented using a hybrid of ML and fiber-optic distributed acoustic sensing (DAS) system. The obtained results showed that ANN with 99.9% and CNN with 99.7% accuracy values can effectively distinguish between healthy and infested palm trees. Tab. 1 summarizes the related work review and their main characteristics. It is evident from the literature that image-based methods had high accuracy values compared sound-based ones. This motivates this work to consider the improvement of sound detection models. TreeVibes Device and Dataset Sounds of RPW and other borers are collected using the TreeVibes recording device, as depicted in Fig. 3 [38]. It is a public database and is freely available at http://www.kaggle.com/ potamitis/treevibes (last accessed on 20 February 2021). The piezoelectric crystal as an embedded microphone has been used for sensing the vibrations inside the trees, i.e., sounds of the borers including the RPWs. The acquired signal can be converted to short audio signals to be stored and transmitted through wireless IoT networks and a cloud server. An example of mean spectral profile for three different sounds of a RPW is shown in Fig. 3b [39]. The TreeVibes device cannot count the number of borers or their location in the palm tree, but it is able to detect their feeding sounds within 1.5 to 2 m radius of a spherical region [38]. Therefore, the positioning of TreeVibes sensing device on the trunk of palm tree is crucial to achieve the expected early detection performance. The TreeVibes database includes 35 folders with short and annotated audio recordings. The sampling frequency of these recordings is 8.0 kHz. All tree vibration sounds have been saved in a wave format. The proposed classifier was trained and tested on 26 and 9 folders, respectively. 
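As a rough illustration of how the TreeVibes recordings described above could be prepared for the classifiers discussed in the following sections, the sketch below loads the 8 kHz wave files and converts each one into a fixed-size, image-like spectrogram. The folder layout, the use of librosa, and the mel-spectrogram parameters are assumptions; the paper only states that the recordings are wave files sampled at 8.0 kHz and, later, that the classifier inputs are scaled to 224 × 224.

```python
from pathlib import Path

import librosa
import numpy as np

def load_treevibes_folder(folder, label, sr=8000, size=224):
    """Load .wav recordings from one TreeVibes folder and return (spectrograms, labels).

    folder : path to a folder of short recordings (layout assumed, not specified in the paper)
    label  : 0 for clean, 1 for infested
    """
    specs, labels = [], []
    for wav_path in sorted(Path(folder).glob("*.wav")):
        y, _ = librosa.load(wav_path, sr=sr)                  # resample to 8 kHz
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=size)
        mel_db = librosa.power_to_db(mel, ref=np.max)         # log scale for CNN input
        # crude resize of the time axis to a square image (assumed preprocessing)
        idx = np.linspace(0, mel_db.shape[1] - 1, size).astype(int)
        img = mel_db[:, idx]
        specs.append(np.repeat(img[..., None], 3, axis=-1))   # replicate to 3 channels
        labels.append(label)
    return np.stack(specs), np.array(labels)
```

Any equivalent time-frequency representation would serve the same purpose; the essential point is that each short recording becomes an image-like input for the transfer learning models described next.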
The infested data represent the potential sounds of feeding and/or moving wood-boring pests, such as Rhynchophorus ferrugineus (red palm weevil), Aromia bungii (red-necked longicorn), and Xylotrechus chinensis [35]. In this study, we assumed that the infested status of date palm trees is caused by RPWs only.

Transfer Learning Models

This section gives an overview of the proposed transfer learning models for identifying the health status of palm trees. First, the different versions of the Inception architecture are described, highlighting the main features of each transfer learning model. Second, the merging of the Inception and ResNet architectures in the Inception-ResNet model is presented, with a comparison between the two structures of Inception-ResNet-V1 and V2.

Fig. 4 depicts the main architectures of two versions of the Inception model. The first version of the Inception classifier was introduced by Szegedy et al. [40] to advance beyond the state-of-the-art classifiers of the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). Inception-V1 improved detection and classification accuracy by increasing the depth and width of the CNN model at a constant computational cost. The optimized Inception-V1 architecture was based on the Hebbian principle and multi-scale processing. The 22-layer Inception-V1 with dimension reduction is also called GoogleNet [40], as shown in Fig. 4a.

Figure 4: (a) Inception-V1 with dimension reduction (GoogleNet) [40], and (b) Inception-V2 [41]

Inception-V2 and V3 were introduced in 2016 by the same Google research group, as shown in Fig. 4b [41]. The 5 × 5 convolution module of Inception-V1 was replaced by two 3 × 3 convolutions in these advanced Inception models. In addition, Inception-V3 added other capabilities to enhance its network architecture as follows. First, factorized 7 × 7 convolutions and the RMSProp optimizer are used. Second, the auxiliary classifiers are batch normalized. Third, label smoothing is applied to prevent overfitting in the deep network.

Inception-ResNets

Inception-ResNet and Inception-V4 were presented to validate the positive influence of residual connections on deep learning-based classification [42]. Here, the Inception models were modified through "reduction blocks", which change the width and height of the grid network architecture. The functionality of the reduction blocks was inspired by the outstanding performance of the residual neural network, namely ResNet [26]. The hybrid Inception-ResNet module resulted in two sub-versions, namely InceptionResNet-V1 and V2 [42], as depicted in Fig. 5. Both InceptionResNet-V1 and V2 have the same structural modules and reduction blocks. Nevertheless, the computational costs of Inception-ResNet-V1 and V2 are similar to the computing budgets of Inception-V3 and Inception-V4, respectively. Hyper-parameter settings such as the optimizer and batch size are the only differences between these two Inception-ResNets.
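To make the above concrete, the minimal Keras sketch below shows one way to fine-tune an ImageNet-pretrained InceptionResNet-V2 for the two-class (clean vs. infested) task. Only the 224 × 224 input size, the softmax output, the Adamax optimizer, and the 0.001 learning rate reported later in the evaluation section are taken from the paper; the classification head, the dropout rate, the three-channel spectrogram input, and the decision to keep the whole backbone trainable are illustrative assumptions.

```python
import tensorflow as tf

def build_rpw_classifier(input_shape=(224, 224, 3), num_classes=2):
    """Illustrative fine-tuned InceptionResNet-V2 for clean vs. infested recordings."""
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = True  # full fine-tuning; freezing part of the backbone is an alternative

    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
    x = base(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)          # assumed regularisation, not stated in the paper
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_rpw_classifier()
model.summary()
```

An equally valid variant would freeze most of the backbone and train only the new head first; the paper does not specify which layers were updated during fine-tuning.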
Proposed RPW Detection System

A schematic diagram of our proposed intelligent IoT-aided detection system of RPWs is shown in Fig. 6. The proposed RPW detection system includes three main modules. First, TreeVibes sensor devices are carefully mounted on each palm trunk to set up a wireless sensor network in the farm, and palm trees are labeled by sensor node number so that infested cases can be identified. Moreover, the location of an infested tree can be monitored on the farm map using the global positioning system (GPS) associated with the TreeVibes device [38]. Second, a cloud server receives the wirelessly acquired sound signals of the palm trees. These audio data can be stored on the cloud server for further on-line analysis by our fine-tuned deep transfer learning model, i.e., InceptionResNet-V2, as depicted in Fig. 6. Third, the binary classification of clean and infested trees can be performed automatically either on the cloud server or on the user's computer system.

In the field, many different sounds can be recorded: the agricultural environment includes bird and animal vocalizations, rain and wind, and the voices of farm workers. It may therefore be challenging to separate the sounds of RPWs inside trees from these external noise sources. However, the audio signals generated by RPW borers are characterized by distinctive impulsive trains (see Fig. 3b), which helps the proposed Inception-ResNet-V2 classifier achieve accurate performance, as presented in Section 4.

Performance Analysis Metrics

The performance of the fine-tuned InceptionResNet-V2 classifier for identifying RPW infestation was analyzed using the following evaluation metrics. Cross-validation estimation [43] is used to build a confusion matrix. A 2 × 2 confusion matrix contains the following possible outcomes for the two predicted classes: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). From these counts, the classification measures of accuracy, recall (sensitivity), precision, and F1-score are given in Eqs. (1)-(4), respectively:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)
Recall = TP / (TP + FN)    (2)
Precision = TP / (TP + FP)    (3)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)    (4)

Evaluation Results

All sounds of RPWs and other borers collected from the public dataset [38] were scaled to 224 × 224 audio windows to enhance the performance of the tested classifiers. In this study, we assumed that the sounds of pest borers inside trees are similar to the feeding and/or moving sounds of RPW larvae, so they are all labeled as infested cases. The proposed InceptionResNet-V2 classifier and the other transfer learning models were implemented in the open-source Python programming language with the TensorFlow and Keras packages [44]. The implemented RPW sound classifiers were executed on a laptop with a 4 GB NVIDIA GPU, 16 GB RAM, and an Intel(R) Core(TM) i7 2.2 GHz processor. Classification of infested and clean palm trees was performed automatically using the proposed InceptionResNet-V2 model, as shown above in Fig. 6. The sound recordings of the pest database were randomly split 80%-20% for the training/validation and test phases. The value of each hyperparameter was carefully tuned for the proposed InceptionResNet-V2 as follows: the number of epochs and the batch size are 40 and 60, respectively, and the learning rate is 0.001. Adamax, an update of the stochastic Adam optimizer [45], was applied to achieve the targeted convergence during training. A softmax activation function in the output classifier layer was used to separate the infested and clean classes of palm tree sounds. Moreover, the same hyperparameter values were used for the other transfer learning models to allow a fair comparison with our proposed InceptionResNet-V2 model in this study.
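The sketch below illustrates how Eqs. (1)-(4) and the 10-fold cross-validation reported in this study could be computed. The array names, the use of scikit-learn's StratifiedKFold, and the build_model callable (e.g., the InceptionResNet-V2 sketch shown earlier) are assumptions for illustration, since the original experimental code is not provided.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def confusion_counts(y_true, y_pred):
    """TP, TN, FP, FN for a binary problem where 1 = infested, 0 = clean."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp, tn, fp, fn

def scores(y_true, y_pred):
    """Eqs. (1)-(4): accuracy, recall, precision, F1-score."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

def cross_validate(X, y, build_model, n_splits=10, epochs=40, batch_size=60):
    """10-fold cross-validation; X, y, and build_model are assumed inputs."""
    accs = []
    for train_idx, test_idx in StratifiedKFold(n_splits=n_splits, shuffle=True).split(X, y):
        model = build_model()
        model.fit(X[train_idx], y[train_idx], epochs=epochs, batch_size=batch_size, verbose=0)
        y_pred = np.argmax(model.predict(X[test_idx]), axis=1)
        accs.append(scores(y[test_idx], y_pred)[0])
    return np.mean(accs), np.std(accs)  # reported as mean accuracy ± standard deviation
```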
Discussions
Advanced IoT-based health monitoring of date palm trees is becoming essential for preserving crop productivity and preventing the high tree mortality caused by RPWs. Automatic early detection of RPW infestation can be achieved from the vibration sounds of feeding and/or moving RPWs acquired via the TreeVibes sensing device [38], as shown in Fig. 6. In addition, deployed deep learning models such as the fine-tuned InceptionResNet-V2 classifier can achieve good classification performance in accurately identifying infested and clean tree status, as illustrated in Tabs. 2 and 3.

In this study, our proposed InceptionResNet-V2 classifier was compared with five transfer learning models, namely ResNet-50, MobileNet, DenseNet-121, EfficientNetB0, and Xception. These five models were previously investigated by Rigakis et al. [38], who showed that the Xception model is the top-ranked classifier of pest sounds inside tree trunks, achieving an average classification accuracy of 94.16% with a minimal standard deviation of ±0.99 based on 10-fold cross-validation. In comparison, the fine-tuned InceptionResNet-V2 demonstrated a competitive classification accuracy of 94.53% with a higher standard deviation of ±1.69, as presented in Tab. 3. The only limitation of our proposed classification model is its large size of 215 MB, which requires additional resources to accomplish the early detection task of RPWs in the proposed IoT network framework, as depicted in Fig. 6. However, using cloud computing services can mitigate this problem of limited hardware resources and limited availability of GPUs at the end user.

Although the security and privacy issues of IoT-based smart farming have been discussed in recent studies [12][13][14], these issues are not considered in this study, because a single security protocol for IoT systems in agriculture is still not sufficient to prevent information leakage [12]. Nevertheless, the basic requirements of secure IoT-based agricultural systems can be fulfilled, i.e., authentication, access control, and confidentiality for the stakeholders. Other integrated sub-systems, such as protection, fault-diagnosis, and reaction systems against danger and cyberattacks, should also be considered in the security model of smart precision agriculture systems. These security requirements and sub-systems will be considered in a future version of our proposed intelligent IoT-based detection system of RPW sounds.

In addition, selecting the hyperparameter values of transfer learning models is a complicated iterative process. Therefore, recent studies have suggested the use of bio-inspired optimization techniques such as the whale optimization algorithm (WOA) [46] and adaptive particle swarm optimization (APSO) [47] to automate the design of deep neural networks, at the cost of increased training computation. Nevertheless, our proposed InceptionResNet-V2 still achieved the best classification performance for RPW sound detection, as illustrated in Tabs. 2 and 3.

Conclusions
In this study, a new IoT-based early detection system for RPWs has been developed based on acquired sounds of palm trees. The TreeVibes sensing device was used to acquire and record short vibration sounds of RPWs inside palm trees. Here, the role of the cloud services is to store these recorded sounds and forward them to the deep learning classifiers at the end user, as shown in Fig. 6. A deep transfer learning model, InceptionResNet-V2, was fine-tuned to distinguish between clean and infested trees, as depicted in Fig. 5.
Using 10-fold cross-validation, the developed classifier showed superior performance over the other transfer learning models from previous studies, achieving an accuracy score of 94.53% ± 1.69, as given in Tab. 3. In future work, we aim to minimize the required computing resources by exploiting cloud computing services [48], so that the early RPW detection task can be delivered through a mobile application that guides specialists and farmers. We are also working on enhancing the classification accuracy of our proposed system by using other advanced deep learning models, e.g., generative adversarial networks (GANs) [49], and on addressing the security and privacy aspects of open IoT network communications [50,51] so that the sound data of date palm trees can be transmitted safely.
Lectures on Minimal Surface Theory An article based on a four-lecture introductory minicourse on minimal surface theory given at the 2013 summer program of the Institute for Advanced Study and the Park City Mathematics Institute. Introduction Minimal surfaces have been studied by mathematicians for several centuries, and they continue to be a very active area of research. Minimal surfaces have also proved to be a useful tool in other areas, such as in the Willmore problem and in general relativity. Furthermore, various general techniques in geometric analysis were first discovered in the study of minimal surfaces. The following notes are slightly expanded versions of four lectures presented at the 2013 summer program of the Institute for Advanced Study and the Park City Mathematics Institute. The goal was to give beginning graduate students an introduction to some of the most important basic facts and ideas in minimal surface theory. I have kept prerequisites to a minimum: the reader should know basic complex analysis and elementary differential geometry (in particular, the second fundamental form and the Gauss-Bonnet Theorem). For readers who wish to pursue the subject further, there are a number of good books available, such as [CM11], [Law80], and [DHS10]. Readers may also enjoy exploring Matthias Weber's extensive online collection of minimal surface images: http://www.indiana.edu/~minimal/archive/. All but one of the illustrations in these notes are taken from that collection. If I had a little more time, I would have talked more about the maximum principle and about the structure of the intersection set for pairs of minimal surfaces. See for example [CM11, 1. §7, 6. §1, 6. §2]. If I had a lot more time, I would have talked about geometric measure theory, which has had and continues to have an enormous impact on minimal surface theory. Almgren's 1969 expository article [Alm69] remains an excellent introduction. Morgan's book [Mor09] is a very readable account of the main concepts and results. In many cases, he describes the key ideas of proofs without giving any details. For complete proofs, I recommend [Sim83]. In the last few years, there have been a number of spectacular breakthroughs in minimal surface theory that are not mentioned here. See [MP11] for a survey of many of the recent results. I would like to thank Alessandro Carlotto for running problem sessions for the graduate students attending my lectures and for carefully reading an early version of these notes and making a great many suggestions. I would also like to thank David Hoffman for additional suggestions. The notes are much improved as a result of their input. The First Variation Formula and Consequences Let M be an m-dimensional surface in R n with boundary Γ. We say that M is a least-area surface if its area is less than or equal to the area of any other surface having the same boundary. To make this definition precise, one has to specify the exact meaning of the words "surface", "area", and "boundary". For example, do we require the surfaces to be oriented? But for the moment we will be vague about such matters, since the topics we consider now are independent of such specifications. In the Plateau problem, one asks: given a boundary Γ in Euclidean space (or, more generally, in a Riemannian manifold), does there exist a least-area surface M with boundary Γ? If so, how smooth is M ? 
For example, if m = 1 and Γ consists of a pair of points in R n , then the solution M of the Plateau problem is the straight line segment joining them. In general, however, even proving existence is very nontrivial. Indeed, in 1936 Jesse Douglas won one of the first two 1 Fields Medals for his existence and regularity theorems for the m = 2 case of the Plateau problem. We will discuss those results in lecture 4. Now we consider a related question: given a surface M , how do we tell if it a least-area surface? In general, it is very hard to tell, but the "first-derivative test" provides a necessary condition for M to be a least-area surface: If M t is a one-parameter family of surfaces each with boundary Γ, and if M 0 = M , then d dt t=0 area(M t ) should be 0. For the test to be useful, we need a way of calculating the first derivative: Theorem 1 (The first variational formula). Let M be a compact m-dimensional manifold in R n . Let φ t : M → R n be a smooth one-parameter family of smooth maps such that φ 0 (p) ≡ p. Let X(p) = d dt t=0 φ t (p) be the initial velocity vectorfield. Then It is perhaps better to refer to the first equality Definition. An m-dimensional submanifold M ⊂ R n (or of a Riemannian manifold) is called minimal (or stationary) provided its mean curvature is everywhere 0, i.e., provided it is a critical point for the area functional. Theorem 2. Let M be a compact m-dimensional minimal submanifold of R n . Then Here ω m is the m-dimensional area (i.e., Lebesgue measure) of the unit ball in R m . Thus Θ(M, p, r) (which is called the density ratio of M in B(p, r)) is the area of M ∩ B(p, r) divided by the area of the cross-sectional m-disk in B(p, r). Equivalently, it is the area of M ∩ B(p, r) after dilating by 1/r. Let A(r) be the m-dimensional area of M r and L(r) be the (m − 1)-dimensional measure of ∂M r . (When m = 2, A(r) is an area and L(r) is a length.) Then A (r) ≥ L(r). This follows from the coarea formula applied to the function x ∈ M → |x|. But intuitively (in the case m = 2 for simplicity) A(r + dr) \ A(r) is a thin ribbon of surface: the length of the ribbon is L(r) and the width is ≥ dr. (The width is equal to dr at a point p ∈ ∂M r if and only if M is orthogonal to ∂B(0, r) at p.) Hence A(r + dr) − A(r) ≥ L(r) dr. Combining these last two inequalities gives: and therefore (r −m A) ≥ 0. Remark. The monotonicity theorem follows from the more general monotonicity identity [Sim83,17. For a smooth, immersed surface, the density of M at a point p ∈ M \ ∂M is equal to the number of sheets of M that pass through p. In particular, Θ(M, p) ≥ 1. Density at infinity Let M be a properly immersed minimal surface without boundary in R n . Then Θ(M, p, r) is increasing for 0 < r < ∞. Thus lim r→∞ Θ(M, p, r) exists. (It may be infinite.) Note that from which it easily follows that lim r→∞ Θ(M, p, r) is independent of p and therefore can be written without ambiguity as Θ(M ). We call Θ(M ) the density of M at infinity. For example, the density at infinity of a plane is 1, and the density at infinity of a union of k planes is k. Near infinity, a catenoid (figure 1) looks like a multiplicity 2 plane. (To be precise, if we dilate the catenoid by 1/n about its center and let n → ∞, then the resulting surfaces converge smoothly (away from the center) to a plane with multiplicity 2.) It follows that the catenoid has density 2 at infinity. Similarly, Scherk's surface (figure 1) resembles two orthogonal planes near infinity, so its density at infinity is also 2. 
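Several displayed formulas in this part of the notes (the first variation formula, the density ratio, and the monotonicity statement) are missing from the text as reproduced here. For the reader's convenience, their standard forms, consistent with the surrounding notation but reconstructed rather than copied from the original displays, are
\[
\frac{d}{dt}\Big|_{t=0}\operatorname{area}\big(\phi_t(M)\big)
  \;=\; \int_M \operatorname{div}_M X \, dA
  \;=\; -\int_M H\cdot X \, dA \;+\; \int_{\partial M} X\cdot\nu \, ds,
\]
where $H$ is the mean curvature vector of $M$ and $\nu$ is the outward unit conormal along $\partial M$; in particular, $M$ is minimal if and only if $H\equiv 0$. The density ratio is
\[
\Theta(M,p,r) \;=\; \frac{\operatorname{area}\big(M\cap B(p,r)\big)}{\omega_m\, r^m},
\]
and the monotonicity theorem asserts that, for $M$ minimal and $0<r<\operatorname{dist}(p,\partial M)$, the function $r\mapsto\Theta(M,p,r)$ is non-decreasing, i.e. $\frac{d}{dr}\big(r^{-m}A(r)\big)\ge 0$.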
The following theorem characterizes the plane by its density at infinity: Proof. Let p ∈ M . Then by monotonicity, This proves the inequality. If 1 = Θ(M ), then we would have equality in (3), so M would intersect ∂B(p, r) orthogonally for every r. That implies that M is invariant under dilations about p, i.e., that M is a cone with vertex p. Since we are assuming that M is smooth, M must in fact be a union of planes (with multiplicity) passing through p. Since Θ(M ) = 1, M is a single plane with multiplicity 1. Extended monotonicity According to the monotonicity theorem (theorem 3), if M is minimal, then the density ratio is an increasing function of r for 0 < r < R = dist(p, ∂M ). The theorem is false without the restriction r < dist(p, ∂M ). For example, if M ⊂ B(p,R), then the density ratio is strictly decreasing for r ≥R, because the numerator of the fraction (4) is constant for r ≥R. However, there is an extension of the monotonicity theorem that gives information for all r: Theorem 5 (Extended Monotonicity Theorem [EWW02]). Suppose that M ⊂ R n is a compact, minimal m-manifold with boundary Γ, and that p ∈ R n \ Γ. Let E = E(p, Γ) denote the exterior cone with vertex p over Γ: is an increasing function of r for all r > 0. Indeed, with equality if and only if: Remark. In the definition of Θ(M , p, r), we count area with multiplicity. For example, if exactly two portions of E overlap in a region, we count the area of that region twice. Of course if M is embedded and if p is in general position, then such overlaps do not occur. In proving the extended monotonicity theorem, we may assume that p = 0. As before, we will apply the first variation formula (or, more precisely, the generalized divergence theorem) to the vectorfield X(x) = x. Lemma 6. Let E be the exterior cone over Γ with vertex 0. Among all unit vectors v that are normal to Γ at x ∈ Γ, the maximum value of x · v is attained by v = −ν ∂E (x). Consequently, x · v ≤ −x · ν ∂E (x) and therefore x · (v + ν ∂E (x)) ≤ 0 for every such vector v. The proof of the lemma is left as an exercise. Proof of extended monotonicity. Let M r , E r ,M r , and Γ r be the portions of M , E, M , and Γ inside the ball B r = B(0, r). By the generalized divergence theorem, because H · X ≡ 0 on E, since H is perpendicular to E and X is tangent to E. Also, div M X ≡ div E X ≡ m, so the left sides of these equations are m area(M r ) and m area(E r ). Adding equations (5) and (6) gives Note that ∂M r consists of two parts: M ∩ ∂B r and Γ r . Likewise ∂E r consists of E ∩ ∂B r and Γ r . By combining the two integrals over M ∩ ∂B r and E ∩ ∂B r , and by also combining the two integrals over Γ r , we can rewrite (7) as By lemma 6, the second integrand is everywhere nonpositive. Thus The rest of the proof is exactly the same as the proof of the monotonicity theorem. Proof. The proof is almost identical to the proof of theorem 4. As a consequence of the extended monotonicity theorem, we can show that a minimal surface must stay reasonably close to its boundary: where |∂M | is the (m−1)-dimensional measure of ∂M . Furthermore, equality holds if and only if M is a flat m-disk centered at p with multiplicity 1. Proof. We may assume that p = 0. Let Γ = ∂M , let C be the (entire) cone over Γ: and let E = {tq : t ≥ 1, q ∈ Γ} be the exterior cone. Let Γ * be the result of radially projecting Γ to ∂B(0, R), where R = dist(p, ∂M ). Then by extended monotonicity, which is the asserted inequality. 
Equality of the first two terms in (8) implies that M ∪ E is a plane with multiplicity 1, and equality of the last two terms implies that Γ * = Γ, which implies that the function dist(·, 0) is constant on Γ. The following corollary implies (for example) that two short curves bounding a connected minimal surface cannot be too far apart: Corollary 9. If M ⊂ R n is a compact, connected m-dimensional minimal submanifold such that Γ is the union of two (not necessarily connected) components Γ 1 and Γ 2 , then . Proof. Since the function dist(·, Γ 1 ) − dist(·, Γ 2 ) is negative on Γ 1 and positive on Γ 2 , there must be a point p ∈ M at which it vanishes. (Here dist denotes the straight line distance in R n .) Let By the triangle inequality, dist(Γ 1 , Γ 2 ) ≤ 2R, which is at most by theorem 8. In [EWW02], the extended monotonicity theorem was used to solve a long-open problem in minimal surface theory: if Γ ⊂ R 3 is a smooth, simple closed curve with total curvature at most 4π, must the unique 4 minimal immersed disk bounded by Γ be embedded? (The total curvature of a smooth curve is the integral of the norm of the curvature vector with respect to arclength.) Theorem 10. [EWW02] Let M be an immersed minimal surface (possibly with branch points 5 ) in R n bounded by a smooth embedded curve Γ whose total curvature is at most 4π. Then M is smoothly embedded (without branch points). Proof. For simplicity, we give the proof for unbranched surfaces and for curves of total curvature strictly less than 4π, and we prove only that M \ Γ has no points of self-intersection. Let p be a point in M \ Γ. Let C and E be the cone and the exterior cone over Γ with vertex p, as in the extended monotonicity theorem 5. It is left as an exercise to the reader to show that Θ(C) ≤ 1 2π (the total curvature of Γ) . Since Θ(M, p) is the number of sheets of M passing through p, we see that only one sheet passes through p. Since p is an arbitrary point in M \ Γ, we are done. We have not yet discussed branch points, but exactly the same argument rules out interior branch points (i.e., branch points not in Γ): one only needs to know the fact that the density of a minimal surface at a branch point is at least 2. That fact is easily proved using the Weierstrass Representation (for example), which will be discussed in Lecture 2. A similar argument rules out branch points and self-intersections at the boundary. Corollary 11 (Farey-Milnor Theorem). If Γ is a smooth, simple closed curve in R 3 with total curvature at most 4π, then Γ is unknotted. 4 Nitsche [Nit73] had proved that a curve of total curvature less than 4π bounds a unique minimal disk, and that the disk is smoothly immersed. Whether such a curve can bound a minimal surface of nonzero genus is an interesting open question. Such curves can bound minimal Möbius strips [EWW02]. Nitsche's uniqueness theorem was extended to curves of total curvature at most 4π by X. Li and Jost [LJ94]. 5 Branch points will be discussed in lecture 2. Proof. Let M be a least-area disk bounded by Γ. (The disk exists by the Douglas-Rado Theorem, which will be discussed in Lecture 4.) By theorem 10, M is a smooth embedded disk. But for any smoothly embedded curve, bounding a smooth embedded disk implies (indeed, is equivalent to) being unknotted. (To see the implication, let F : D → M be a smooth, conformal parametrization of M by the unit disk in R 2 . Then provides an isotopy from Γ to small, nearly circular curves, which are clearly unknotted.) 
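The display that produces the isotopy in the proof above is missing from the reproduced text; a natural reconstruction (my assumption, though it is the standard argument) is the family of curves
\[
\gamma_t(\theta) \;=\; F\big(t\,e^{i\theta}\big), \qquad 0 < t \le 1,
\]
which interpolates between $\gamma_1 = \Gamma$ and, for small $t$, curves that are nearly circular because $F$ is a conformal immersion near the origin.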
Theorem 10 is sharp: there exist smooth embedded curves, including unknotted ones, that have total curvature slightly larger than 4π and that bound many immersed minimal surfaces (see theorem 46). The Farey-Milnor Theorem is also sharp: consider, for example, a trefoil knot that is a slight perturbation of a twicetraversed circle. One can also define total curvature for arbitrary continuous curves. Theorem 10 and corollary 11 remain true for continuous simple closed curves with total curvature at most 4π. (The surface M will be embedded, though of course it will in general be smoothly embedded only away from its boundary.) See [EWW02]. The isoperimetric inequality The following fundamental theorem was proved by Allard [All72] (with a constant that was allowed to depend on dimension n of the ambient space) and by Michael and Simon [MS73]: Exercise: Prove the isoperimetric inequality for a two-dimensional surface with connected boundary. (Use theorem 2.) The value of the best constant in the isoperimetric inequality, even in the case H ≡ 0 of minimal surfaces, is an interesting open problem. For minimal surfaces, it is conjectured that the best constant is attained by a ball in an m-dimensional plane. Almgren [Alm86] proved the conjecture assuming that M is area-minimizing. For two-dimensional minimal surfaces, the conjecture has been proved in some cases, such as when ∂M has at most two connected components [LSY84]. See [CS09] for some more recent developments. 6 In some of the references, the inequality is stated as a Sobolev inequality for a function u that is compactly supported on M \ ∂M . The isoperimetric inequality follows by taking a suitable sequence of such u's that converge to 1 on M \ ∂M . Two-Dimensional Minimal Surfaces The theory of two-dimensional surfaces has many features that do not generalize to higher dimensional manifolds. For example, every two-dimensional surface with a smooth Riemannian metric admits local isothermal coordinates, i.e., can be parametrized locally by conformal maps from domains in the plane. 7 Relation to harmonic maps Theorem 13. Let F : Ω ⊂ R 2 → R n be a conformal immersion. Then F (Ω) is minimal if and only if F is harmonic. Proof. One way to show this is to calculate that the mean curvature H is equal to the Laplacian ∆ g F of F with respect to the pullback by F of the metric on R n . (This is also true for immersions of m-dimensional manifolds M into general Riemannian manifolds.) Thus M is minimal if and only if F is harmonic with respect to the metric g. The theorem follows immediately because, for two-dimensional surfaces, harmonic functions remain harmonic under conformal change of metric on the domain. Corollary 14. Every two-dimensional C 2 minimal surface in R n is real-analytic. This is also true for m-dimensional minimal submanifolds of R n , but by a different proof. Theorem 15 (Convex hull property). Let M be a two-dimensional minimal surface in R n . (1) If φ : R n → R is a C 2 function, then φ|M cannot have a local maximum at any point of M \ ∂M where D 2 φ is positive definite. (2) If M is compact, then it lies in the convex hull of its boundary. Proof. Let p ∈ M \ ∂M be a point at which D 2 φ is positive definite. Let F : D ⊂ R 2 → M with F (0) = p be a conformal (and therefore harmonic) parametrization of a neighborhood of M . Then (using y 1 , . . . 
, y n and x 1 , x 2 as coordinates for R n and R 2 and summing over repeated indices) one readily calculates by the chain rule that since F is harmonic, where λ is the lowest eigenvalue of D 2 φ. This is strictly positive at a point where D 2 φ is positive definite. Consequently (∂/∂x k ) 2 (φ • F ) must be positive for k = 1 and/or k = 2, which proves (1). To prove (2), it suffices to show that that if ∂M lies in a closed ball, then so does M , since the convex hull of ∂M is the intersection of all such balls. If this failed for some ball B(p, r), then the function x ∈ M → |x − p| 2 would attain its maximum at an interior point of M , contradicting (1). Theorem 15 is also true for m-dimensional minimal submanifolds of R n by essentially the same proof. (In particular, (9) is true at a point if ∆ denotes the Laplacian with respect to the metric on M induced from R n and if x 1 , . . . , x m are normal coordinates at that point.) Theorem 15 is a special case of much more general maximum principles for (possibly singular) minimal varieties. See for example [Whi10]. Exercises: (1) Suppose that M is a compact, simply connected, two-dimensional minimal surface in R n and that B ⊂ R n is a ball. Prove that M ∩ B is also simply connected. (In exercise (1), replace "simply connected" by "having trivial (m − 1) th homology". In exercise (2), replace "smallest two eigenvalues" by "smallest m eigenvalues".) Also, theorem 15 and the exercises remain true for branched minimal surfaces (which will be discussed in lecture 2). Conformality of the Gauss map Recall that if M is an oriented surface in R 3 , then its Gauss map n : M → S 2 is the map that maps each point in M to the unit normal to M at that point. Theorem 16. Let M be a minimal surface in R 3 . Then M is minimal if and only if the Gauss map n : M → S 2 is almost conformal and orientation-reversing. Recall that a map F : M → N between Riemannian manifolds is almost conformal provided In theorem 16, the orientation on S 2 is the standard orientation, i.e., the orientation given by the outward unit normal. Proof. Let e 1 and e 2 be the principal directions of M at p ∈ M . Then {e 1 , e 2 } is an orthonormal basis for Tan p M and also for Tan n(p) S 2 . With respect to this basis, the matrix for Dn(p) is where κ 1 and κ 2 are the principal curvatures. Total curvature For any surface M ⊂ R 3 , the Gauss curvature K = κ 1 κ 2 is the signed Jacobian of the Gauss map. The total absolute curvature (which we will call the total curvature, for short) of M is which is equal to the area (counting multiplicity) of the image of M under the Gauss map: Theorem 17 (Osserman [Oss63]). Let M ⊂ R 3 be a complete, connected, orientable minimal surface of finite total curvature: Then (1) M is conformally equivalent to a compact Riemann surface minus finitely many points: (2) The Gauss map extends analytically to the punctures. (3) There is a nonnegative integer m such that for almost every v ∈ S 2 , exactly m points in M have unit normal n = v. (4) The total curvature of M is a equal to 4πm. Properness means that if we go off to infinity in M , we also go off to infinity in R n . More precisely, we say that a sequence p i ∈ M diverges in M if no subsequence converges with respect to the induced arclength metric on M to a point in M . We say that a sequence diverges in R n if no subsequence converges with respect to the metric on R n . We say that M is a proper in R n if every sequence p i ∈ M that diverges in M also diverges in R n . 
For example, the curve where Y is the y-axis, also is not a proper submanifold of R 2 for the same reason.) The catenoid M with a vertical axis of rotational symmetry (figure 2) provides a good example of Osserman's theorem. The Gauss map is a conformal diffeomorphism from M to S 2 \ {N P, SP }, where N P = (0, 0, 1) and SP = (0, 0, −1) are the north and south poles on S 2 . In particular, M is conformally diffeomorphic (in this case by the map n) to a twice-punctured sphere. The Gauss map extends continuously to the punctures. The total curvature is the area of the Gaussian image, namely 4π. Proof of Osserman's Theorem. The first assertion is a special case of an intrinsic theorem due to Huber [Hub57]: if M is a complete, connected surface such that M K − dS < ∞ then M is conformally a punctured Riemann surface. Here To prove the second assertion, let us (for the moment) orient S 2 by the inwardpointing unit normal, so that the Gauss map becomes orientation preserving. Let U ⊂ Σ be a neighborhood of one of the punctures, p. By Picard's Theorem, either n : U \ {p} → S 2 ∼ = C ∪ {∞} is meromorphic at p (and therefore extends to p), or n : U \ {p} → S 2 takes all but two values in S 2 infinitely many times. The latter implies that U |K| dS = ∞, a contradiction. Thus n extends continuously (indeed analytically) to U . We have shown that the Gauss map n : Σ → S 2 is holomorphic (with respect to orientation on S 2 induced by the inward unit normal.) Let m be its mapping degree. Then assertion (3) holds by standard complex analysis or differential topology. The fourth assertion (that the total curvature is 4πm) follows immediately, since The last assertion (properness) can be proved using the Weierstrass Representation (discussed below). Alternatively, one can show that if S ⊂ R 3 is a complete surface diffeomorphic to a closed disk minus its center and if the slope of Tan(S, p) is uniformly bounded, then S is proper in R 3 ; see [Whi87a]. One applies this result to a small neighborhood in Σ of a puncture, on which one can assume (by rotating) that the unit normal is very nearly vertical. Remarks. (1). Every multiple of 4π does occur as the total curvature of such a surface. (2). For a proof of generalization of Osserman's Theorem that does not use Huber's Theorem, see [Whi87a]. Osserman's Theorem has an extension, due to Chern and Osserman [CO67], to two-dimensional surfaces in R n . All the conclusions remain true, except that the total curvature 8 is a multiple of 2π rather than of 4π. And every multiple of 2π does occur as the total curvature of such a surface. For example, is a complete, embedded minimal surface with total curvature 2π(n − 1). The surface is minimal and indeed area minimizing by the Federer-Wirtinger theorem (theorem 39). To see that it has total curvature 2π(n − 1), let M r be the portion of the surface with {|w| ≤ r}. Note that if r is large, then ∂M r is very nearly a circle of radius r traversed n times. Thus by the Gauss Bonnet theorem, Letting r → ∞ gives 2πn = 2π + T C(M ). Theorem 4 characterized the plane by its density at infinity. Using Osserman's Theorem, we can give another characterization of the plane: Corollary 18. If M ⊂ R 3 is a complete, orientable minimal surface of total curvature < 4π, then M is a plane. Proof. If the total curvature is less than 4π, it must be 0, so K ≡ 0. But for a minimal surface in R 3 , K(p) = 0 implies that the principal curvatures at p are 0. 
Similarly, using the Chern-Osserman Theorem, one sees that a complete, orientable minimal surface in R n with total curvature < 2π must be a plane. As will be explained in lecture 3, corollary 18 implies a useful curvature estimate (theorem 23) for minimal surfaces. The Weierstrass Representation We have seen that immersed minimal surfaces in R 3 are precisely those that can be parametrized locally by conformal, harmonic maps F : Ω ⊂ R 2 → R 3 . Following work of Riemann, Weierstrass and Enneper 9 independently found a nice way to generate all such F . Write z = x + iy in R 2 , and 8 Here we can take the total curvature to be the integral of the absolute value of the scalar curvature. Since the scalar curvature is everywhere nonpositive, this is equal to minus the integral of the scalar curvature. Since M is minimal, the scalar curvature is equal to − 1 2 |A| 2 , so we could also define the total curvature to be the integral of 1 2 |A| 2 . 9 In the interests of brevity, I use the conventional name "Weierstrass Representation" rather than the more accurate "Enneper-Weierstrass Representation". Note that if u is a map from Ω ⊂ R 2 ∼ = C to C (or more generally to C n ), then u z = 0 if and only if u is holomorphic. (The real and imaginary parts of the equation u z = 0 are precisely the Cauchy-Riemann equations for u.) Note also that ∂ ∂z Thus F is harmonic ⇐⇒ F zz = 0 ⇐⇒ F z is holomorphic. Concerning conformality, note that if we extend the Euclidean inner product from R n to C n by making it complex linear in both arguments, then so the real and imaginary parts of We have proved: Our goal is to find holomorphic φ such that φ · φ ≡ 0. We then recover F by Note that φ(z) = 2F z = F x − iF y determines the oriented tangent plane to M at F (z), and therefore the image n(z) of F (z) under the Gauss map, and thus the image g(z) ∈ R 2 ∼ = C of n(z) under stereographic projection. Indeed, one can check that One can solve for φ 1 and φ 2 in terms of g and φ 3 : Theorem 20 (Weierstrass Representation). Let Ω ⊂ C be simply connected, let φ 3 and g be a holomorphic function and a meromorphic function on Ω, and let η be the one form φ 3 (z) dz. Suppose also that wherever g has a pole or zero of order m, the function φ 3 has a zero of order ≥ 2m. Then is a harmonic, almost conformal mapping of Ω into R 3 . Furthermore, every harmonic, almost conformal map F : Ω → R 3 arises in this way, unless the image of F is a horizontal planar region. The function g may take the value ∞ because the surface may have points where the unit normal is (0, 0, 1). Thus g may have poles. At the poles and zeroes of g, F would have poles unless φ 3 has zeroes to counteract the poles of g −1 ± g. Hence the condition about orders of poles and zeroes. The "unless" proviso is needed because if F (Ω) is a horizontal planar region, then g ≡ 0 or g ≡ ∞ and thus (10) does not make sense. More generally, η and g can be any holomorphic differential and any meromorphic function on any Riemann surface Ω. But if Ω is not simply connected, the expression (10) may be well-defined only on the universal cover of Ω. To be precise, the Weierstrass representation (10) gives a mapping of Ω (rather than of its universal cover) if and only if the closed one forms 1 2 (g −1 − g 2 )η, i 2 (g −1 + g)η, and η have no real periods. Since g and φ 3 (or η) determine the surface, all geometric quantities can be expressed in terms of them. 
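The display labelled (10) in Theorem 20 is missing from the reproduced text. In the standard form consistent with the relation $\phi = 2F_z$ used above (a reconstruction, not the original typesetting), the Weierstrass representation reads
\[
F(z) \;=\; \operatorname{Re}\int_{z_0}^{z}
  \Big(\tfrac12\big(g^{-1}-g\big),\;\tfrac{i}{2}\big(g^{-1}+g\big),\;1\Big)\,\phi_3(w)\,dw,
\]
i.e. $F = \operatorname{Re}\int \phi\,dz$ with $\phi=\big(\tfrac12(g^{-1}-g)\phi_3,\ \tfrac{i}{2}(g^{-1}+g)\phi_3,\ \phi_3\big)$, for which one checks directly that $\phi\cdot\phi\equiv 0$. For Enneper's surface (the example discussed below, with $g(z)=z$ and $\phi_3(z)=z$ on all of $\mathbb{C}$) this gives
\[
F(z)=\operatorname{Re}\Big(\tfrac12\big(z-\tfrac{z^3}{3}\big),\;\tfrac{i}{2}\big(z+\tfrac{z^3}{3}\big),\;\tfrac{z^2}{2}\Big).
\]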
For example, the conformal factor λ is given by , and thus (calculating |φ| from (10)) we have That is, One can also calculate the Gauss curvature: dg gη 2 (Note: dg and gη are both meromorphic 1-forms, so their ratio is a meromorphic function.) The points where F fails to be conformal are called branch points. Using the expression (11) for the conformal factor, we can identify the branch points: Proposition 21. Suppose in theorem 20 that g has a pole or zero of order m ≥ 0 at p and that φ 3 has a zero of order k ≥ 2m at p. Then F is an immersion at p if and only k = 2m. Thus F has a branch point at p if and only if k > 2m. The difference k − 2m is called the order of the branching at p. It is not hard to show that for sufficiently small > 0, the density of F (B(p, )) at F (p) is 1 plus the order of branching at p. The geometric meaning of the Weierstrass data As explained above, the function g in the strass representation has a simple geometric meaning: it is the Gauss map (or, more precisely, the Gauss map followed by stereographic projection to R 2 ∪ {∞} ∼ = C ∪ {∞}). As for φ 3 , note by theorem 20 and the discussion preceding it that In other words, φ 3 encodes the derivative of the height function F 3 with respect to the parametrization. More generally, as mentioned above, if the domain of F is a Riemann surface, we should think of the Weierstrass data as being g together with a holomorphic one form η (corresponding to φ 3 (z) dz.) In this case, η = 2 ∂F 3 , where ∂ is the Dolbeault operator which (in any local holomorphic chart) is given by ∂(·) = ∂ ∂z (·) dz. It can be difficult to determine for a particular g and φ 3 (or g and η) whether the real periods vanish and (if they do vanish) whether the resulting surface is embedded. For those reasons, great ingenuity is often required to prove the existence of specific kinds of embedded, genus g surfaces using the Weierstrass representation. Part of the discussion above carries over without change to two-dimensional minimal surfaces in R n . In particular, F : Ω ⊂ R n is harmonic and almost conformal if and only F can be written as Of course we can use the equation φ · φ ≡ 0 to express φ in terms of (n − 1) holomorphic functions; those (n − 1) functions can then be chosen more-or-less arbitrarily. See [Oss86, §12] for more details. As an example of how the Weierstrass representation gives nontrivial information about a surface, consider Enneper's surface M (figure 4), i.e., the surface with Weierstrass data η(z) = φ 3 (z) dz = z dz and g(z) = z on the entire plane. As suggested by the picture, there is a finite group of congruences of M , i.e., of isometries of R 3 that map M to itself. Though it is not evident from the picture, in fact M has an infinite group of intrinsic isometries: M is intrinsically rotationally symmetric about z = 0, since the metric depends only on |φ 3 | and |g|, which in this case are both equal to |z|. Rigidity and Flexibility A surface M ⊂ R 3 is flexible if it can be deformed (non-trivially) through a one-parameter family of smooth isometric immersions. Otherwise it is rigid. (Here "trivially" means "by rigid motions of R 3 ".) For example, a flat rectangle (or, more generally, any planar domain) is flexible, as can be demonstrated by rolling up a piece of paper. According to a classical theorem, every smooth, closed, convex surface in R 3 is rigid. Whether there exists a flexible smooth, closed, non-convex surface M is a long-open problem. Is a round hemisphere rigid? 
Perhaps there is no intuitive reason why it should or should not be rigid. However, it (or, more generally, any proper closed subset of any closed, uniformly convex surface) is flexible, as explained (if the closed surface is a sphere) in [HCV52, §32.10]. On the other hand, in the surface z = (x 2 + y 2 ) 3 (for example), arbitrarily small neighborhoods of the origin are rigid [Usm96]. So whether particular surfaces are rigid or flexible can be rather subtle. Now we impose an extra condition that makes flexing a surface much more difficult: is there a non-trivial one-parameter family F t : M → R 3 of smooth isometric immersions of a surface such that the unit normal at each point remains constant? In other words, if n(p, t) is the unit normal to F t (M ) at F t (p), we require that n(p, t) be independent of t. If M 0 = F 0 (M ) is a planar region, for example, the answer is "no", since in that case the only allowed deformations are translations and rotations about axes perpendicular to M 0 . Intuitively, non-trivial isometric deformations keeping the normals constant should be impossible. However, such deformation do exist for every simply connected, nonplanar minimal surface! In the Weierstrass representation, replace η by e iθ η and let θ vary. The metric depends only on |g| and |η|, so it does not change, and the Gauss map g also does not change. (The proof that the resulting deformations are nontrivial for nonplanar M is left as an exercise.) For example, if we start with the helicoid, this gives an isometric deformation of the helicoid to the catenoid (covered infinitely many times). One may see animations of the deformation online, for example at http://www.foundalis.com/mat/helicoid.htm, http://www.indiana.edu/~minimal/archive/Classical/Classical/ AssociateCatenenoid/web/qt.mov, and http://virtualmathmuseum.org/Surface/helicoid-catenoid/helicoid-catenoid. mov. Incidentally, one can show that minimal surfaces are the only surfaces in R 3 that can be isometrically deformed keeping the normals fixed, and that the oneparameter family obtained by replacing η by e iθ η is the only such deformation of a minimal surface. See [HK97] or [Web05] for more information about the Weierstrass Representation. (Note: in those works, the authors use dh to denote the holomorphic one-form η. If one thinks of h as the height function on the surface, then their dh is not the exterior derivative of h, but rather 2 ∂h. The exterior derivative of the height function is the real part of their dh.) Curvature Estimates and Compactness Theorems In many situations, one wants to take limits of minimal surfaces. For example, David Hoffman, Martin Traizet, and I needed to do this in our recent work on genusg helicoids. For several centuries, the plane and the helicoid were the only known complete, properly embedded minimal surfaces in R 3 with finite genus and with exactly one end. 10 Jacob Bernstein and Christine Breiner [BB11] proved (using work of Colding and Minicozzi) that any such surface other than a plane must be asymptotic to a helicoid at infinity. (Later Meeks and Peréz [MP] gave a different proof, also based on the work of Colding and Minicozzi.) Hence such a surface of genus g is called an embedded genus-g helicoid. But do embedded genus-g helicoids exist for g = 0? In 1992, Hoffman, Karcher, and Wei [HKW93] used the Weierstrass Representation to prove the existence but not the embeddedness of a genus-one helicoid. 
In 2004, Hoffman, Weber, and Wolf [HWW09] proved existence of an embedded genus-1 helicoid as shown in figure 5. (See [HW08] for a different, somewhat shorter proof.) Although genus-2 examples were found numerically as early as 1993 (see figure 6), existence was not known rigorously until 2013, when Hoffman, Traizet, and I [HTWa, HTWb] proved existence of embedded genus-g helicoids for every positive integer g. In our proof, first we construct analogous surfaces in S 2 × R (which, oddly enough, turns out to be easier), and then we get examples in R 3 by letting the radius of the S 2 tend to infinity. Of course we need to know that the surfaces in S 2 × R converge smoothly (after passing to subsequences) to limit surfaces in R 3 . In general, it is very useful to have compactness theorems: conditions on a sequence of minimal surfaces that guarantee existence of a smoothly converging subsequence. Theorem 22 (basic compactness theorem). Let M i be a sequence of minimal submanifolds of R n (or of a Riemannian manifold) such that second fundamental forms are uniformly bounded. Then locally there exist smoothly converging subsequences. In particular, if p i ∈ M i is a sequence bounded in R N and if dist(p i , ∂M i ) ≥ R > 0, then (after passing to a subsequence), 10 For a complete minimal surface M properly immersed in R n , the number of ends is the limit as r → ∞ of the number of connected components of M \ B(0, r). It follows from the convex hull property (theorem 15) that the number of those components cannot decrease as r increases, and thus that the limit exists. Proof of the Basic Compactness Theorem in R n . By scaling, we can assume that the principle curvatures are bounded by 1, and that dist(p i , ∂M i ) ≥ π/2. We can assume the p i converge to a limit p and that Tan(M i , p i ) converge to a limit plane. Indeed, in R n , we can assume by translating and rotating that p i ≡ 0 and that Tan(M i , p i ) is the horizontal plane through 0. For each i, let S i be the connected component of containing 0. The hypotheses imply that S i is the graph of a function where the C 2 norm of the F i is uniformly bounded. Hence by Arzelá-Ascoli, we may assume (by passing to a subsequence) that the F i converge in C 1,α to a limit function F . 11 A lamination is like a foliation except that gaps are allowed. For example, if S ⊂ R is an arbitrary closed set, then the set of planes R 2 × {z} where z ∈ S forms a lamination of R 3 by planes. So far we have not used minimality. Since the surfaces M i are minimal, the F i are solutions to an elliptic partial differential equation (or system of equations if n > m + 1), the minimal surface equation (or system). According to the theory of such equations, convergence in C 1,α on B m (0, 1/2) implies convergence in C k,α on B m (0, 1/2 − ) (for any k and ). We have proved convergence in sufficiently small geodesic balls. We leave it to the reader to piece such balls together to get the convergence of the M i ∩ B(p i , R). This theorem indicates the importance of curvature estimates: curvature estimates for a class of minimal surfaces imply smooth subsequential convergence for sequences of such surfaces. The 4π curvature estimate Theorem 23 (4π curvature estimate [Whi87b]). For every λ < 4π, there is a C < ∞ with the following property. If M ⊂ R 3 is an orientable minimal surface with total curvature ≤ λ, then Here |A(p)| is the norm of the second fundamental form of M at p, i.e, the square root of the sum of the squares of the principal curvatures at p. 
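The displayed conclusion of Theorem 23 is missing from the reproduced text. Judging from the statement of Theorem 29(2) below and from the normalization of $|A_i(p_i)|$ and $\operatorname{dist}(p_i,\partial M_i)$ used in the proof, it should be the scale-invariant estimate
\[
\sup_{p\in M}\; |A(p)|\,\operatorname{dist}(p,\partial M) \;\le\; C,
\]
with the conclusion of Theorem 24 of the same form, the geometry of $N$ entering through the quantity $\rho_N$ appearing in its proof.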
The theorem is false for λ = 4π, since the catenoid has total curvature 4π and is not flat. (Earlier, Choi and Schoen [CS85] proved that there exists a λ > 0 and a C < ∞ for which the conclusion holds.) Proof. It suffices to prove the theorem when M is a smooth, compact manifold with boundary, since a general surface can be exhausted by such M . Suppose the theorem is false. Then there is a sequence p i ∈ M i of examples with total curvature T C(M i ) ≤ λ and with We may assume that each p i has been chosen in M i to maximize the left side of (12). By translating and scaling, we may also assume that p i = 0 and that |A i (p i )| = 1, and therefore that dist(0, ∂M i ) → ∞. We may also replace M i by the geodesic ball of radius R i := dist(0, ∂M i ) about 0. Thus we have We have shown for each r that sup dist(x,0)≤r Hence the M i converge smoothly by theorem 22 (the basic compactness theorem) to a complete minimal surface M with |A M (0)| = 1. Remark. The theorem is also true (with the same proof) in R n , but with 4π replaced by 2π. This is because Osserman's theorem is also true in R n , but with 4π replaced by 2π. Theorem 23 can be generalized to manifolds in various ways. For example: Theorem 24. Suppose that N is a 3-dimensional submanifold of Euclidean space R n with the induced metric. Let For every λ < 4π, there is a C = C λ,n < ∞ with the following property. If M is an orientable, immersed minimal surface in N and if the total absolute curvature of M is at most λ, then The proof is almost exactly the same as the proof of theorem 23. In particular, we get a sequence p i ∈ M i ⊂ N i with |A Mi (p i )| = 1 and with The fact that ρ Ni (p i ) → ∞ means that the N i are converging (in a suitable sense) to R 3 , so in the limit we get a complete minimal immersed surface M in R 3 , exactly as in the proof of theorem 23. A general principle about curvature estimates Recall that we have proved: (1) A complete, nonflat minimal surface in R 3 has total curvature ≥ 4π. The equivalence of statements (1) and (2) is an example of general principle: any "Bernstein-type" theorem (i.e., a theorem asserting that certain complete minimal surfaces must be flat) should be equivalent to a local curvature estimate. Indeed, the Bernstein-type theorem in Euclidean space should imply a local curvature estimate in arbitrary ambient manifolds (as in theorem 24). An easy version of Allard's Regularity Theorem As an example of the general principle discussed above, consider the following: (1) Global theorem: If M ⊂ R n is a proper minimal submanifold without boundary and if Θ(M ) ≤ 1, then M is a plane. Clearly (2) implies (1), and proof that (1) implies (2) is very similar to the proof of the 4π curvature estimate (theorem 23). Furthermore, as suggested in the discussion of the general principle above, statement (1) implies a version of statement (2) in Riemannian manifolds. Bounded total curvatures For total curvatures that are bounded, but not bounded by some number λ < 4π, we have the following theorem, which says that for a sequence of minimal surfaces with uniformly bounded total curvatures, we get smooth subsequential convergence except at a finite set of points where curvature concentrates: Theorem 25 (Concentration Theorem [Whi87b]). Suppose that M i ∈ Ω ⊂ R n are two-dimensional, orientable, minimal surfaces, that ∂M i ⊂ ∂Ω, and that T C(M i ) ≤ Λ < ∞. Then (after passing to a subsequence) there is a set S ⊂ Ω of at most Λ 2π points such that M i converges smoothly in Ω\S to a limit minimal surface M . 
The surface M ∪ S is a smooth, possibly branched minimal surface. Now suppose that Ω ⊂ R 3 . Then S contains at most Λ 4π points. Also, if the M i are embedded, then M ∪ S is a smooth embedded surface (with multiplicity, but without branch points.) The theorem remains true (with essentially the same proof) in Riemannian manifolds. To illustrate the concentration theorem, let M k be obtained by dilating the catenoid by 1/k. Then M k converges to a plane with multiplicity 2, and the convergence is smooth except at the origin. Of course, the concentration theorem is only useful if the hypothesis (uniformly bounded total curvatures of the M i ) holds in situations of interest. Fortunately, there are many situations in which the hypothesis does hold. For example, suppose 12 I am describing Allard's theorem specialized to minimal varieties. His theorem is stated more generally for varieties with mean curvature in L p where p can be any number larger than the dimension of the variety. In this generality, the conclusion is not that M ∩ B(p, ) is smooth, but rather that it is C 1,α for suitable α. If the variety is minimal, smoothness then follows by standard PDE arguments. The easy version of Allard's Regularity Theorem first appeared in [Whi05]. the M i ⊂ R n all have the same finite topological type. Suppose also that the boundary curves ∂M i are reasonably well-behaved: where κ ∂Mi denotes the curvature vector of the curve ∂M i . (In other words, suppose that the boundary curves have uniformly bounded total curvatures.) Then the hypothesis sup i T C(M i ) < ∞ holds by the Gauss-Bonnet Theorem. Proof of part of the concentration theorem in R 3 . Define measures µ i on Ω by By passing to a subsequence, we can assume that the µ i converge weakly to a limit measure µ with µ(Ω) ≤ Λ. Let S be the set of points p such that µ{p} ≥ 4π. Then |S| ≤ Λ 4π , where |S| is the number of points in S. Suppose x ∈ Ω \ S. Then µ{x} < λ < 4π for some λ. Thus there is a closed ball for all sufficiently large i. Consequently, |A i (·)| is uniformly bounded on B(x, r/2) by the 4π curvature estimate (theorem 23). Since |A i (·)| is locally uniformly bounded in Ω \ S, we get subsequential smooth convergence on Ω \ S by the basic compactness theorem 22. Let p ∈ S. By translation, we may assume that p is the origin. Note that we can find B(0, ) for which µ(B(0, ) \ {0}) is arbitrarily small. It follows (by the 4π curvature estimate 23 and the basic compactness theorem 22) that if we dilate M about 0 by a sequence of numbers tending to infinity, a subsequence of the dilated surfaces converges smoothly on R 3 \ {0} to a limit minimal surface with total curvature 0, i.e., to a union of planes. By monotonicity, the number of those planes is finite. It follows that (for small r), the surface M ∩ (B(0, r) \ {0}) is topologically a finite union of punctured disks. In fact, it is not hard to show that the smooth subsequential convergence of the dilated surfaces to planes implies that the components of M ∩ (B(0, r) \ {0}) are not just topologically punctured disks, but actually conformally punctured disks. Let Theorem 26 and its proof generalize to Riemannian 3-manifolds, but one has to be careful because in some 3-manifolds, simple connectivity of a minimal surface M does not imply that the components of its intersection with a small ball (say a geodesic ball) are simply connected. Thus one needs to make some additional hypothesis. 
For example, one could assume that the ambient space is simply connected and has non-positive sectional curvatures or, more generally, that for each p ∈ Ω and r > 0, the set {x ∈ Ω : dist(x, p) ≤ r} has smooth boundary and that the mean curvature vector of that boundary points into the set. (See exercise 3 after theorem 15). To apply the concentration theorem, we need uniform local bounds on total curvature. Such bounds are implied by uniform local bounds on genus and area: Then Thus under the hypotheses of this theorem, we get the conclusion of the concentration theorem: smooth convergence (after passing to a subsequence) away from a discrete set S. Note: Ilmanen's Theorem 3 is about surfaces, not necessarily minimal, in Euclidean space; it gives local bounds on total curvature (integral of the norm of the second fundamental form squared) in terms of genus, area, and integral of the square of the mean curvature. To deduce Theorem 27 from that result, isometrically embed Ω into a Euclidean space. Stability Let M be a compact minimal submanifold of a Riemannian manifold. We say that M is stable provided for all deformations φ t with φ 0 (x) ≡ x and φ t (y) ≡ y for y ∈ ∂M . For noncompact M , we say that M is stable provided each compact portion of M is stable. If M ⊂ R n is an oriented minimal hypersurface and if X(x) = d dt t=0 φ t (x) is a normal vectorfield, we can write X = uν where u : M → R and ν is the unit normal vectorfield. Note that φ t (x) = x for x ∈ ∂M implies that u ≡ 0 on ∂M . Theorem 28 (The second variation formula). Under the hypotheses above, To prove the theorem, one observes that The formula remains true in an oriented ambient manifold N , except that one replaces |A| 2 by |A| 2 + Ricci N (ν, ν) where ν is the unit normal vectorfield to M . See [Sim83,§9] or [CM11, 1. §8], for example, for details. The following theorem is one of the most important and useful facts about stable surfaces. It was discovered independently by Do Carmo and Peng [dCP79] and by Fischer-Colbrie and Schoen [FCS80]. A few years later Pogorelov gave another proof [Pog81]. (1) A complete, stable, orientable minimal surface in R 3 must be a plane. (2) If M is a stable, orientable minimal surface in R 3 , then |A(p)| dist(p, ∂M ) ≤ C for some C < ∞. As usual, (1) and (2) are equivalent. Also, a version of (2) holds in Riemannian 3-manifolds. In some ways, Fischer-Colbrie and Schoen get the best results, because they get results in 3-manifolds of nonnegative scalar curvature that include (1) as a special case. However, the proof below is a slight modification 13 of Pogorelov's. First we prove some preliminary results. This is actually a very general fact about the lowest eigenvalue of self-adjoint, second-order elliptic operators (first proved by Barta for the Laplace operator). In particular, theorem 30 is true in Riemannian manifolds with |A| 2 replaced by |A| 2 + Ricci(ν, ν). See [FCS80] or [CM11, 1. §8, proposition 1.39] for the proof. 13 Here corollary 33 is used in place of one of the lemmas that Pogorelov proves. Corollary 31. Let M be as in theorem 30. If M is stable, then so is its universal cover. Proof. Lift the function u from M to its universal cover. Proposition 32. Let M be a complete, simply connected surface with K ≤ 0. Let A(r) = A p (r) be the area of the geodesic ball B r of radius r about some point p. Let Note that θ(M ) is an intrinsic analog of Θ(M ), the density at infinity of a properly immersed minimal surface (without boundary) in Euclidean space discussed in lecture 1. 
Proof. Let L(r) be the length of ∂B r . Then A = L, so (The formula for L is a special case of the first variation formula.) Thus The result follows easily. Corollary 33. If M (as above) is a minimal surface in R 3 and if θ(M ) < 3, then M is a plane. Lemma 34 (Pogorelov). Let M ⊂ R 3 be a simply connected, minimal immersed surface. Suppose B R is a geodesic ball in M of radius R about some point p ∈ M such that the interior of B R contains no points of ∂M , i.e., such that dist(p, ∂M ) ≥ R. If A(R) := area(B R ) > 4 3 πR 2 , then B R is unstable. Proof. We may assume that M = B R . To prove instability, it suffices (by the second variation formula) to find a function u in B R with u|∂B R = 0 such that Q(u) < 0, where (The second equality holds because |H| 2 = |A| 2 + 2K for any surface.) Let r and θ be geodesic polar coordinates in M centered at the point p. Thus the metric has the form ds 2 = dr 2 + g 2 dθ 2 for some nonnegative function g(r, θ) such that g(0, 0) = 0 and g r (0, 0) = 1. Recall that the Gauss curvature is given by Thus the second integral in (13) becomes Integrating by parts twice gives Now let u(r, θ) = u(r) = (R − r)/R, so that u(r) decreases linearly from u(0) = 1 to u(R) = 0. Then the last integral in (14) vanishes, and (u r ) 2 = |∇u| 2 = 1/R 2 , so combining (13) and (14) gives 3 πR 2 . Now we can give the proof of theorem 29: Proof. By corollary 31, we may assume that M is simply connected. Suppose that M is not a plane. Then by corollary 33, so A(r) πr 2 > 4 3 for large r. But then M is unstable by lemma 34. Existence and Regularity of Least-Area Surfaces Our goal today is existence and regularity of a surface of least area bounded by a smooth, simple closed curve Γ in R N . As mentioned in lecture 1, the nature of such surfaces depends (in an interesting way!) on what we mean by "surface", "area", and "bounded by". There are different possible definitions of these terms, and they lead to different versions of the Plateau problem. In the most classical version of the Plateau problem: "surface" means "continuous mapping F : D → R n of a disk", "bounded by Γ" means "such that F : ∂D → Γ is a monotonic parametrization", and "area" means "mapping area" (as in multivariable calculus): (F x = ∂F/∂y and F y = ∂F/∂y.) Theorem 35 (The Douglas-Rado Theorem). Let Γ be a smooth, simple closed curve in R N . Let C be the class of continuous maps F : D → R N such that F |D is locally Lipschitz and such that F : ∂D → Γ is a monotonic parametrization. Then there exists a map F ∈ C that minimizes the mapping area A(F ). Indeed, there exists such a map that is harmonic and almost conformal, and that is a smooth immersion on D except (possibly) at isolated points ("branch points"). Remark. The theorem remains true even if Γ is just a continuous simple closed curve, provided one assumes that the class C contains a map of finite area. (If Γ is smooth, or, more generally, if it has finite arclength, then C does contain a finitearea map.) With fairly minor modifications, the proof presented below establishes the more general result. See [Law80], for example, for details. Even more generally, Douglas proved that C contains a harmonic, almost conformal map without the assumption that C contains a finite-area map. Morrey [Mor48] generalized the Douglas-Rado theorem by replacing R n by a general Riemannian n-manifold N under a rather mild hypothesis ("homogeneous regularity") on the behavior of N at infinity. 
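The displayed definitions of the mapping area (above) and of the Dirichlet energy (in the next passage) are missing from the reproduced text; in the standard notation $F_x=\partial F/\partial x$ and $F_y=\partial F/\partial y$ they are
\[
A(F)=\int_D \sqrt{|F_x|^2\,|F_y|^2-(F_x\cdot F_y)^2}\;dx\,dy,
\qquad
E(F)=\frac12\int_D\big(|F_x|^2+|F_y|^2\big)\,dx\,dy,
\]
and the area–energy inequality of Lemma 36 is $A(F)\le E(F)$, with equality if and only if $F$ is almost conformal ($|F_x|=|F_y|$ and $F_x\cdot F_y=0$ almost everywhere), exactly as used in the proof that follows.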
We say that a continuous map φ : ∂D → Γ is a monotonic parametrization provided it is continuous, surjective, and has the following property: the inverse image of each point in Γ is a connected subset of ∂D. Roughly speaking, this means that if a point p goes once around ∂D, always moving in one direction (e.g., counterclockwise), then φ(p) goes once around Γ, always in one direction. (Note that φ is allowed to map arcs of ∂D to a single points in Γ.) Note that we need some condition on F to guarantee that A(F ) makes sense. Requiring that F be locally Lipschitz on D is such a condition, since such an F is differentiable almost everywhere by Rademacher's Theorem. (Alternatively, one could work in the Sobolev space of mappings whose first derivatives are in L 2 .) The most natural approach for proving existence for this (or any other minimization problem) is the "direct method", which we describe now. Let α be the infimum of A(F ) among all F ∈ C. Then there exists a minimizing sequence F i , i.e., a sequence F i ∈ C such that A(F i ) → α. Now one hopes that there exists a subsequence F i(j) that converges to a limit F ∈ C with A(F ) = α. For the direct method to work, one needs two ingredients: a compactness theorem (to guarantee existence of a subsequential limit F ∈ C), and lowersemicontinuity of the functional A(·) (to guarantee that A(F ) = α.) For the Plateau problem, a minimizing sequence need not have a convergent subsequence 14 . For example, there exists a minimizing sequence F i such that the images F i (D) converge as sets to all of R N : (Think of F i (D) as a flat disk with a long, thin tentacle attached near the center. Even if the tentacle is very long, its area can be made arbitrarily small by making it sufficiently thin. By making the tentacle meander more and more as i → ∞, we can arrange for (15) to hold, even though A(F i ) converges to the area of the flat disk.) One can also find a minimizing sequence F i such that F i | D converges pointwise to a constant map. For example, suppose that Γ is the unit circle x 2 + y 2 = 1 in the plane z = 0, and let To avoid such pathologies, instead of using an arbitrary minimizing sequence, we choose a well-behaved minimizing sequence. For that, we make use of the energy functional. The energy of a map F : We need several facts about energy: Lemma 36 (Area-Energy Inequality). For F ∈ C, with equality if and only if F is almost conformal. Proof. For any two vectors u and v in R n , with equality if and only if u are v orthogonal and have the same length. Apply that fact to F x and F y and integrate. Lemma 37. Suppose F : D → R N is smooth and harmonic. Then for all smooth G : D → R N with G|∂D = F |∂D, with equality if and only if G = F . 14 In the geometric measure theory approach to Plateau's problem, one works with a class of surfaces and a suitable notion of convergence for which minimizing sequences do have convergent subsequences. One disadvantage (compared to the classical approach described here) is that a limit of simply connected surfaces need not not be simply connected. . (The proof actually shows that the lemma holds for domains in arbitrary Riemannian manifolds, and in the Sobolev space of mappings whose first derivatives are in L 2 .) Proof of the Douglas-Rado Theorem. Let α = inf{A(F ) : F ∈ C}. We begin with four claims, each of which implies that we can find a minimizing sequence consisting of functions in C with some additional desirable properties. Claim 1. 
For every β > α, there is a smooth map F ∈ C with A(F ) < β. Proof of claim 1. We will show that there is an F ∈ C such that F is Lipschitz on D, such that F is smooth near ∂D, and such that A(F ) < β. The assertion of claim 1 then readily follows by standard approximation theorems. By definition of α, there is an G ∈ C with A(G) < β. Let R > 0 be the reach of the curve Γ, i.e., the supremum of numbers ρ such that every point p with dist(p, Γ) < ρ has a unique nearest point Π(p) in Γ. For δ < R/2, let Φ δ : R n → R n be the map if dist(p, Γ) ≤ δ, and Then for every δ ∈ (0, R), the map Φ δ •G is in the class C. Furthermore, Note that there is an r with 0 < r < 1 such that F maps the annular region A := {z : r ≤ |z| ≤ 1} to Γ: F (A) = Γ. Now it is straightforward to modify the definition of F on A so that F (A) remains Γ, so that F is Lipschitz, and so that F is smooth near ∂D and maps ∂D diffeomorphically to ∂D. (Note that this modification does not change A(F ).) Claim 2. If β > α, then there exists a smooth map G ∈ C with E(G) ≤ β. Since A(G) ≤ E(G) for every map G, claim 2 is stronger than claim 1. Proof of claim 2. By claim 1, there is a smooth map F ∈ C with A(F ) < β. Although F is smooth, its image need not be a smooth surface. That is, F need not be an immersion. To get around this, for δ > 0, we define a new map By choosing δ small, we can assume that A(F δ ) < β. Now F δ (D) is a smooth, embedded disk. Hence (by existence of conformal coordinates and the Riemann mapping theorem), we can parametrize F δ (D) by a smooth conformal map Φ : where E(Φ) = A(Φ) by conformality of Φ and where A(Φ) = A(F δ ) because Φ and F δ parametrize the same surface. Claim 3. For every β > α, there is a smooth harmonic map F ∈ C such that E(F ) ≤ β. Proof of claim 3. By claim 2, there is a smooth map G ∈ C with E(G) < β. Now let F : D → R n be the harmonic map with the same boundary values as G. By lemma 37, E(F ) ≤ E(G) < β. Claim 4. For every β > α, there is a smooth harmonic map F ∈ C such that F maps a, b, and c toâ,b, andĉ. Proof. By claim 3, there is a smooth harmonic map F ∈ C such that E(F ) < β. Let a , b , and c be points in ∂D that are mapped by F toâ,b, andĉ. Let u : D → D be the unique conformal diffeomorphism that maps a, b, and c to a , b , and c . Then F • u has the desired properties. (For any map F with a two-dimensional domain and for any conformal diffeomorphism u of the domain, note that E(F ) = E(F • u), and that if F is harmonic, then so is F • u.) By claim 4, we can find a sequence of smooth, harmonic maps F i ∈ C such that Furthermore, we can choose the F i so that they map a, b, and c in ∂D toâ,b, and c in Γ. By the maximum principle for harmonic functions (applied to L • F i , for each linear function L : R n → R), the F i are uniformly bounded: Thus by passing to a subsequence, we can assume that the F i converge smoothly on the interior of the disk 15 to a harmonic map F . However, we need uniform convergence on the closed disk. Claim (Equicontinuity). The maps F i are equicontinuous. Proof of equicontinuity. Suppose not. Then (by the smooth convergence on the interior) there exist point p i ∈ ∂D and q i ∈ D such that and such that |F i (p i ) − F i (q i )| → 0. 
By passing to a subsquence (and by relabeling, if necessary) we may assume that the p i converge to a point p ∈ ∂D that does not 15 For readers not familiar with this fact about harmonic maps (which holds more generally for solutions of second-order linear elliptic partial differential equations under mild conditions on the coefficients), note that each coordinate of F i is the real part of a holomorphic function. By (16), those holomorphic functions take values in a strip in the complex plane, and hence form a normal family. lie on the closed arc joining b to c (and disjoint from a.) Let E = sup i E(F i ). By the Courant-Lebesgue Lemma (lemma 38 below), there exist arcs The boundary of D i consists of two arcs, C i and an arc C i in ∂D, namely B(p i , r i ) ∩ ∂D. The two arcs have the same endpoints. Since the length of F (C i ) tends to 0, the distance between the endpoints tends to 0. Thus F (C i ) is an arc in Γ, and the distance between the endpoints tends to 0. Thus, for large i, F (C i ) is either a (i) very short arc in Γ or (ii) all of Γ except for a very short arc. Since F (C i ) contains F (p) and (for large i) is disjoint from the arc in Γ joiningb toĉ, in fact F (C i ) must be very short arc in Γ: its length tends to 0 as i → ∞. We have shown that the arclength and therefore the diameter 16 of F (∂D i ) tends to 0. By the maximum principle for harmonic functions, This completes the proof of equicontinuity. By equicontinuity, we can (by passing to a subsequence) assume that the F i converge uniformly on D to a limit map F . As already mentioned, F is harmonic on the interior. The uniform convergence implies that F ∈ C, so Since A(F ) = E(F ), the map is almost conformal. Lemma 38 (Courant-Lebesgue Lemma). Let Ω ⊂ R 2 and F : Ω → R n be a map with energy E. Let p be a point in R 2 and let L(r) be the arclength of F |∂B(p, r). Consequently, Proof. It suffices to consider the case p = 0. Using polar coordinates, 16 The diameter of a subset of a metric space is the supremum of the distance between pairs of points in the subset. Thus |DF | 2 r dθ dr = 4πE. Boundary regularity The Douglas-Rado Theorem produces an almost conformal, harmonic map F that is continuous on the closed disk and is such that F |∂D gives a monotonic parametrization of the curve Γ. It is not hard to show that any such map (whether or not it minimizes area) cannot be constant on any arc of ∂D. (See for example [Oss86,lemma 7.4] or [Law80,proposition 11].) It follows from the monotonicity of F |∂D that F : ∂D → Γ is a homeomorphism. Later, every such map was proved to be smooth on the closed disk provided Γ is smooth. Roughly speaking, such a map F : D → R n turns out to be as regular as Γ. For example, if Γ is C k,α for some k ≥ 1 and α ∈ (0, 1), then so is F , and if Γ is analytic, then so is F . (Lewy first proved that minimal surfaces in R n with analytic boundary curves are analytic up to the boundary. The fundamental breakthrough was due to Hildebrandt [Hil69], who, in the case of area-minimizing surfaces, extended Lewy's result to arbitrary ambient manifolds and who also proved the corresponding result for C 4 boundaries. Later Heinz and Hildebrandt [HH70] proved such results for surfaces that are minimal but not necessarily area minimizing. See also [Kin69].) Branch points Let F : D → R N be a non-constant, harmonic, almost conformal map (such as given by the Douglas-Rado theorem). Recall that harmonicity of F means that the map F z = 1 2 (F x − iF y ) from D to C n is holomorphic. 
Thus F z can vanish only at isolated points. Those points are called "branch points". Away from the branch points, the map is a smooth, conformal immersion. Using the Weierstrass representation, it is easy to give examples of minimal surfaces with branch points. (The branch points are the points where g has a pole of order m (possibly 0) and where ν has a zero of order strictly greater than 2m.) But are there area-minimizing examples? The following theorem implies that there are such examples in R n for n ≥ 4: Theorem 39 (Federer, following Wirtinger). Let M be a complex variety in C n . Then (as a real variety in R 2n ) M is absolutely area minimizing in the following sense: if S is a compact portion of M , and if S is an oriented variety with the same oriented boundary as S, then area(S) ≤ area(S ). Here "with the same oriented boundary" means that ∂S = ∂S and that S and S induce the same orientation on the boundary. For the proof, see [Fed65] or [Law80,. Using the Federer-Wirtinger Theorem, we can give many examples of branched, area-minimizing surfaces. For example, the map has a branch point at the origin and is area-minimizing by the Federer-Wirtinger Theorem. Whether there exist any examples other than the ones provided by the Federer-Wirtinger Theorem is a very interesting open question. In other words, must a connected least-area surface with a true 17 branch point in R 2n ∼ = C n be holomorphic after a suitable rotation of R 2n ? The paper [MW95] is suggestive in this regard. The theorems of Gulliver and Osserman Osserman and Gulliver proved in R 3 (or more generally in any Riemannian 3manifold) that the Douglas-Rado solution cannot have any interior branch points. 18 Thus (away from the boundary), the map F is a smooth immersion. Whether the map F in the Douglas-Rado Theorem can have boundary branch points (for a 3-dimensional manifold) is one of longest open questions in minimal surface theory. Using the Federer-Wirtinger Theorem, one can give examples in R n for n ≥ 4, such as There are some situations in which boundary branch points are known not to occur: (1) If Γ lies on the boundary of a compact, strictly convex region in R n . In this case, one need not assume area minimizing: minimality suffices. (The proof is a slight modification of the proof of theorem 15, together with the Hopf boundary point theorem.) (2) If Γ is a real analytic curve in R n or more generally in an analytic Riemannian manifold [Whi97]. Higher genus surfaces Let Γ be a simple closed curve in R n . Does Γ bound a least-area surface of genus one? Not necessarily. Consider a planar circle Γ in R 3 . By the convex hull principle (theorem 15), Γ bounds only one minimal surface: the flat disk M bounded by Γ. We can take a minimizing sequence of genus one surfaces, but (for this example) in the limit, the handle shrinks to point, and we end up with the disk. 17 A branch point p ∈ D of F is called false if there is a neighborhood U of p such that the image F (U ) is a smooth, embedded surface. Otherwise the branch point is true. For example, if F : D ⊂ C → R n is a smooth immersion, the z → F (z 2 ) has a false branch point at z = 0. 18 Osserman ruled out true branch points in R 3 , and Gulliver extended Osserman's result to 3-manifolds and also ruled out false branch points. Alt [Alt72] independently proved some of Gulliver's results. See [CM11], [Law80], [DHS10], or the original papers for details. Technically speaking, the planar circle does bound a least area genus 1 suface in the sense of mappings. 
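The branched, area-minimizing example alluded to above (for interior branch points, and, after restriction, for boundary branch points) is presumably the classical holomorphic one; the formula below is offered as an illustration rather than as the lecture's own display.

```latex
F : D \subset \mathbf{C} \to \mathbf{C}^2 \cong \mathbf{R}^4, \qquad F(z) = (z^2,\, z^3).
% F is holomorphic, so its image is a complex curve and F is area minimizing by the
% Federer--Wirtinger theorem. F_z = (2z, 3z^2) vanishes at z = 0, so the origin is a
% branch point; it is a true branch point because the image \{(w_1,w_2): w_2^2 = w_1^3\}
% is not a smooth surface at the origin. Restricting F to a disk whose boundary passes
% through 0 gives an area-minimizing disk with a boundary branch point in R^4.
```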
Let Σ be a smooth genus 1 surface consisting of a disk with a handle attached. There is a smooth map F : Σ → M that collapses the handle to the center p of the disk M bounded by Γ, and that maps the rest of Σ diffeomorphically to M \ {p}. However, there is no "nice" area-minimizing map F : Σ → R 3 with boundary Γ. For example, there is no such map that is an immersion except at isolated points. Definition. Let Γ be a smooth, simple closed curve in R n . Let α(g) be the infimum of the area of genus g surfaces bounded by Γ. Proposition 40. α(g) ≤ α(g − 1) for every g ≥ 1. Proof. Take a surface of genus g − 1 whose area is close to α(g − 1), and then attach a very small handle. Theorem 41 (Douglas 19 ). If α(g) < α(g − 1), then there exists a domain Σ consisting of a genus g Riemann surface with an open disk removed, and a continuous map F : Σ → R n that is harmonic and almost conformal in the interior of Σ, that maps ∂Σ monotonically onto Γ, and that has area A(F ) equal to α(g). The proof is similar to the proof of the Douglas-Rado Theorem, but more complicated because not all genus g domains are conformally equivalent. For example, up to conformal equivalence, there is a 3-parameter family of genus-one domains with one boundary component. As a result, we have to vary the domain as well as the map. The Douglas theorem can be restated slightly informally as follows: Theorem 42. Let g be a nonnegative integer. The least area among all surfaces of genus ≤ g bounded by Γ is attained by a harmonic, almost conformal map. The theorems of Gulliver and Osserman also hold for these higher genus surfaces: in R 3 (and in 3-manifolds) they must be smooth immersions except possibly at the boundary. Summary: For a fixed genus g and curve Γ, we cannot in general minimize area among surfaces of genus equal to g and get a nice surface: minimizing sequences may converge to surfaces of lower genus. However, we can always minimize area among surfaces of genus ≤ g: the minimum will be attained by a harmonic, almost conformal map. Intuitively, the Douglas theorem is true because when we take the limit of a minimizing sequence of genus g surfaces, we can lose handles but we cannot gain them. 19 It seems that Douglas never gave a complete proof of "Douglas's Theorem". A result very similar to Douglas's Theorem, but for minimal surfaces without boundary in Riemannian manifolds, was proved by Schoen and Yau [SY79]. Later, Jost [Jos85] gave a complete proof of Douglas's original theorem. What happens as the genus increases? Fix a smooth, simple closed curve Γ in R 3 . As above, we let α(g) denote the least area among genus g surfaces bounded by Γ. According to proposition 40, α(g) is a decreasing function of g. The following provides a sufficient condition for α(g) to be strictly less than α(g − 1). (Recall that by the Douglas Theorem, the strict inequality implies existence of a least-area genus g surface bounded by Γ.) Theorem 43. Suppose M ⊂ R 3 is a minimal surface of genus (g − 1) bounded by Γ. Suppose also that M \ Γ is not embedded. Then Γ bounds a genus g surface whose area is strictly less than the area of M . In particular, if area(M ) = α(g − 1), then α(g) < α(g − 1). Proof. One can show that if M \ Γ is not embedded, then there is a curve C along which two portions of M cross transversely. (There may be many such curves.) We will use that fact without proof here. Note that we can cut and paste M along an arc of C to get a new surface M * . There are two ways to do the surgery: one produces an orientable surface and the other a non-orientable surface.
We do the surgery that makes M * orientable. The new surface is piecewise smooth but not smooth. It has the same area as M and has genus g. By rounding the corners of M * , we can make a new genus g surface whose area is strictly less than area(M * ) = area(M ). Proof. The genus of a simple closed curve in R 3 is defined to be the smallest genus of any embedded minimal surface bounded by the curve. Using elementary knot theory, one can show that there are smooth curves of every genus. Let Γ be such a curve of genus g. For k = 0, 2, . . . , g − 1, let M k be a least-area surface of genus ≤ k bounded by Γ, so that area(M k ) = α(k). Since Γ has genus g > k, the surface M k cannot be embedded. Therefore α(k + 1) < α(k) by theorem 43. Actually, the relevant notion is not the genus of Γ, but rather the "convex hull genus" of Γ: the smallest possible genus of an embedded surface bounded by Γ and lying in the convex hull of Γ. Theorem 45 (Almgren-Thurston [AT77]). For every > 0 and for every positive integer g, there exists a smooth, unknotted, simple closed curve Γ in R 3 whose convex hull genus is g and whose total curvature is less than 4π + . (Recall that the total curvature of a smooth curve is the integral with respect to arclength of the norm of the curvature vector.) Later Hubbard [Hub80] gave a beautiful, very simple proof of this theorem and gave an explicit formula for calculating the convex hull genus of a large, interesting family of curves. Theorem 46. For every > 0 and for every positive integer g, there exists a smooth, unknotted simple closed curve Γ in R 3 with total curvature ≤ 4π + such that α(0) > α(1) > · · · > α(g), and such that for every k < g, each genus-k least area surface is non-embedded. Proof. Let Γ be a curve satisfying the conclusion of theorem 45. By the convex hull property (theorem 15), any embedded minimal surface bounded by Γ has genus ≥ g. The rest of the proof is exactly the same as the proof of theorem 44. However, for a smooth curve, eventually the function α(·) must stabilize according to the following theorem of Hardt and Simon [HS79]: Theorem 47. Let Γ be a smooth simple closed curve in R 3 . Let α = inf α(·) be the infimum of the areas of all orientable surfaces bounded by Γ. Then (1) The infimum is attained, and any surface that attains the infimum is smoothly embedded (including at the boundary). (2) The set of surfaces that attain the infimum is finite. In particular, if g is the genus of a surface that attains the infimum, then the α(g) ≡ α(k) for all k ≥ g. On the other hand, one can construct a simple closed curve Γ that is smooth except at one point such that α(g) > α(g + 1) for all g. For example, take such a curve of infinite genus or or even just of infinite convex hull genus, or see [Alm69] for an example (due to Fleming [Fle56]) for which α(·) is strictly decreasing and for which the Douglas solutions are all embedded. Indeed, all kinds of pathologies can happen once one allows a point at which the curve is not smooth: Theorem 48. [Whi94, 1.3] There exists a simple closed curve Γ in ∂B(0, 1) ⊂ R 3 and a number A < ∞ such that Γ is smooth except at one point p and such that the following holds: for every area a ∈ [A, ∞], for every genus g with 0 ≤ g ≤ ∞, and for every index I with 0 ≤ I ≤ ∞, the curve Γ bounds uncountably many embedded minimal surfaces that are smooth except at p and that have area a, genus g, and index 20 of instability I. 
This is in sharp contrast to the case of an everywhere smooth, simple closed curve Γ in the boundary of a convex set in R 3 . For such a curve, one can show that for each genus g < ∞, the set of embedded genus-g minimal surfaces bounded by Γ is compact with respect to smooth convergence [Whi87b]. It follows that (for each g) the set of possible indices of instability is finite. With a little more work, one can show that the set of areas of such surfaces (for each g) is a finite set. Of course, if Γ is smooth, then the areas of all the minimal surfaces (regardless of genus) are bounded above according to theorem 2. Embeddedness: The Meeks-Yau Theorem Theorem 49 (Meeks-Yau [MY82]). Let N be a Riemannian 3-manifold and let F : D → N be a least-area disk (parametrized almost conformally) with a smooth boundary curve Γ. Suppose the image of the open disk, F (D \ ∂D), is disjoint from Γ. Then F is a smooth embedding. The disjointness hypothesis holds in many situations of interest. In particular, it holds if Γ lies on the boundary of a compact, convex subset of R 3 . (This follows from the strong maximum principle.) More generally, it holds if N is a compact, mean convex 3-manifold and if Γ lies in ∂N . (Mean convexity of N means that the mean curvature vector at each point of the boundary is a nonnegative multiple of the inward unit normal.) Idea of the proof. Suppose M is immersed but not embedded. One can show that it contains an arc along which it intersects itself transversely. One can cut and paste M along such arcs to get a new piecewise smooth (but not smooth) surface M̃. Such surgery is likely to produce a surface of higher genus (as in the proof of theorem 43). However, Meeks and Yau show that it is possible to do the surgery (simultaneously on many arcs) in such a way that M̃ is still a disk. Thus area(M̃) = area(M ), so M̃ is also area minimizing. However, where M̃ has corners, one can round the corners to get a disk with less area than M̃, a contradiction.
The one‐dimensional model for an elliptic equation in a perforated thin anisotropic heterogeneous three‐dimensional structure In this paper, we investigate the one‐dimensional model for a thin three‐dimensional structure Ω^ε$$ {\hat{\Omega}}_{\varepsilon } $$ in the framework of the thermal conduction. The structure is characterized by two small positive parameters ε$$ \varepsilon $$ and rε$$ {r}_{\varepsilon } $$ . The first parameter ε$$ \varepsilon $$ corresponds to the thickness of the structure while the second one characterizes the thickness of its core Tε$$ {T}_{\varepsilon } $$ which plays the role of a “hole.” The structure is assumed to be heterogeneous and anisotropic, and we deal with three cases related to the limit limε→0ε2|ln(rε)|=k,k∈{0,1,+∞}$$ \underset{\varepsilon \to 0}{\lim}\kern.3em {\varepsilon}^2\mid \ln \left({r}_{\varepsilon}\right)\mid =k,k\in \left\{0,1,+\infty \right\} $$ . We exhibit the “strange” term appearing in the one‐dimensional model in the critical case k=1$$ k=1 $$ , and we highlight the effect of the anisotropy on the form of the corrector for uε$$ {u}_{\varepsilon } $$ . Introducing the classical change of variables and unknowns we get easily from (1.1) the following variational equation satisfied by u : where ∇ ′ denotes the gradient with respect to the two first variables = ( 1 , 2 ). Remark 1.1. For the sake of simplicity, we have assumed that the hole has a cylindrical form with a straight section r D although the study may be performed assuming only the hole defined as a cylinder with a section d T, T being a closed set of R 2 such that there exists ∈ (0, 1∕2) satisfying D ⊂ T ⊂ Y where D denotes the disk with radius centered at the origin. In the sequel, functions of H 1 D (Ω ) are implicitly extended by zero inside the hole so that it may be considered as elements of Under classical hypotheses on the matrix A and the source term (see below), existence and uniqueness of the solution u of (1.9) for fixed are an immediate consequence of the Lax-Milgram Theorem. (1.9) Remark 1.2. In the sequel and for the sake of brevity, the study will concern only the sequence u defined on the fixed domain Ω from which one can deduce the behavior of the average over Y of different quantities related to the sequenceū defined on the variable domain Ω as it was done for instance in Gaudiello and Sili. 1 The isotropic setting for a thin structure having a hole was addressed in Murat and Sili, 2 and the analysis was based on the use of test function introduced in the study of periodic homogenization problems in perforated domains; see Cioranescu and Murat. 3 The asymptotic analysis shows that for the critical size of the hole lim →0 2 | ln(r )| = 1, a zero order term appears in the one-dimensional limit equation obtained from (1.9) by letting → 0. In the literature, this term is sometimes called "strange term." In the case of a simple reduction of dimension without a hole, see Murat and Sili,4,5 it is known that the anisotropy of the material generally leads to the introduction of additional terms in the limit diffusion coefficients. More precisely, the limit of the sequence of the rescaled transverse temperature gradients 1 ∇ ′ u requires more attention since their limit which is proved to be still a gradient ∇ ′ w (the gradient with respect to the two first horizontal variable) is in fact the quantity that takes into account the anisotropy of the material at the limit. 
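As a hedged note on notation: the rescaled gradient operator ∇ ε used throughout (defined in the paper's equation (1.8), not reproduced here) is presumably the standard anisotropic scaling for thin-domain problems, namely:

```latex
% Assumed form of the rescaled gradient after mapping the thin structure onto the fixed domain:
\nabla_{\varepsilon} v \;=\; \Big(\tfrac{1}{\varepsilon}\,\partial_{x_1} v,\ \tfrac{1}{\varepsilon}\,\partial_{x_2} v,\ \partial_{x_3} v\Big),
\qquad \nabla' v \;=\; (\partial_{x_1} v,\ \partial_{x_2} v),
% so that a priori bounds on \nabla_\varepsilon u_\varepsilon are exactly what control the
% rescaled transverse temperature gradients \tfrac{1}{\varepsilon}\nabla' u_\varepsilon
% discussed above; this is an assumption about the intended definition, not a quotation of it.
```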
For orthotropic media (including isotropic ones), the entries of the diffusion matrix are such that A 13 = A 23 = 0. In that case, the limit ∇ ′ w reduces to zero. Similar situation arises in the framework of linear elasticity; see Sili. 5 In the terminology of correctors (see Bensoussan et al. and Tartar 6,7 ), the anisotropy introduces the additional term w in the corrector of u since in some sense, u behaves like u ∼ u(x 3 ) + w as proved in Murat and Sili 4 and Sili. 8 In this work, we aim to investigate the effect of the anisotropy in the limit diffusion when the structure contains a hole. Unfortunately, the test function used in Murat and Sili 2 which is an adaptation to the reduction of dimension problem of the test function already used in Cioranescu and Murat 3 for the homogenization in domains with holes is not suitable for the anisotropic case, and thus, it cannot be used in the asymptotic analysis. Following an idea introduced in Casado-Diaz 9,10 in the study of the homogenization of monotone operators in domains with holes and based on a judicious adaptation of the two-scale convergence method of Arbogast, Douglas and Hornung (see their work 11 ) which was developed later by Nguetseng 12 and Allaire, 13 we identify the limit one-dimensional model, and we prove the strong convergence of the sequence u for the norm of H 1 (Ω) associated to the operator ∇ ; namely, setting lim (1.17) in the critical case k = 1. Note that for k = 1, the sequence ∕ √ | ln(d ) (d defined in 1.11) occurring in the definition of w is equivalent to 2 for small . The convergence result implies in particular that ||u − (u(x 3 ) + w)w || H 1 (Ω → 0 and that the sequence of transversal temperature gradients 1 ∇ ′ u behaves as w ∇ ′ w + (u + w) 1 ∇ ′ w . We establish this corrector result under a weak assumption on the regularity of the matrix A. The limit problem is given by (1.21) where is defined by (1.16) in the case k = 1 and = 0 in the case k = +∞. For k = 0, the sequence u converges strongly to zero in H 1 (Ω). In the case k = +∞, our result means that the hole does not affect the form of the limit problem nor that of the corrector which is identical to the one found in Murat and Sili 4,5 and Sili. 8 We end this introduction pointing out once again that a method a priori intended for the study of homogenization problems is successfully applied to the study of a dimension reduction problem. The main reason is related to the fact that we consider here a thin structure the configuration of which may be identified as a representative cell of a periodic homogenization of a composite fibered medium as pointed out in Paroni and Sili 14 and Murat and Sili. 2 Note that the corresponding homogenization problems were also addressed in the last two references, and the homogenized problem is shown to be a copy of the one-dimensional limit problem obtained in the reduction of dimension occurring locally in each cell. The comparison between the homogenized problem and the local reduction of dimension is however not possible in the case of homogenization with oscillating boundaries due to the effect induced by the oscillations of the boundaries; see previous studies. 1,15,16 Before stating our main result, we make more precise our assumptions. We assume the following: and is continuous with respect to the variable , A ∈ (L ∞ (Ω)) 3×3 and A is continuous with respect to the variable , there exists c > 0, such that A ≥ c| | 2 ∀ ∈ R 3 . 
(1.10) As in Casado-Diaz, 10 we now introduce the following change of variables and unknowns already used in periodic homogenization which allows to deal with a sequence of functionsû which are constant with respect to the macroscopic variable in each cell. Note that this change of variables allows us to derive a strong compactness result on the sequenceû . Indeed, for instance in the case k = 1 for which | ln(r )| is equivalent to 1∕ 2 and according to (2.4) below, we get a priori estimates on the sequence 1 2û in L 2 ((0, R) × (0, 2 ) × (0, L)) for all 0 < R < 2; a priori estimate of this kind on the sequence u is out of reach. The space H 1 m (0, 2 ) is defined as the subspace of H 1 (0, 2 ) of functions with zero average over (0, 2 ). In order to built the corrector, that is, an approximation for the sequence u in some strong topology and also the strange term arising in the one-dimensional model, we need to introduce the unique solution (û 0 ,û 1 ) of the variational problem (1.15) The strange term arising at the limit is then a function (x 3 ) of the vertical variable defined by zdrd , a.e. x 3 ∈ (0, L), (1.16) while the corrector will be obtained with the help of the sequence ≃ 2 in the case k = 1, so that for k = 1, w may be equivalently written as w ( , We also need to define the elementary equation which allows to give a simple expression of the limit diffusivity coefficient when dealing with anisotropic materials. ) , 3 given in (0, L), the existence and uniqueness ofŵ is a consequence of the Lax-Milgram Theorem applied in the space H 1 m (Y ) equipped with the norm ||∇ ′ u|| (L 2 (Y )) 2 . Using the hypotheses (1.10) on the matrix A, one can prove that in factŵ ∈ L ∞ (0, L; H 1 m (Y )); see also Sili. 8 We also can check that w = 0 if the matrix A fulfills A 13 = A 23 = 0 as it is the case for isotropic materials; see Sili. 8 The limit diffusivity coefficient is then defined by our main result may be stated as follows: where ∈ L ∞ (0, L), > 0 a.e. in (0, L), is defined by (1.16). If k = +∞, the sequence u converges weakly in H 1 D (Ω) to the unique solution of If k = 0, the sequence u converges strongly to zero in H 1 (Ω). The following corrector result holds: assuming Moreover, if the solution u of (1.21) or (1.22) is such that u ∈ H 2 0 (0, L), then the following convergence holds true: Remark 1.6. Theorem 1.5 states that the strong convergence in H 1 (Ω) of the sequence u occurs without additional assumptions only in the trivial case k = 0. For the two other cases k = 1 and k = +∞, the convergence is in general a weak convergence; however, if the pairs (û 0 ,û 1 ) and u,ŵ are sufficiently regular, then one has the corrector result (1.23) from which one can deduce (1.24) with the help of the Poincaré inequality and then the strong convergence of u in H 1 (Ω) since the sequence w strongly converges to 1 in H 1 (Ω). By construction,û 0 andû 1 are H 1 with respect to r and , and thus, the regularity hypothesis on the derivative with respect to x 3 ofû 0 andû 1 makes the sequence w defined in (1.17) converge strongly to 1 in H 1 (Ω). Note that such regularity hypothesis is not out of reach since it is ensured as soon as the entries A are not depending on x 3 leading toû 0 andû 1 constant with respect to x 3 as shown by (1.15). In a similar manner, the regularity assumptions made onŵ are reached at least as soon as the matrix does not depend on x 3 . 
The hypothesis du∕dx 3 ∈ H 1 0 (0, L) allows to deduce (1.24) from the convergence of the gradients (1.23) with the help of the Poincaré inequality; that hypothesis is reached in case of a regular matrix A. Remark 1.7. The corrector result states that u behaves as (u + w)w where w =ŵ(du∕dx 3 ). Equivalently, one can say that u behaves as uw + w. Indeed, one has (u + w)w − (uw + w) = (w − 1) w while under the same hypotheses on w stated in Theorem 1.5, one can check that (w − 1) w strongly converges to zero in H 1 (Ω) for the norm associated to ∇ ; that is, ∇ ((w − 1) w) strongly converges to zero in ( L 2 (Ω) ) 3 . In the case of the Laplacian, it is known that the corrector takes the form uw ; see Cioranescu and Murat 3 and Murat and Sili. 2 Hence, the role of the anisotropy appears here in the corrector through the term w or equivalently ww , w being a sequence which tends to 1 in the Finally, in order to complete the study, we link the approach followed here with the one using capacities, see Dal Maso and Garroni. 17,18 For that aim, we set and then we introduce the following variational problem: (1.26) Regarding the sequenceĉ , we prove the following theorem. (1.27) Remark 1.9. Theorem 1.8 states that the extra term given by the formula (1.16) is also given as a limit of capacities according to the general result of Dal Maso and Garroni. 17,18 In the following section, we will prove some a priori estimates which will be used in the last section to prove Theorem 1.5. As announced above, we now establish some a priori estimates in particular those leading to a strong compactness result on the sequenceû . A PRIORI ESTIMATES Proposition 2.1. The sequence u extended by zero to the whole Ω satisfies the a priori estimate 1) and there exist a subsequence of (still denoted by ) and two functions u ∈ H 1 The sequenceû satisfies the following a priori estimates: 4) and there exist a subsequence of and two functions (u 0 , u 1 ) such that In the case k = 1, u 0 (2, ). (2.5) In the case k = 0, the sequence u converges strongly to zero in H 1 (Ω). Proof. Choosing u as test function in (1.9), we get thanks to the coerciveness of the matrix A, Since u vanishes on the part Γ D of the boundary of Ω defined by the use of the Cauchy-Schwarz inequality in the last integral combined with the Poincaré inequality allows to get easily the estimate (2.1). Therefore, u is bounded in H 1 D (Ω), and there exist a subsequence of and a function u ∈ H 1 D (Ω) such that (2.2) holds true. The domain Ω being connected, the fact that u = u(x 3 ) ∈ H 1 0 (0, L) is a consequence of the estimate (2.1) which implies that ∇ ′ u strongly converges to zero in L 2 (Ω) and thus, ∇ ′ u = 0 in Ω. Noting that the sequence d (2−r) √ | ln(d )| converges uniformly to zero (with respect to r) in each interval [0, R] with 0 < R < 2, we derive easily the last convergence of (2.5) from the first one. Let us now prove the property u 0 (2, x 3 ) = u(x 3 ) a.e. in (0, L). The proof is given in Casado-Diaz 10 for k = 1 in the framework of the periodic homogenization; we reproduce it here including the case k = 0 for the convenience of the reader. First, by the Rellich-Kondrachov's Theorem, one can assume that u converges strongly to u in L 2 (Ω) so that using once again the change of variables (1.12), we infer Let us fix a constant such that 0 < < 1∕2. Defining R ′ by R ′ ∶= 2 − ln( ) ln(d ) < R , one can check that d 2(2−r) ≥ 2 for all r ≥ R ′ . 
In addition, one has ln(d ) = ln(2 )∕R − R ′ in such a way that (2.12) implies (2.13) Assume now that lim →0 2 ln(d ) = 1. Then the function u 0 and the related convergence of (2.5) still hold for the . To continue the proof, we need the following lemma the proof of which is based on elementary arguments. (2.15) From the weak lower semi-continuity of the norm and thanks to (2.13), we derive the following inequality by passing to the limit in (2.15): (2.16) which leads to the equality and then u 0 (2, In the case k ∶= lim →0 2 ln(d ) = 0, we remark that the first convergence in (2.5) holds true forû with the corresponding limit u 0 = 0. The same arguments as those used for k = 1 lead to (2.17) with u 0 = 0, and this allows to conclude that u = 0 in the case k = 0. It is then easy to deduce the strong convergence of u to zero in H 1 (Ω) from the coerciveness of the matrix A by passing to the limit in the following equality: (2.18) PROOF OF THE MAIN RESULTS We start by introducing the appropriate test function which will be used in order to pass to the limit in (1.9) when k takes the values 1 or +∞. Note that the case k = 0 is not concerned by what follows since the sequence u of solutions of (1.9) converges strongly to zero in H 1 (Ω) as proved in Proposition 2.1. Test function Following Casado-Diaz, 10 we choose two functions v 0 and v 1 where ( ) and ( ) are defined by (1.11). Regarding this function, we prove the following result. Proposition 3.1. The sequence v defined by (3.1) belongs to H 1 (Y ), vanishes in T , and satisfies the inequality For k = 1 or k = +∞, there exists a subsequence of such that v converges strongly to 1 in H 1 (Y ). Proof. Taking into account (1.11) and (1.14), we get with the help of the change of variables = d 2−r z, Due to (3.1) and (3.3), the change of variables in the integral on the left of (3.2) allows to deduce easily the inequality (3.2). On the other hand, by construction of v 0 and v 1 and since | | ≤ d implies 0 ≤ ( ) ≤ 1, we get v = 0 in T . Also one can check that the sequence v is bounded in L 2 (Y ) which combined with (3.2) implies the boundedness of v in H 1 (Y ) for k = 1 or k = +∞. Hence, one can assume up to extracting a subsequence that v converges weakly to some v ∈ H 1 (Y ). We obtain that v = 1 by checking that ∫ Y |v −1| 2 d → 0. Since | | ≥ d implies ( ) ≥ 2− and since v 0 = 1 in [2 − , +∞[, we obtain the latter convergence by writing the integral as a sum ∫ | |≤d |v − 1| 2 d + ∫ | |≥d |v − 1| 2 d . The convergence of v in H 1 (Y ) is a strong convergence since ∇ ′ v converges strongly to zero in L 2 (Y ) according to the estimate (3.2) in which the right hand side is bounded for k = 1 or k = +∞ so that ∇ ′ v is bounded in ( L 2 (Ω) ) 2 by C where C is a constant. Passing to the limit in (1.9) In this subsection, we use the convention on repeated indices. The indices , take the values 1 or 2. We take a test function in (1.9) in the form (ū(x 3 ) +w)v ( ) whereū ∈ H 1 0 (0, L) andw ∈ H 1 D (Ω). Clearly, such test function is admissible in view of the properties of v . For the sake of brevity, we denote the entries of the matrix by A instead of A ( , x 3 ) while the derivative with respect to x i (i=1,2,3) will be denoted by i instead of ∕ x i . We write explicitly the left hand side of (1.9), and we get (3.4) Note that due to the fact that v = 0 in T , the integral over Ω in (1.9) reduces to an integral over Ω. We pass to the limit → 0 in each term of the right hand side of (3.4). 
Note also that (3.2) implies that ∇ ′ v converges strongly to zero in L 2 (Y ) so that one can assume that for a subsequence of , v converges strongly to 1 in H 1 (Y ). Using the latter together with the convergences (2.2) and (2.3), we get the following limits: Hence, it remains to compute the limits of two integrals; for the first one, using the above change of variables, the definition of v , the continuity of the matrix A with respect to , the second a priori estimate of (2.4), we get easily We now deal with the last limit arising in the right hand side of (3.4), namely, In view of (2.11), (3.1), and (3.3), the following equality holds true: . (3.10) Hence, using again the change of variables = d 2−r z in (3.9), we can compute the limit (3.9) to get with the help of (3.14) and (2.5) where we have set = 1 if k = 1 and = 0 if k = +∞. We continue the proof assuming that k = 1 and thus = 1 since choosing v 1 = 0 in the definition (3.11), we see that the limit (3.11) reduces to zero and therefore the limit problem in the case the limit problem in the case k = +∞ which corresponds to = 0 is the following equation: ) , ) . (3.12) At this stage of the proof, we have proved thanks to (3.4), (3.5), (3.6), (3.7), (3.8), and (3.11) that the passing to the limit in (1.9) in the case = 1 leads to the following equation: Due to the fact that v 0 and v 1 vanish in [0, 1] according to (3.1), one can see that the second integral of the right hand side in (3.13) reduces to an integral over (1, 2) with respect to r; moreover, we can extend equation (3.13) by density to all ) . Hence, setting Equation (3.13) takes the following form: On the other hand, for allv 0 ∈ H 1 0 (1, 2), the function v 0 +v 0 satisfies the same hypotheses as those satisfied by v 0 ; therefore, one can deduce from (3.15) that for allv 0 ∈ H 1 0 (1, 2), the following system holds true: a.e. x 3 ∈ (0, L), d dr (3.16) so that a.e. x 3 ∈ (0, L), ) . (3.17) Hence, the pair (u 0 (.x 3 ), u 1 (., ., x 3 )) ∈ L 2 ( 0, L; H 1 (1, 2) ) × L 2 ( 0, L; L 2 (1, 2; H 1 (1, 2) ) is a solution of (3.17), and moreover, it satisfies u 0 (x 3 , 2) = u(x 3 ), a.e. in (0, L). The uniqueness of the solution (u s 0 , u s 1 ) of (3.17) satisfying u s 0 (2) = s for a fixed s ∈ R allows to conclude that the pair (u 0 (.x 3 ), u 1 (., ., x 3 )) is given by (u 0 (.x 3 ), u 1 (., ., x 3 )) = (û 0 ,û 1 )u(x 3 ) where (û 0 ,û 1 ) is the unique solution of (1.15). It remains to prove the corrector result which is the purpose of the following subsection. Proof of the corrector result We consider the critical case k = 1; the case k = +∞ is simpler, and the calculations are similar. We introduce the sequence and we will prove that lim →0 I = 0; the coerciveness of the matrix A will then allow to obtain the corrector result stated in (1.5) since w is given by (3.19). Note that in view of the regularity hypothesis on the solution (û 0 ,û 1 ) of (1.15), one can check easily that w strongly converges to 1 in H 1 (Ω). The latter convergence property of w will be used several times in the calculation of the limit of I . We will also use (2.11) and the following expression derived from the definition of w given in (1.17): , (3.22) where z and are defined in (1.11) and (1.14), respectively. In order to simplify the exposition, we perform the calculations assuming the matrix A to be symmetric which allows us to gather some terms, but the calculation without this assumption is quite similar. 
We split the last integral into several parts, and we get (3.23) We discuss in detail the limit of the second integral in the right hand side of the last equality, the limits of the other integrals are studied in a similar way, and we will only indicate these limits. Recalling the definition (1.8) of the operator ∇ , we get . (3.24) Thanks to the change of variable = d 2−r z, (2.11), (3.22), the property (u 0 , u 1 ) = (û 0 ,û 1 )u, (3.20) withū = u, and Proposition 2.1, we get For the other integrals in the right hand side of the last equality in (3.24), we get easily by the use of Proposition 2.1 and the strong convergence of w to 1 in H 1 (Ω), (3.26) For the third integral arising in the last right hand side of (3.23), we use once again the fact that ∇ ′ w converges strongly to zero in L 2 (Ω). Hence, we have (3.27) Finally, we compute the limit of the last integral of (3.23), and we get through similar calculations of that used in (3.25) (3.28) Summarizing the above limits and using the following property derived from (3.18) by choosingū = 0 andw = w, we get lim →0 I = 0. Under the additional assumption u ∈ H 2 0 (0, L), the function u − (u + w)w vanishes over Γ D defined by (2.7) in such a way that the Poincaré inequality allows to deduce the convergence (1.24) from (1.23). The proof of Theorem 1.5 is now complete. ACKNOWLEDGEMENT There are no funders to report for this submission.
Quantitative Assessment of Radionuclide Uptake and Positron Emission Tomography‐computed Tomography Image Contrast Radionuclide uptake and contrast for positron emission tomography-computed tomography (PET-CT) images have been assessed in this study using NEMA image quality phantom filled with background activity concentration of 5.3 kBq/mL fluorodeoxyglucose (F-18 FDG). Spheres in the phantom were filled in turns with water to mimic cold lesions and FDG of higher activity concentrations to mimic tumor sites. Transaxial image slices were acquired on the PET-CT system and used for the evaluation of mean standard uptake value (SUVmean) and contrasts for varying sphere sizes at different activity concentrations of 10.6 kBq/mL, 21.2 kBq/mL, and 42.4 kBq/mL. For spheres of same sizes, SUVmean increased with increase in activity concentration. SUVmean was increased by 80.6%, 83.5%, 63.2%, 87.4%, and 63.2% when activity concentrations of spheres with a diameter of 1.3 cm, 1.7 cm, 2.2 cm, 2.8 cm, and 3.7 cm, respectively, were increased from 10.6 kBq/mL to 42.4 kBq/mL. Average percentage contrast between cold spheres (cold lesions) and background activity concentration was estimated to be 89.96% for the spheres. Average contrast for the spheres containing 10.6 kBq/mL, 21.2 kBq/mL, and 42.4 kBq/mL were found to be 110.92%, 134.48%, and 150.52%, respectively. The average background contrast variability was estimated to be 2.97% at 95% confidence interval (P < 0.05). Introduction tissue uptake of FDG that is of interest. The two most significant sources of variation that occur in practice are the amount of injected FDG and the patient size. [7] The practice of using standard uptake value (SUV) thresholds for diagnosis is known to be affected by a number of factors, which does not make it wholistically acceptable worldwide. These factors have been discussed extensively by a number of researchers. [1,4,[8][9][10][11][12][13][14][15] Two common reasons for the inconsistent use of SUVs in practice are: (i) Accurate staging and diagnostic information do not have to depend upon accurate image quantification, since the relative image content or appearance is often sufficient for such purposes, [16] and (ii) measured SUVs have large degree of variability due to physical and biological sources of error, as well as inconsistent and non-optimized image acquisition, processing and analysis. [1,[17][18][19][20][21] However, the use of SUV as a measurement of relative tissue/organ uptake facilitates comparisons between patients, and has been suggested as a basis for diagnosis. [7,22,23] Standard uptake value, which is a simplified measure of radionuclide uptake, is until date thought to be the most widely used method for the quantification of F-18 FDG PET studies, although other methods have been developed as well. [24,25] Tomographic image quality of PET-CT images is determined by a number of different performance parameters, primarily the contrast and spatial resolution, scanner sensitivity, tomographic uniformity, and the process that is used to reconstruct the images. [26] Due to the complexity of variation in uptake of radiopharmaceuticals and the large range of patient sizes and shapes, the characteristics of radioactivity distributions often vary greatly and a single study with a phantom cannot simulate all clinical imaging conditions. However, such studies give some indications of image quality for a particular imaging situation that could be reproduced on different scanners at different times. 
The study of image quality (contrast) follows closely the NEMA NU2-2007 recommendations. [27] Image contrast is assessed to produce images simulating those obtained in a total body imaging study involving both hot and cold lesions. Radioactivity is present outside the PET scanner to mimic out-of-field radioactivity, and spheres of different diameters are imaged in a simulated body phantom with nonuniform attenuation. Image quality is assessed by calculating image contrast and background variability ratios for both hot and cold spheres. Materials and Methods Image quality phantom (IEC/NEMA 2001 body phantom, Middleton, Wisconsin, USA) [28] was used for the study. The phantom container was filled with 5.3 kBq/mL F-18 FDG to serve as background activity. This activity concentration is an approximation of the typical background uptake observed on clinical data and hence, is recommended for the use of NEMA phantom devices. [26,29] Step 1 Five spheres in the phantom (with diameters 1.3 cm, 1.7 cm, 2.2 cm, 2.8 cm, and 3.7 cm) were filled with tap water to mimic cold lesion imaging. The phantom was set up and aligned in a supine position on PET-CT system (Biograph 40, Siemens, Memphis, Tennessee, USA) for imaging as shown in Figure 1a. PET-CT images of the phantom were acquired in 3 min (one bed position) and displayed in 512 × 512 matrix. Transaxial image slice centering on the spheres as shown in Figure 1b was selected for the analysis. Step 2 The spheres were emptied and refilled with F-18 FDG of concentration 10.6 kBq/mL such that the ratio of concentration between spheres and background was 2:1. Images were acquired in the same setup and acquisition conditions as in step 1. Step 3-4 The procedure was repeated for activity concentrations of 21.2 kBq/mL and 42.4 kBq/mL such that the ratios of concentration between sphere and background were 4:1 and 8:1, respectively. Figure 1c indicates hot spots from the higher activity concentration in spheres. Standard uptake values (SUVs) and image contrasts (Q) for the different activity concentrations in the spheres were assessed by drawing regions of interest (ROIs) over the respective target areas, and employing equations 1-4. Four transaxial image slices (one from each of the four steps) centering on all spheres were used for the analysis. Activity Concentration in ROI (kBq / mL) SUV = Injected Activity (MBq) / Body Weight (kg) (1) Percentage contrasts for hot and cold spheres were estimated by equations (2) and (3) where C H denotes the counts of activity in ROI for hot sphere and C B denotes average counts of activity in corresponding background ROIs; a H is activity concentration in the hot sphere and a B is activity concentration in the background; C C denotes the counts of activity in ROI for cold sphere. Percentage background contrast variability (N) was estimated from equation 4. [26] where K is the number of background ROIs (i.e. 48). Results The study was performed to assess the quality of PET-CT images by simulating those obtained in total body imaging involving hot and cold lesions. Four transaxial image slices corresponding to cold sphere (water) and hot spheres (10.6 kBq/mL, 21.2 kBq/mL, and 42.4 kBq/mL) were used in examining the trends of radionuclide uptake and image contrasts in the study. SUV, which indirectly characterizes the level of radionuclide uptake in tissues, was used as a measure to analyze the quantities of radionuclide activity in spheres of different sizes in the image quality phantom. 
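In standard NEMA NU 2-style notation (which this study states it follows), equations (1)-(4) presumably take the forms below; C H, C C, and C B denote ROI counts for hot spheres, cold spheres, and background, a H and a B the corresponding activity concentrations, and K = 48 the number of background ROIs. The exact symbols may differ from the authors' own display.

```latex
\mathrm{SUV} \;=\; \frac{\text{activity concentration in ROI (kBq/mL)}}
                        {\text{injected activity (MBq)}\,/\,\text{body weight (kg)}}            \quad (1)

Q_H \;=\; \frac{C_H/C_B - 1}{a_H/a_B - 1}\times 100\%                                            \quad (2)

Q_C \;=\; \Big(1 - \frac{C_C}{C_B}\Big)\times 100\%                                              \quad (3)

N \;=\; \frac{\mathrm{SD}_B}{C_B}\times 100\%, \qquad
\mathrm{SD}_B \;=\; \sqrt{\frac{\sum_{k=1}^{K}\big(C_{B,k}-C_B\big)^2}{K-1}}                      \quad (4)
```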
The image quality phantom mimics a human being with varying tumor sites and sizes. For uniform activity distribution with no excretion, SUV = 1 assuming tissue density of 1 g/cm 3 . Mean SUV (SUV mean ) was used as quantity for measurement in this study because its value does not change significantly with image reconstruction factors such as matrix size, number of subsets, and iterations, compared to maximum SUV (SUV max ). [30] Uptake values (SUV mean ) for the different activity concentrations and respective sphere sizes are presented in Table 1 and graphically shown in Figure 2. Pixel intensity plot from image J, along the central point of a hot spot in the transaxial images is indicated in Figure 3. The percentage contrast estimates are presented in Table 2. Variation of percentage contrast with the sphere's activity concentrations is presented in Figure 4 while the variation of percentage contrast with spherical sizes is presented in Figure 5. The background contrast variability estimates for different ROI sizes are presented in Table 3 and graphically presented in Figure 6. Discussion For spheres of same sizes, SUV mean increased with increase in activity concentration [ Figure 2]. between the highest and least recorded SUV mean . Table 1 presents detailed SUV data for the study. From Figure 3, in the region around the central point in the hot spot, pixels have intensities of approximately 250 units, and background activity has pixel intensities of less than 100 units. Image pixel intensities have a linear relation with activity concentration; hence, the uneven nature at the peak of the curve implies imperfect uniformity of activity distribution in the sphere. Deviations of pixel values at the peak of the pixel plots for hot spots in the study were all within 10%, signifying a relatively uniform concentration. Image contrast evaluation between hot or cold spots (lesions) and background radionuclide activity in acquired images reveal to some extent the level of image quality produced on PET-CT systems. Figures 4 and 5 show the percentage contrast variabilities with the sphere's activity concentrations and sizes, respectively [ Figure 4 is projected to assume a plateau shape beyond 42.4 kBq/mL where the radionuclide activity concentration ratio between sphere and background is 8:1. For spheres of diameter 1.3 cm, 1.7 cm, 2.2 cm, 2.8 cm, and 3.7 cm, contrast increased by 79.7%, 65.8%, 64.3%, 63.9%, and 63.8% between a cold lesion (water) of no radionuclide activity and a hot lesion of 42.4 kBq/mL, as depicted in Figure 5. By increasing the activity concentration, bigger sized spheres recorded relatively less increase in contrast compared to smaller spheres. This observation could be a result of the count density being high for smaller volumes compared to larger volumes. At constant activity concentration, contrast remains relatively the same for hot spheres of different sizes, an indication that tumors of different sizes but containing similar activity concentrations may likely record similar contrast values. Background contrast variability (N) estimation allows assessment of the accuracy of absolute quantification of radioactivity concentration in the uniform volume of interest inside the phantom. Contrast within the background activity distribution varied by approximately 3% as shown in Figure 6. 
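As a small, hedged illustration of how these quantities can be computed from ROI statistics, the Python sketch below follows equation (1) and the NEMA-style forms assumed above for equations (2)-(4); the function names and the example numbers are illustrative, not taken from the study.

```python
import math


def suv(roi_conc_kbq_per_ml, injected_activity_mbq, body_weight_kg):
    """Equation (1): ROI activity concentration over injected activity per body weight."""
    return roi_conc_kbq_per_ml / (injected_activity_mbq / body_weight_kg)


def hot_sphere_contrast(c_hot, c_bkg, a_hot, a_bkg):
    """Percent contrast of a hot sphere (NEMA NU 2-style form assumed for equation 2)."""
    return (c_hot / c_bkg - 1.0) / (a_hot / a_bkg - 1.0) * 100.0


def cold_sphere_contrast(c_cold, c_bkg):
    """Percent contrast of a cold sphere (form assumed for equation 3)."""
    return (1.0 - c_cold / c_bkg) * 100.0


def background_variability(bkg_roi_counts):
    """Percent background variability over K background ROIs (form assumed for equation 4)."""
    k = len(bkg_roi_counts)
    mean = sum(bkg_roi_counts) / k
    sd = math.sqrt(sum((c - mean) ** 2 for c in bkg_roi_counts) / (k - 1))
    return sd / mean * 100.0


# Illustrative call with made-up ROI statistics (8:1 sphere-to-background activity ratio):
print(hot_sphere_contrast(c_hot=620.0, c_bkg=100.0, a_hot=42.4, a_bkg=5.3))
```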
Using ROIs of different sizes (1.3 cm, 1.7 cm, 2.2 cm, 2.8 cm, and 3.7 cm), the average background contrast variability was estimated to be 2.97 ± 0.1% (2.85-3.08%) at 95% confidence interval with P < 0.05 [ Table 3]. The tolerance level of ±5% relative to baseline estimates for contrast and background variability recommended by the IAEA Human Health Series 1 [26] could not be assessed due to unavailable baseline data. Conclusion The study has analyzed SUVs and contrast of PET-CT images with the use of NEMA image quality phantom. For same sized spheres, SUV mean increases with increase in activity concentration, affirming that radionuclide uptake values correspond proportionally to the concentration of activity in organs or tissues. Radionuclide activity concentration was also shown to have linear relation with image pixel intensities. Contrast between tumor sites (hot lesions) and background activity distribution increases with increasing activity concentration in the tumor but the contrast is likely to plateau beyond certain concentration ratios between the tumor and background activity. At constant activity concentration, contrast remains relatively the same for tumors (hot lesions) of different sizes. Background contrast variability has also been determined to be approximately 3%, indicating a very good uniformity within the background activity concentration.
2018-04-03T05:25:26.725Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "3bc134d2794e8ede8c497ee11b80becc7222a665", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/1450-1147.174702", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3bc134d2794e8ede8c497ee11b80becc7222a665", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235490494
pes2o/s2orc
v3-fos-license
Step Out of Your Comfort Zone: More Inclusive Content Recommendation for Networked Systems Networked systems are widely applicable in real-world scenarios such as social networks, infrastructure networks, and biological networks. Among those applications, we are interested in social networks due to their complexity and popularity. One crucial task on a social network is to recommend new content based on special characteristics of the graph structure. In this project, we aim to enhance recommender systems by preventing the recommendations from leaning towards contents from closed communities. To counteract the bias, we consider information dissemination across the network as a metric to assess the recommendation of contents, e.g., new connections and news feeds. We use an academic collaboration network and user-item interaction datasets from Yelp to simulate an environment for connection recommendations and to validate the proposed algorithm. INTRODUCTION Graphs have been utilized in a wide variety of practical applications, including but not limited to social networks, biological networks, and interconnected large-scale systems. In the applications of social networks, graphs are composed of individuals and their relationships with others, represented as nodes and edges. Similarly, warehouses and routes of a supply chain network can be modeled as a directed graph with nodes and edges. For those graph-based systems, especially for social networks, one important application is how to accurately recommend contents or connections for existing users. Extensive research has been conducted to establish reliable and efficient recommender systems. For example, Konstas et al. [11] introduce a collaborative filtering recommendation system. In their work, they utilize network information, including personal preferences and users' underlying communities, to enhance the performance of the recommender system. Yang et al. [24] demonstrate that the accuracy of the recommendation system can be increased by learning category-specific social trust circles from the network data. Moreover, Fan et al. [2] apply a graph neural network to predict user-item ratings. The model separately learns the user and item latent variables from the two user-user and user-item graphs, and trains the predictor on these latent variables. All the aforementioned works try to improve the recommender systems' performance by applying additional information from the graphs, such as communities of users, trust circles, as well as users' interactions. Nonetheless, such additional information tends to be community-specific; as a consequence, users would likely connect within their community or receive heavily homogeneous recommendations.
The cycle of a user interacting based on recommendations and recommendations constructed from the user's interactions leads to an undesirably disconnected graph. The overall utility of such recommendation is questionable: out-of-community recommendations can be more preferable for users. Furthermore, the dissemination of information should not be limited by specific communities identified. For example, on Youtube, a user may prefer to have coverings for the same subject from different perspectives; or on TikTok, the user may want to discover new contents different from his/her "learned" interests. As a result, in this project, we want to avoid the over-constrained recommender systems by considering information dissemination crossing the entire network via user-item interaction graph. Information dissemination has been used as a measure in the literature to evaluate the structure of the graph. For instance, Tong et al. [19] have demonstrated an algorithm to increase the information dissemination by adding edges to the graph. However, it assumes that all edges have the same addition cost, which is false in the recommendation problem; in particular, target users have a preference for each connection. Generalizing the idea for the social network settings, out-of-community connections should be recommended based on information dissemination metrics together with the likelihood of users' interactions. Thus, we adopt DiffNet Wu et al. [22] to formulate an appropriate optimization model to recommend potential user-item connections based on social network or user-user interactions (Section 2). We also study information dissemination in existing graphs and propose an integration of dissemination into aforementioned optimization problem to train DiffNet in order to optimize recommendation accuracy and dissemination capability simultaneously (Section 3). We then empirically study the effect of dissemination factor and draw DiffNet's Pareto front on accuracy-dissemination trade-off (4). In general, we are able to formulate a recommender algorithm to smoothly incorporate information dissemination as an additional criterion. NETWORK DIFFUSION FOR RECOMMENDATION As for traditional recommendation tasks, many studies have been conducted to find sophisticated user and item embeddings in latent space and thus to directly predict users' preferences based on the embedded representationsGuo et al. [6], He et al. [8], Rendle [16]. However, the performance of the embedding based recommendation technique is usually deteriorated because of the sparsity of the user-item interactions. This drawback motivates the research about how to incorporate external information other than user/item embeddings to enhance the recommendation system. For instance, with the abundant information from the social networks, interactions between the users can be utilized to learn the user's preference towards a specific item even tough the user-item interaction of such a pairwise relation is missing. The intuition is that people who are in close contact may share similar interests and such social network information can be utilized to tackle the data sparsity issue. Thus, how to leverage the underlying information in the social networks becomes a promising direction in the recent literature Guo et al. [4], Jiang et al. [10], Tang et al. [18]. Among all those social network aided recommendation frameworks, in this study, we first adopt the influence diffusion neural network (DiffNet) proposed by Wu et al. 
[22] to analyze the applicability of using social network information in the recommender system. The essential component of DiffNet is to capture the user's interest and to model the propagation of the user's influence to his/her neighbors k hops away. Such an iterative interest/influence diffusion process changes the latent user embeddings at each propagation step, and comprehending the precise user/item embeddings during the diffusion process is beneficial for improving the recommendation performance. In the following sections, we first briefly summarize the key components of DiffNet and demonstrate its original performance on a real-world dataset. DiffNet Architecture There are four major components of DiffNet: the embedding layer, fusion layer, influence diffusion layer, and prediction layer. The overall flowchart of DiffNet is depicted in Figure 1. Embedding layer: Similar to collaborative filtering, free embeddings of the users and items can be obtained from, e.g., matrix factorization or neural-network-based filtering. Let P ∈ R^(M×D) and Q ∈ R^(N×D) represent the embeddings of the users and items, where M/N is the number of users/items and D is the dimension of the latent space. These two free embeddings are treated as the initial inputs to DiffNet. Fusion layer: Besides the embeddings of the users/items, other useful features can also be used as input. For instance, by using the Word2vec model, features can be constructed from the embedding of each word in the reviews given to businesses on Yelp, the comments left on posts on Twitter, or the descriptions of images on Instagram. Once the corpus of a given dataset has been analyzed, the feature vector of each user/item can be constructed by averaging all the learned word vectors for that user/item. For user a, let x_a denote the feature vector. Then the free embedding p_a of user a (a row of P) can be fused with x_a by employing a simple neural network with one fully connected layer: h_a^0 = g(W^0 [x_a, p_a]), where W^0 is the weight, g is a nonlinear function (e.g., a fully connected layer), and x_a and p_a have been concatenated first. Similarly, the fusion layer for item i can be expressed as v_i = g(W_v [y_i, q_i]). Influence diffusion layer: After we obtain the initial fused user embedding h_a^0, we need to diffuse the user's influence information in the social network. The influence diffusion layer models this dynamic diffusion process for the user's latent preference. The process can be represented as a K-layer structure: each layer k takes the user embedding h_a^(k-1) from the previous layer and outputs the updated embedding h_a^k as the propagation goes on, and the propagation forwards by one hop at each layer. Specifically, the updated user embedding consists of two parts: the previous embedding and the influence diffusion from the trusted users at the current layer. Notice that, in a connected social network, a user trusts every other user if K is large enough. Let S_a denote the set of trusted users of user a; then the influence diffusion from trusted users for user a is h_{S_a}^k = Pool({h_b^(k-1) : b ∈ S_a}), where Pool is a pooling operation, e.g., taking the maximum of or averaging the input. Then the overall updated user embedding can be formulated as a nonlinear mapping: h_a^k = g(W^k [h_a^(k-1), h_{S_a}^k]), where again g is a nonlinear function and W^k is a trainable weight. Prediction layer: With both the user embedding h_a^K after K hops of diffusion and the fused item vector v_i on hand, the final user representation can be obtained from u_a = h_a^K + (1/|R_a|) Σ_{i ∈ R_a} v_i, where R_a is the set of items in which user a has shown interest.
This equation states that the latent representation of each user has two parts: the user embedding from the social network diffusion process, and the average item embedding from his/her known preferences in the dataset. This user embedding is more comprehensive because it considers both the information of the social network and the user's historical preferences. In order to make predictions for potential user-item interactions, we can adopt the traditional approach of taking the inner product of the item embedding and the final user representation: r̂_ai = v_i^T u_a. This r̂_ai can be interpreted as the predicted probability of having a connection between user a and item i. In this way, based on the recommendation prediction results, a new graph G := (V, E), with V := {A, I} (the users and the items) and E ⊆ A × I, can be established. Numerical Study To evaluate the effectiveness of DiffNet for recommending new contents, we utilize the well-known Yelp dataset [1]. This dataset consists of two parts: a social network with 17237 users and 129455 edges, and a matrix recording the ratings from those users for 37378 businesses. The user/item feature matrix X/Y is first derived from the Word2vec model by analyzing the reviews. The dimension of the features is set to 150 for both users and items. To evaluate the recommendation performance, we utilize two metrics: hit ratio (HR) and normalized discounted cumulative gain (NDCG). Given a top-N ranking list based on the recommendation result r̂, the HR measures the proportion of recommendations that the user truly likes. NDCG not only considers the occurrence of an appropriate recommendation but also takes the rank of each recommendation into consideration: the higher the rank of a correct recommendation, the larger the score. Notice that there are two important parameters that can significantly affect the performance of DiffNet: the dimension of the free embeddings P/Q and the top-N value in the evaluation metrics. Thus, in Table 1 we show the numerical results of HR and NDCG for different settings of the embedding dimension and the top-N cutoff. We include the recommendation performance of the traditional collaborative-filtering-based technique, i.e., SVD++ (Guo et al. [5]), as a comparison. Moreover, compared to other graph-diffusion-based approaches, e.g., the graph convolutional network (GCN) based PinSage (Ying et al. [25]), DiffNet provides better performance in terms of HR and NDCG: PinSage achieves an HR score around 0.30 with an embedding dimension of 32 and a top-10 ranking, and an NDCG of 0.18, both of which are worse than the results of DiffNet. RECOMMENDATION AND DISSEMINATION DiffNet enables a way to make recommendations by utilizing social information. A recommendation can be viewed as the establishment of a new interaction between users and items. However, such a recommendation may worsen the information dissemination of the network, e.g., repetitively recommending contents on the same topic to a user or confining the recommendations for a user to a specific category. To avoid recommendations with limited variety, the information dissemination capability of the graph formed from the recommendation results can be considered when evaluating the recommendation performance. Thus, in this section, we first investigate the unsaturated level of information dissemination in real-world graphs by comparing them to synthetic graphs of similar sizes in a connectivity optimization.
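As a brief aside before the dissemination analysis, the following is a minimal sketch of the two ranking metrics used above. Note that HR is normalized differently across papers (here, per held-out positive item), so this is an assumed convention rather than the exact evaluation script used in the experiments.

```python
import numpy as np

def hit_ratio_at_n(ranked_items, positives, n=10):
    """HR@N: fraction of held-out positive items that appear in the top-N list."""
    top_n = set(ranked_items[:n])
    return len(top_n & set(positives)) / len(positives)

def ndcg_at_n(ranked_items, positives, n=10):
    """NDCG@N: discounted gain of the hits, normalized by the ideal ordering."""
    dcg = sum(1.0 / np.log2(rank + 2)            # rank is 0-based
              for rank, item in enumerate(ranked_items[:n])
              if item in positives)
    ideal_hits = min(len(positives), n)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# toy example: five items ranked by predicted score, two true positives
print(hit_ratio_at_n([5, 2, 9, 1, 7], positives=[2, 4], n=5))   # 0.5
print(ndcg_at_n([5, 2, 9, 1, 7], positives=[2, 4], n=5))
```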
Later, motivated by the unsaturated information dissemination capability, we propose a way to incorporate the dissemination capability of the network as an additional metric for recommendation tasks. Information Dissemination Information dissemination describes a characteristic of a graph in a stochastic process such as disease infection, news spreading, or data broadcasting. The prominent models for studying these processes are compartmental models, which consider an ordinary differential equation (ODE) between the states of nodes with interactions arising from the edges of the graph. As a simple variation, Susceptible-Infectious-Susceptible (SIS) models susceptible-to-infectious interaction via edges and probabilistic recovery back to susceptible. Despite the complexity of the process, Wang et al. [21] have discovered that information dissemination can be summarized by one parameter: the largest eigenvalue λ1 of the adjacency matrix. Moreover, the largest eigenvalue also indicates whether a spreading would evolve into an epidemic or disappear over time (Prakash et al. [15]). To this end, we study the selected datasets from the information dissemination perspective. In this report, we include two datasets: the Yelp social graph [1] and the collaboration network of Arxiv High Energy Physics Theory (CA-HEPTH) (Leskovec et al. [12]). To avoid an ill condition in the SIS model, we extract the largest connected subgraph from each dataset and remove all other connected components. The subgraphs contain 99.4% and 87.5% of all nodes, respectively. Table 2 shows the summary of the dataset sizes and the information dissemination parameters λ1. [Table 1: Testing results of DiffNet on the Yelp dataset after 500 epochs of training, for two types of hyperparameters: (1) the dimension of the free embeddings P/Q with the top-N cutoff fixed at 10, and (2) the top-N ranking cutoff for the predicted recommendations with the dimension fixed at 16.] Both Yelp and CA-HEPTH generally follow the power-law degree distribution, but the Yelp dataset contains some nodes with extremely high degrees. We compare these two real-world graphs with 4 synthetic graphs with similar numbers of nodes and edges. The Stars Topology refers to a graph where a few nodes connect to all other nodes; we connect as many star edges as we can to match the number of edges in the original graph. Given nodes labeled with consecutive numbers in a ring, the Next-K Neighborhood links each node to its adjacent neighborhood such that the total number of edges matches. The Erdős-Rényi model uniformly samples all possible edges. Finally, the Barabási-Albert model generates a graph with a power-law degree distribution by sampling edges with preferential attachment. We generate these 4 synthetic graphs for each dataset size, totaling 8 generated graphs. Alongside the original graphs, we also list the summary of these synthetic graphs in Table 2. SIS simulations (Miller and Ting [14]), shown in Figure 3, empirically present the information dissemination process in all 10 graphs. Overall, the convergence rate of the infected fraction is consistent with the largest eigenvalues shown earlier: the higher the λ1 of the graph, the faster the initial convergence rate. The Stars Topology stands as the most information-disseminating graph. This finding is consistent with intuition: information is much easier to propagate in this setting, since information from nodes with extremely high degrees can be quickly propagated to their numerous neighbors just one hop away.
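The λ1 values used throughout this comparison can, in principle, be reproduced with a few lines of sparse linear algebra. The following is a minimal sketch (not the authors' code) using SciPy; the edge list and node count are whatever the dataset loader provides.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def largest_eigenvalue(edges, n_nodes):
    """Build a symmetric sparse adjacency matrix from an undirected edge list and
    return its largest eigenvalue, the dissemination / epidemic-threshold parameter."""
    rows, cols = zip(*edges)
    data = np.ones(len(edges))
    a = sp.coo_matrix((data, (rows, cols)), shape=(n_nodes, n_nodes))
    a = (a + a.T).tocsr()
    a.data[:] = 1.0                     # clamp duplicate / reciprocal entries to weight 1
    return float(eigsh(a, k=1, which="LA", return_eigenvectors=False)[0])

# toy check: the largest adjacency eigenvalue of a 4-node ring is 2
print(largest_eigenvalue([(0, 1), (1, 2), (2, 3), (3, 0)], n_nodes=4))
```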
And the gap between the spreading in the real-world graphs and the artificial Stars Topology signifies some room for improvement in the real-world network topology. To further show that some of these graphs could have a wider spreading result, we replicate the Gelling algorithm from Tong et al. [19] to suggest the edges that would improve information dissemination the most. We choose to implement the full O(n^2) version of the proposed algorithm, given that the dataset size is tractable. Figure 4 outlines the increasing largest eigenvalues after adding edges via the Gelling technique. When we add 10^4 edges, the information dissemination capability of the graph from the Yelp dataset doubles, while that of the CA-HEPTH dataset improves by approximately 4 times. Similar trends can be observed in the other synthetic graphs as well, but the effect is insignificant in the Stars Topology. Dissemination-aware Training Now we see that there is a gap between an ideally disseminating graph and the existing ones. One can be tempted to substitute the recommender with a dissemination optimizer; however, the most influential edge for dissemination is not guaranteed to be the best edge for the recommendation task, i.e., obtaining the highest HR or NDCG. Thus, our goal is to both accurately predict connections and improve dissemination in the graph, so as to improve users' exploration of new information. To recommend new user-item pairwise relations via a neural network model, the most common approach is to use the cross-entropy loss to quantify the difference between the predicted edge-connection probability and the true edge-connection status. In this case, the edge of interest is the predicted user-item interaction that has not been seen yet. However, we later find that using the mean squared error (MSE) loss can significantly improve the performance of the model in terms of HR/NDCG. The objective of the dissemination-aware recommendation model is shown in Equation 7, where d(a, i) is the dissemination score and f(a, i) is a dissemination factor between user a in distribution A and item i in distribution I. In this study, f(a, i) takes the form given in Equation 8. We assume a uniform distribution over the user index. To mitigate the negative or unknown interactions dominating the training process, we construct an item distribution biased towards positive interactions, similarly to other preceding evaluations (He et al. [8], Wu et al. [22]). Specifically, we use all the positive ratings from each user provided in the Yelp dataset as the true positive connections, but we uniformly sample only 8 of the remaining possible user-item connections for training and assign them as the negative interactions (since those connections have not been observed in the real dataset). For evaluating the recommendation performance, we select 1000 users randomly and prepare the positive/negative connections in the same manner to compute HR and NDCG. Another important term introduced in Equation 7 is the dissemination score d(a, i). To obtain the score for each possible user-item pairwise relation, before training, we first combine the user-user social network and the user-item interactions and derive an (M + N)-by-(M + N) adjacency matrix, where M and N are the numbers of users and items. The first eigenvector z ∈ R^(M+N) of this aggregated adjacency matrix is then precomputed, and during the training process we can use this eigenvector to compute the dissemination score d(a, i) in Equation 8. Notice that the constant weight in Equation 8 is an additional hyperparameter.
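Because Equations 7 and 8 are not reproduced legibly in this text, the sketch below is only one possible reading of them: the score d(a, i) is taken as the product of the leading-eigenvector entries of user a and item i (in the spirit of the eigenvalue-perturbation argument behind Gelling), and the constant weight enters through a sigmoid so that a very negative weight recovers the plain MSE loss. Every name and the exact functional form here are assumptions for illustration, not the paper's definition.

```python
import torch

def dissemination_aware_loss(pred, target, user_idx, item_idx, z, n_users, c):
    """Hypothetical dissemination-aware objective.

    pred, target : predicted and observed interaction labels for a batch
    z            : precomputed leading eigenvector of the (M+N)x(M+N) joint adjacency matrix
    c            : the interpolation hyperparameter (c -> -inf reduces to plain MSE)
    """
    mse = torch.mean((pred - target) ** 2)
    score = z[user_idx] * z[n_users + item_idx]     # assumed d(a, i), one value per pair
    dissemination = torch.mean(pred * score)        # reward edges that would raise lambda_1
    w = torch.sigmoid(torch.tensor(float(c)))
    return (1.0 - w) * mse - w * dissemination
```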
It can be interpreted as an interpolation weight between the original MSE loss, which does not consider dissemination, and its fully dissemination-aware version, obtained as the weight tends to −∞ and +∞, respectively. In our experiment, we have analyzed the effects of this hyperparameter and varied it to trace out the accuracy-dissemination trade-off. EXPERIMENT RESULTS Besides the established metrics for evaluating the recommendation performance, e.g., HR and NDCG, we also need to analyze the information dissemination capability of the graph formed after obtaining the recommendation results. The dissemination performance is quantified by the largest eigenvalue λ1 of the graph. The original Yelp dataset has been split into train, validation, and test datasets in such a way that each partition contains non-overlapping user-item interactions. The test dataset has around 18000 records corresponding to 10622 users and 11948 items, which form a sparse network that can be constructed quickly. The original λ1 of the social network for the testing data is 70.47. To compare the information dissemination capability of the new networks, we have trained the dissemination-aware DiffNet for 500 epochs. We then run the trained model on the testing dataset, obtain the recommendation results, and expand the user-user social network by adding user-item edges. For any user-item pair, if the predicted r̂ is higher than 0.5, a corresponding edge (a, i) will be added to the original graph. Table 3 summarizes the numerical results for the dissemination-aware DiffNet on the testing Yelp dataset with different dissemination weights. We have considered HR/NDCG as well as the largest eigenvalue of the graph to evaluate the recommendation performance, and the performance of the original DiffNet without any dissemination term is denoted by a weight of −∞. It is not surprising to see that the recommendation accuracy deteriorates after introducing the dissemination term in the objective function, while the largest eigenvalue increases significantly. For all weight levels, the numbers of newly introduced edges, i.e., predicted user-item connections, are all around the same level. However, the recommendation accuracy and the dissemination capability achieved under different settings are quite different. This indicates that the final topologies of the user-item graph are different. The final dissemination scores in some cases are even worse than those of the original social network without any recommendation, e.g., when the weight is ≤ −1. [Figure 5: Visualization of the trade-off between the recommendation accuracy and the information dissemination capability of the network after realizing the recommendation results.] This is due to the fact that most of the predicted recommendations connect a new item to existing users as an end node, which has a low nodal degree in the resulting graph. Based on the numerical results in Table 3, we can see a clear trade-off between the recommendation accuracy and the information dissemination capability induced by the recommendation algorithm. To better comprehend the trade-off, Figure 5 illustrates the changes in HR/NDCG and the λ1 obtained from the new user-item graph. For both HR and NDCG, the numerical results decrease with increasing λ1. This observation is consistent with the objective function, since a larger weight makes the recommendation model focus more on establishing edges that maximize information dissemination, while a smaller weight leads to a model similar to the vanilla DiffNet.
From Figure 5, we can see that the decreasing trend of the recommendation accuracy is not linear, and there is a sharp change when the weight is set to approximately 0. This could be the result of expressing the dissemination factor as an exponential term. The plot suggests that a weight between 0 and −1 is a good choice for such a recommendation task on the Yelp dataset, since the recommendation accuracy remains acceptable (even higher than the SVD++ results) while the information dissemination capability is maintained at a high level. RELATED WORKS Graph neural networks (GNNs) have recently found applications in recommender systems. Wu et al. [23] compile an extensive survey of these approaches based on the GNN structure (GCN, Fu et al. [3]; GraphSage, Hamilton et al. [7]; GAT, Veličković et al. [20]; and GGNN, Li et al. [13]), the temporal dependence (general recommendation versus sequential recommendation), and the information considered (user-item interactions, social network, knowledge graph). In terms of this taxonomy, DiffNet's concept of influence diffusion is closely related to the GCN approach. DiffNet performs general recommendation and so assumes that the recommendation is invariant to time. It also relies on the social network to help learn user-item interactions. Apart from dissemination, many other works enhance the recommendation task with auxiliary properties to solve specific problems (Ricci et al. [17]). Many of these objectives align well with our interest in explorative recommendations. For example, serendipity measures how surprising successful recommendations are, which encourages recommendations outside the typical interactions. Novelty and diversity are also well-known desirable properties in recommender systems, which determine the distinction among the recommendations. Hurley and Zhang [9] explain various approaches to enhance existing frameworks towards these metrics. Instead of directly rewarding a diverse recommendation list, our algorithm focuses globally on the effect of successful recommendations on the user's future exploration, reflected in the dissemination term. CONCLUSION In this project, we have studied existing recommender systems utilizing graph information such as user-user and user-item interactions. To improve the effectiveness of the recommendations in terms of both accuracy and users' exploration, we have considered the information dissemination of the graph as an aid in the recommendation algorithm. Case studies based on the Yelp dataset have successfully demonstrated the advantage of the proposed framework: compared to baseline models such as SVD++ and PinSage, the recommendation accuracy has been maintained at desirable levels while the information dissemination capability of the formed user-item graph has been optimized. As for future directions, a more comprehensive diffusion process for the network can be utilized: not only can the user embedding evolve through the information propagation process, but the item embedding can also be iteratively refined.
2021-06-22T01:15:56.167Z
2021-06-19T00:00:00.000
{ "year": 2021, "sha1": "f9d7b5e225c9f8cacb350b57c1861197b5e4b548", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f9d7b5e225c9f8cacb350b57c1861197b5e4b548", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
86862522
pes2o/s2orc
v3-fos-license
Genotyping of Clinical Streptococcus agalactiae Strains Based on Molecular Serotype of Capsular (cps) Gene Cluster Sequences Using Polymerase Chain Reaction Background: Group B Streptococcus (GBS) is a major cause of serious, life-threatening infections including sepsis, pneumonia, and meningitis in neonates as well as in pregnant women. Capsular polysaccharide typing is significant and essential for epidemiological studies of GBS. Objectives: The aim of the current study was to differentiate genotypes of GBS clinical isolates based on the polymerase chain reaction (PCR) to acquire information about the distribution of GBS types in Hamedan, Iran. Methods: In this experimental study, we used sixty-two GBS clinical strains isolated from vaginal swabs (n = 16), urine cultures (n = 45), and blood culture (n = 1) of patients who had been referred to educational hospitals and private clinic centers during nine months from June 2013 to February 2014 in Hamedan, Iran, for genotyping using a multiplex PCR assay. Results: Among the 62 GBS isolates examined, all capsular types except for VI, VII and VIII were found. Types III and V were the most prevalent types with a sum of 46 isolates (74.2%). Type III was the predominant type with 35 (56.5%) isolates, followed by type V (11 isolates; 17.7%), type II (seven isolates; 11.3%), type Ia (five isolates; 8.1%), and types Ib and IV with a similar prevalence of two isolates (3.2%). Conclusions: The results of the current study demonstrated that type III is the predominant type in Hamedan, followed by types V, II, Ia, Ib, and IV, respectively. Use of a molecular serotyping (MS) method such as the PCR assay as an alternative to the conventional serotyping (CS) method leads to accurate, sensitive, specific, and fast typing of GBS isolates. Background Streptococcus agalactiae, also known as group B Streptococcus (GBS), is a major cause of serious, life-threatening infections including sepsis, pneumonia, and meningitis in neonates and young infants, as well as other serious infections in women during gestation and the postpartum period, individuals with diabetes, immunocompromised patients, and adults and the elderly. Mortality rates due to GBS infection in newborns are 4% to 6%, and higher in premature infants (1,2). Group B Streptococcus is classified into 13 variants (serotypes), based on recognized capsular polysaccharide (CPS) antigens (the capsular polysaccharide synthesis cps gene cluster), of which nine serotypes, i.e., Ia, Ib, and II-VIII, are considered clinically important. Based on reported studies from the USA and Europe, it has been shown that serotypes Ia, II, III and V are the main causes of human GBS disease, as these serotypes have been found in 80% to 90% of clinical isolates (3)(4)(5). Capsular polysaccharide typing is significant and essential for epidemiological studies of GBS and its pathogenesis, and also for other studies associated with GBS infections, including surveillance programs and vaccine development in the future. In fact, many attempts have focused on using CPS as immunoprophylactic antigens (4). According to investigations, it has been found that type III often appears in neonatal disease, followed by types Ia, Ib, II and V. In addition, these antigens are the cause of 96% and 88% of neonatal and adult diseases, respectively; therefore, a possible CPS vaccine must include these mentioned capsular antigens (6).
To ensure efficient development of vaccine, surveying worldwide distribution pattern of frequent serotypes will be important, in order to ensure involvement of the most appropriate bacterial components in one global GBS vaccine. Moreover, prevalence of GBS serotypes is age dependent and different between colonizing and invasive strains (7,8). Typing of bacteria is often required when two or several samples (strains) are suspected to have epidemio-logical relationship, for instance, in nosocomial or foodborne outbreaks. Another situation, in which typing is required can be epidemiological surveillance of one infectious disease after a certain period of time in order to follow disease development and design possible infection control approaches. Another application of typing methods can be comparison among strains of bacterial species isolated from one patient for differentiating pathogenic strains from nonpathogenic ones (7)(8)(9). Conventionally, GBS capsular typing is performed by serotyping (immunological) methods (10), however, complicated interpretation, high costs, and commercial availability of reagents for only a subset of serotypes are a limitation of these methods. Moreover, due to possible capsule operon mutations or rearrangements and the fact that encapsulation levels of GBS is different among strains particularly under experimental conditions, typing of such strains using immunological methods may fail or be difficult. Additionally, subjects such as those examining whether strains express capsular polysaccharide related genes (non-encapsulated) or express polysaccharide variants that fail to react with used antiserums, are nondistinguishable using these methodologies (11). Although appropriate phenotypic methods can be used for outbreak isolates during a short period of time, yet such methods are generally not adequate for evolutional studies, and it has been known that these methods lack adequate capability of discriminatory (9). Polymerase chain reaction (PCR)-based typing has been demonstrated that non-typeable strains resulted from immunological methods usually harbor the genetic information for synthesis these polysaccharides and consequently are typeable by PCR assay; in fact, molecular methods such as PCR assay offer more accurate and reliable typing of bacteria than phenotypic methods (12). On the other hand, in case of mutations in capsule synthesis genes, the molecular typing methods result in nontypeable strains (13); thereby CPS genotyping can be designed using both single PCR and/or multiplex PCRs (14-17). Objectives With regards to the mentioned issues, the aim of the present study was to determine capsular polysaccharide serotypes of clinical isolated GBS using multiplex PCR assay in Hamedan, for obtaining accurate information about distribution of GBS serotypes in this region. Collection and Identification of Isolates In this experimental study, a total of 62 GBS clinical strains (56 and 6 samples from female and male individuals, respectively) including vaginal swabs (n = 16, one sample from a non-pregnant woman and fifteen samples from pregnant women isolated from 203 collected specimens), urine cultures (n = 45), and blood culture (n = 1) isolates were collected from educational hospitals and private centers during nine months from June 2013 to February 2014 in Hamedan, Iran. All samples from male individuals were urine cultures. 
The study was done after obtaining informed consent from the participants and approval from the Hamadan University of Medical Sciences ethics committee with the following code: IR.UMSHA.REC.9304171965. The included individuals were women at 35-37 weeks of pregnancy with no clinical problems, referred for prenatal care at Fatemieh Hospital and/or private centers in Hamadan. The criterion for exclusion was being a pregnant woman at < 35 weeks of gestation. Information about age for each collected sample was recorded. Based on the Centers for Disease Control and Prevention (CDC) guidelines (1), processing of the vaginal swabs was performed as follows: specimen swabs were inoculated in Lim broth (Pronadisa Co, Spain) as a selective and enriched medium and incubated at 35 to 37°C in 5% CO2 (in a candle jar) for 18 to 24 hours, then subcultured on Trypticase Soy agar (TSA) (Merck Co, Germany) with 5% blood and incubated at 35 to 37°C in 5% CO2 for 18 to 24 hours. Conventional phenotypic methods including Gram staining, the catalase test, sodium hippurate hydrolysis, the bile esculin agar test, and the CAMP reaction were used for microbiological presumptive identification of the isolates. The presumptively identified isolates were subcultured on Trypticase Soy agar with 5% blood and incubated at 35 to 37°C in 5% CO2 for 18 to 24 hours to obtain single pure GBS colonies. These isolates were then inoculated in brain heart infusion (BHI) broth (Merck Co, Germany) containing glycerol and blood and preserved at −70°C until further use. DNA Extraction From Isolates DNA was extracted from the isolates using the alkaline lysis method with the following procedure: three or four colonies from an overnight culture of each isolate were suspended in a sterile microtube containing 60 µL of lysis buffer (0.05 N NaOH, 0.25% sodium dodecyl sulfate (SDS)), then vortexed and heated at 95°C for 15 minutes. Afterwards, 540 µL of TE buffer (50 mM Tris HCl, 1 mM EDTA, pH 8) was added to the microtube to dilute the obtained cell lysate. Subsequently, the microtube was centrifuged at 15000 rpm for five minutes to sediment the cell debris. The supernatant was transferred to a new sterile microtube and used for the PCR assay or frozen at -20°C until further use (17). Confirmation of Identified Isolates as Group B Streptococcus Using Polymerase Chain Reaction The PCR assay targeting the 780-bp atr gene (GenBank accession number: AF15135), specific for the S. agalactiae species, was performed as an internal positive control for confirmation or definitive identification of the isolates. The forward and reverse primer sequences were CAA CGA TTC TCT CAG CTT TGT TAA and TAA GAA ATC TCT TGT GCG GAT TTC, respectively (8). The PCR reaction volume was 20 µL, including 2 µL of bacterial DNA, 1 µL of forward primer, 1 µL of reverse primer, 10 µL of 2x Taq Premix-Master mix (Parstous Biotech Co, Iran), and 6 µL of sterile double distilled water. Amplification thermal cycles were as follows: an initial denaturation step for five minutes at 94°C, followed by 35 cycles at 94°C for 30 seconds, 55°C for 55 seconds, and 72°C for one minute, and a final extension cycle at 72°C for 10 minutes, using a Bio-Rad thermal cycler. For the positive and negative controls, S. agalactiae ATCC 12386 and Enterococcus faecalis ATCC 29212 were used, respectively. The PCR products and a 50-bp DNA size marker (Fermentase Co, USA) were run simultaneously on a 1.5% agarose gel stained with DNA safe stain (SinaClon Co, Iran) at 80 V for one hour.
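The 20 µL confirmatory reaction described above scales linearly with the number of isolates; before gel documentation (described next), a shared master mix is typically prepared in bulk. The following is a minimal illustrative helper only; the 10% overage for pipetting loss and the two control reactions are assumptions, not part of the published protocol.

```python
# Per-reaction volumes (µL) of the 20 µL confirmatory atr PCR described above.
PER_REACTION_UL = {
    "template DNA": 2.0,
    "forward primer": 1.0,
    "reverse primer": 1.0,
    "2x Taq master mix": 10.0,
    "ddH2O": 6.0,
}

def master_mix(n_isolates, controls=2, overage=0.10):
    """Shared master-mix volumes for n isolates plus controls, with pipetting overage.
    Template DNA is added per tube, so it is excluded from the shared mix."""
    scale = (n_isolates + controls) * (1.0 + overage)
    return {k: round(v * scale, 1) for k, v in PER_REACTION_UL.items()
            if k != "template DNA"}

print(master_mix(62))   # volumes (µL) for the 62 isolates plus 2 controls
```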
Finally, the agarose gel was visualized and photographed using a UV transilluminator (Vilbert Lourmat Co, Japan). Molecular Serotyping of Group B Streptococcus Isolates Using Multiplex Polymerase Chain Reaction Each isolate confirmed as GBS was examined by genotyping (molecular serotyping) using multiplex PCR assays targeting nine cps genes introduced by Poyart et al. (15). For this purpose, two reaction mixes were prepared. Mix (i) contained the primers for Ia, Ib, II, IV and V, and mix (ii) contained the primers for III, VI, VII and VIII. The first reaction mix contained 2.5 µL of bacterial DNA, 1 µL of each forward primer, 1 µL of each reverse primer, and 12.5 µL of 2x Taq Premix-Master mix (Parstous Biotech Co, Iran). The second reaction mix contained 2.5 µL of bacterial DNA, 1 µL of each forward primer, 1 µL of each reverse primer, 12.5 µL of 2x Taq Premix-Master mix, and 2 µL of sterile double distilled water. The final volume of each reaction mix was 25 µL. Amplification thermal cycles were as follows: an initial denaturation step for three minutes at 94°C, followed by 30 cycles at 94°C for 30 seconds, 58°C for one minute, and 72°C for one minute, and a final extension cycle at 72°C for five minutes. As with the aforementioned confirmatory PCR assay methodology, the PCR products were run on an agarose gel and then visualized and photographed using a UV transilluminator. The 50-bp and 1-kb DNA size markers (Fermentase Co, USA) were used in these assays. Statistical Analysis Correlations between genotypes and age groups (≤ 30 and > 30), as well as between genotypes and sample type, were analyzed with a comparison-of-means test (independent samples t test). The test of significance was two-tailed, and P < 0.05 was considered statistically significant. All analyses were performed using SPSS version 20 software. Results In the case of the confirmatory PCR assay, samples (PCR products) presenting an amplicon size of 780 bp were considered positive for GBS (Figure 1). All 62 presumptively identified isolates produced an amplicon of 780 bp and were thus confirmed as GBS strains. The results of genotyping using the multiplex PCR assay, as well as the correlation between sample type and genotypes, are presented in Table 1. All 62 GBS strains produced amplicons from the CPS-related primer pairs with genotypes of Ia, Ib and II-V (Figure 2). No amplicon size indicating genotypes VI-VIII was observed for any of the examined GBS strains. There were no significant correlations between age groups and genotypes (P = 0.963) (data not shown). Discussion The GBS capsule is a significant virulence factor and antigenic determinant; therefore, it is considered one of the main targets in the investigation for the development of a vaccine against GBS infections in the future. Thereby, correct capsular typing of clinical isolates is essential for predicting vaccine coverage, and consequently, applying sensitive and specific methods is required for achieving this purpose (18)(19)(20). The aim of the current study was to differentiate genotypes of GBS clinical isolates based on the PCR assay to acquire information about the distribution of GBS types in Hamedan, Iran. Among the 62 GBS isolates examined, all capsular types except for VI, VII and VIII were found. Types III and V were the most prevalent types with a sum of 46 isolates (74.2%).
Type III was the predominant type with 35 (56.5%) isolates, followed by type V (11 isolates; 17.7%), type II (seven isolates; 11.3%), type Ia (five isolates; 8.1%) and Ib and IV with similar prevalence of two isolates (3.2%). In review of prevalence of GBS types in pregnant women universally, the results were as follows: in USA, although the majority of reported studies in the 1990s showed type V as the predominant type among colonizing isolates, yet types Ia and III were more frequent in the recent years. In Canada, types Ia, III and V were common; similarly, and in Europe, except for Greece, type III was predominant in many countries (21). Additionally, type Ib is emerging in Germany (22). Based on a recent performed study in three African countries, similar prevalence of types III and V was reported. In Asian countries, after type III as the predominant type, types Ib and V were common similarly (23), however, an unusual case has been reported in Japan where types VI and VIII were predominant in Japanese pregnant women (24). In the Middle-east, types Ia, II, III and V were more frequent (23), but in the United Arab Emirates, type IV was the most common (25). Considering the above findings, the obtained results of the current study are consistent with universal prevalence of GBS types, with types III and V as the most frequent types in many countries. Similarly, our findings are consistent with the study performed by Ippolito et al. (23), which reported types Ia, II, III and V as the most frequent types in the Middle-east. Statistical associations between some types (i.e. type Ia and III) and the source of isolation were observed in current study. There is thought to be a correlation between type Ia and vaginal colonization (P = 0.005), and also between type III and urine samples (P = 0.003); however, due to the small number of samples examined in current study, these associations may not be considered as significant. The lack of statistically significant correlation between age groups and genotypes, regardless of other related studies (8), can be attributed to small number of examined samples. The most frequent method used for GBS typing is conventional serotyping (CS) or immunological method. However, the accuracy of the obtained results is highly dependent on several agents, in fact, serological methods due to several limitations are only moderately reliable, thus, the CS method is not efficient for typing of these strains. Also, these methods can cause misidentification of certain serotypes of isolates because of problems associated with immunological cross-reaction. Being expensive and not cost-effective (particularly, commercial serotyping kits), laborious procedures and less sensitive than molecular methods, are other limitations of the CS method. With regards to the mentioned limitations, a significant proportion of isolates was non-typeable (NT) by this method. In contrast, PCR-based typing (genotyping) methods such as molecular serotyping (MS) have several applications in bacterial typing systems and show a simple modifiable level of differentiation. Molecular Serotyping identification methods are attractive due to several advantages including being specific, their high discriminatory power, clear results, high reproducibility, simplicity and fast to perform, and great availability of equipment and required materials. Advanced molecular methods have capability of differentiating strains to many variable types (4,15,30). 
In conclusion, the results of the current study demonstrated that type III is the predominant type in Hamedan, followed by types V, type II, type Ia, Ib and IV, respectively. Using the MS method, such as the PCR assay, as an alternative to the CS method leads to accurate, sensitive, specific, and fast typing of GBS isolates; in addition, MS results reduced misclassification of non-typeable isolates, associated with CS. Therefore, MS can be used to confirm serotyping results obtained by the CS method such as latex agglutination. The advantages of MS method allow it to analyze various populations and to examine invasive and colonizing isolates in extensive epidemiological studies and surveillance activities. In fact, MS could facilitate the proper formulation of candidate GBS vaccines.
2019-01-02T00:56:07.210Z
2016-10-16T00:00:00.000
{ "year": 2016, "sha1": "a5b238e083b3217adca82b0a7ba29492b848cacc", "oa_license": "CCBYNC", "oa_url": "http://acid.neoscriber.org/cdn/dl/e7b995e6-3ab9-11e7-b915-5fa312051203", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "864ebb0daf606194d42f62bdc47b1bf1c8a98ed2", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
262207545
pes2o/s2orc
v3-fos-license
Effects of Fermented Cottonseed Meal Substitution for Fish Meal on Intestinal Enzymatic Activity, Inflammatory and Physical-Barrier-Related Gene Expression, and Intestinal Microflora of Juvenile Golden Pompano ( Trachinotus ovatus ) : The present study was conducted to investigate the effects of dietary fermented cottonseed meal (FCSM) substitution for fish meal on intestinal enzymatic activity, inflammatory and physical-barrier-related gene expression, and intestinal microflora of juvenile golden pompano. The 375 golden pompanos were divided into 15 groups of 25 fish each, with three replicates for each experimental group. The fish were fed five experimental diets (0 (FM), 12.5% (CSM12.5), 25% (CSM25), 50% (CSM50), and 100% (CSM100) substitution levels) for 8 weeks. The fish were reared and fed the experimental diets under a natural-day light cycle. Compared with the control group, the activities of AMY (amylase) enzymes in the CSM12.5 group and all other groups were elevated ( p < 0.05). The CSM25 group exhibited a considerable up-regulation of IL-10 (Interleukin-10) expression relative to the FM group ( p < 0.05). With an increase in dietary FM substitution with FCSM from 0 to 25%, the relative expressions of NF-κ B (Nuclear factor kappa-B), IL-1 β (Interleukin-1 beta), and IL-8 (Interleukin-8) were down-regulated. In this study, the relative expressions of ZO-1 (zonula occluden-1) and Occludin were up-regulated, and those of Claudin-3 and Claudin-15 significantly up-regulated, when the FCSM substitution ratio was 25%. The results of high-throughput sequencing of the intestinal microflora showed that ACE indices the lowest in the CSM25 group, which was significantly different from those in the CSM100 group ( p < 0.05). The CSM50 group had the highest Shannon and Simpson indices and the highest community diversity. In addition, replacing a high percentage of fish meal with FCSM can negatively affect the intestinal flora of fish. In this study, the 25% substitution ratio improved nutrient absorption, reduced intestinal inflammation, improved intestinal physical barrier damage, did not affect intestinal microecology, and had no adverse effects on fish. However, substitution of a high proportion of FM with FCSM negatively affects the intestinal microflora and nutrient absorption capacity of fish. Introduction Golden pompano (Trachinotus ovatus) is a valuable edible fish with thornless flesh, a tender texture, and a delicious taste, with the unique aroma of the trevally family.It is distributed in the Atlantic, Indian, and Pacific Oceans' tropical and subtropical seas.In China, the coasts of Guangdong, Guangxi, Hainan, and Fujian are main growing regions [1].The demand for protein raw materials (e.g., fishmeal or soybean meal) is rising as aquaculture production increases.Import policies have a significant impact on the aquaculture industry, since fishmeal and soybean meal are import-dependent.To fulfill the rising domestic demand for protein raw materials, guarantee a continuous supply of aquatic products, and maintain global food security, it is crucial to discover and develop novel, effective, sustainable, and environmentally friendly protein sources. 
The ability of the intestines to digest and absorb nutrients is crucial for fish, especially stomachless fish, and is considered to be an important indicator of the overall health of the body [2].There are large amounts of microflora in the intestines of fish which live in symbiosis with the fish and are known as the normal intestinal flora.Fish with normal intestinal flora can have better digestion and receive essential nutrients which strengthen their natural defenses and immunity [3].Dysbacteriosis has the potential to trigger enteritis, leading to diminished appetite, stunted growth, and potentially fatal outcomes in fish [4].An effective and properly controlled intestinal barrier is crucial for safeguarding the organism against food antigens and its indigenous intestinal bacteria [5].As a result, research on the structure and changes of fish intestinal microflora has recently gained attention in the aquaculture industry. Cottonseed meal (CSM) serves as a valuable source of protein of superior quality.The utilization of this nutrient-dense resource as animal feed, however, encounters obstacles due to the presence of free gossypol (FG) [6].FG-containing diets can have negative effects on animal growth, digestive health, and reproduction.Fermentation technology allows for efficient separation of cottonseed phenols in cottonseed meals, raising the crude protein level of the raw material and producing small peptides and growth-promoting factors that enrich the nutrition of cottonseed protein [7].The application of fermented cottonseed protein in mackerel culture was investigated, and it was found that the survival and weight gain rates of mackerel in the group with fermented cottonseed protein were significantly higher than those in the control group [8].It was discovered that using 23% FCSM to replace 9% soybean meal and 15% cottonseed meal promoted the growth of grass carp, reduced the feed coefficient, and improved nonspecific immunity [9].Most current research on fermented cottonseed protein is gathered in livestock and poultry, with little focus on aquatic animals.Based on these findings, this study was conducted to investigate the effects of the replacement of fish meal (FM) with FCSM on the intestinal enzymatic activity, inflammatory and physical-barrier-related gene expression, and intestinal microflora of juvenile T. ovatus. 
Preparation of Experimental Diets Fish meal, casein, soybean protein concentrate, and soybean meal were added as protein sources, and fish oil and soybean lecithin were added as lipid sources. Five isoproteic and isolipidic experimental diets were formulated with different levels of replacement of FM with FCSM, specifically 0 (FM), 12.5% (CSM12.5), 25% (CSM25), 50% (CSM50), and 100% (CSM100); the feed formulations and the contents of various nutrients are shown in Table 1. To eliminate the effect of limiting amino acids, lysine and methionine were added to each group so that the amino acid content of each group of feeds was balanced; the amino acid composition of the experimental feeds is shown in Table 2. All the ingredients were ground into powder, sieved through 60 mesh, and thoroughly mixed with oil and water; doughs of 2.5 mm and 3.0 mm diameter were extruded using a twin-screw extruder (F-26, South China University of Technology, Guangzhou, China) and cut into pelletized feeds using a pelletizer (G-500, South China University of Technology, Guangzhou, China). [Table 1 footnote: Vitamin and mineral premix provided by Shenzhen Jingji Zhinong Times Co., Ltd. (mg kg−1 diet). The formulation includes the following amounts of vitamins and minerals per kilogram: vitamin A at a minimum of 450,000 IU, vitamin B1 at a minimum of 1000 mg, vitamin B2 at a minimum of 1000 mg, vitamin B6 at a minimum of 1500 mg, vitamin B12 at a minimum of 5 mg, vitamin K3 at a minimum of 800 mg, inositol at a minimum of 12,000 mg, D-pantothenic acid at a minimum of 3500 mg, nicotinic acid at a minimum of 2000 mg, folic acid at a minimum of 500 mg, D-biotin at a minimum of 5 mg, vitamin D3 at 300,000 to 400,000 IU, vitamin E at a minimum of 8000 IU, and Na2SeO3 at 20 mg.] Fish and Experimental Conditions The feeding trial was conducted in a seawater pond at the Shenzhen Base of the South China Sea Fisheries Research Institute of the Chinese Academy of Fishery Sciences (Shenzhen, China). For two weeks, juvenile golden pompanos were acclimated to the experimental system and fed commercial diets (Guangdong Yuequn Biotechnology Co., Ltd., Guangzhou, China). At the outset of the feeding experiment, the fish were fasted for 24 h and then weighed. The fish were randomly assigned into 18 cages, with 25 uniformly sized fish per cage (5.6 ± 0.14 g), for 8 weeks. The experimental fish were fed the different experimental diets twice daily, at 6:00 and 18:00, until they appeared to be satiated. During the feeding-trial period, the water temperature was maintained at 28.3-33.3 °C, and dissolved oxygen was higher than 6.0 mg/L. The salinity and ammonia were in the ranges of 20-25‰ and 0.05-0.1 mg/L, respectively. The photoperiod was the natural daylight cycle throughout the experimental period. The protocols for all fish were approved by the Ethical Committee of the South China Sea Fisheries Research Institute. Collection of Samples By the end of the feeding trial, all the fish had been deprived of diets for 24 h. Fish were anesthetized with diluted MS-222 (Sigma, St. Louis, MO, USA). Three fish per cage were anesthetized and sampled. The digestive contents of four fish from each cage were collected, quickly frozen in liquid nitrogen, and then stored at −80 °C.
To reduce the impact of interindividual differences, intestinal contents from each treatment were mixed for the analysis of intestinal microbiota.Mid-intestines from three fish in each cage were frozen in liquid nitrogen and then stored at −80 • C until total RNA was extracted.Three fish in each cage had a section of their gut frozen in liquid nitrogen, which was subsequently kept at −80 • C until the enzyme activity was determined. Intestinal Enzymes Activities Measurements Intestinal samples were homogenized in sterilized physiological saline (0.86%, pH = 7.4; 1:9, w/v) by a handheld homogenizer.Then the samples were centrifuged for 15 min (2000 r/min, 4 • C) and the supernatant removed for the quantification of chymotrypsin, lipase, and α-amylase using commercial kits (Beijing Huaying Biotechnology Research Institute, Beijing, China). Free Cotton Phenol Content Measurements The free cotton phenol content analysis of diets was determined according to the standard methods of the American Oil Chemists Society (AOCS 2009; method Ba 7b-96) [10].All samples were analyzed with an 8453 ultraviolet-visible spectrophotometer (Agilent Technologies Co., Ltd., Qingdao, China). Quantitative Real-Time PCR Total RNA was extracted from intestinal tissues using the Animal Total RNA Isolation Kit (FOREGENE Co., Ltd., Chengdu, China), and the integrity and quality of the RNA were detected using the NanoDropOne Micro Spectrophotometer (Thermo Scientific, Waltham, MA, USA).The cDNA was obtained by reverse transcription using the PrimeScriptTM RT reagent kit with a gDNA Eraser kit (Takara, Kusatsu City, Japan).The reaction conditions: 3.All primer pairs required for real-time fluorescence quantitative PCR were synthesized by Sangon Biotech (Shanghai) Co., Ltd.(Shanghai, China).Real-time fluorescence quantitative PCR was performed on the Quant Studio Dx PCR instrument (ABI, Foster City, CA, USA).The expression of these genes was quantified by the 2 −∆∆CT method [11].The microbial DNA extraction was performed using Hi Pure Soil DNA Kits (Magen, Guangzhou, China).The 16S rDNA V4 region of the ribosomal RNA gene was amplified using the polymerase chain reaction (PCR) technique.The primers used for this amplification were Arch519 (CAGCMGCCGCGGTAA) and Arch915R (GTGCTCCCCCGCCAATTCCT).In a duplicate 50 µL combination, the components included 5 µL of 10 × KOD buffer, 5 µL of 2.5 mM dNTPs, 1.5 µL of each primer (5 µM), 1 µL of KOD Polymerase, and 100 ng of template DNA for PCR reactions.The AxyPrep DNA Gel Extraction Kit, manufactured by Axygen Biosciences in Union City, CA, U.S., was employed to extract amplicons from 2% agarose gels.The ABI Step One Plus Real-Time PCR System, manufactured by Life Technologies in Foster City, USA, was employed for the purpose of quantification.In accordance with established techniques, the purified amplicons were combined in equimolar proportions and subjected to paired-end sequencing (2 × 250) using an Illumina platform.The raw data underwent splicing and filtering processes in order to create a refined dataset.The construction of operational taxonomic units was carried out, and the final feature table and feature sequences were created using the Divisive Amplicon Denoising Algorithm method. 
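The relative-expression values reported below follow the 2^-ΔΔCT calculation cited above. A minimal sketch of that arithmetic is shown here; the Ct values and the choice of reference gene are purely illustrative, since the text does not name the housekeeping gene used.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# toy values: a target gene (e.g., IL-10) versus an assumed reference gene,
# one FCSM group relative to the FM control
print(relative_expression(ct_target=24.1, ct_ref=18.0,
                          ct_target_ctrl=25.3, ct_ref_ctrl=18.1))
```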
Statistical Analysis

The experimental data are presented as mean ± SEM. Statistical analysis was performed using SPSS 26.0 software (IBM Corporation, Somers, NY, USA) for Windows. Before analysis of variance (ANOVA), the normality and homogeneity of the experimental data were tested using the Kolmogorov–Smirnov test and Levene's test, respectively. After passing these tests, the data were subjected to one-way analysis of variance. When there were significant differences, group means were further compared with Duncan's multiple-range test, and a probability of p < 0.05 was considered significant (a minimal sketch of this workflow is given after the gene-expression results below).

Intestinal Enzymatic Activity of T. ovatus

The effects of dietary fish meal substitution by fermented cottonseed meal on the intestinal enzymatic activity of golden pompano are shown in Table 4. The lipase (LPS) activity levels of the FM, CSM12.5, and CSM25 groups were not significantly different (p > 0.05), with the highest level found in the CSM12.5 group. Chymotrypsin activity in the intestine of golden pompano tended to increase at a substitution rate of 25% and was significantly lower in the CSM12.5 and CSM50 groups (p < 0.05). As the substitution level of fermented cottonseed meal increased, the AMY activity in the substitution groups was significantly higher than that in the FM group (p < 0.05), and the highest intestinal AMY activity in golden pompano was observed when the substitution rate reached 25%.

Intestinal Immune-Related Gene Expression of T. ovatus

The expression of genes in the intestinal NF-κB-related signaling pathway of golden pompano fed different levels of the experimental diets is shown in Figure 1. Compared with the FM and CSM25 groups, the expression of IL-1β and TNF-α in fish fed CSM12.5, CSM50, and CSM100 was notably increased (p < 0.05). The relative expression of the IL-8 gene in the intestine of golden pompano was not significantly different among the FM, CSM25, and CSM50 groups (p > 0.05), with the highest expression found in the CSM100 group, which differed significantly from the other groups (p < 0.05). The expression of IL-10 in fish fed the CSM12.5 diet was significantly upregulated (p < 0.05). The relative expression of the NF-κB gene in the intestine of golden pompano was significantly lower at the different FCSM substitution levels than in the FM group (p < 0.05), with the highest expression found in the CSM100 group.

Intestinal Physical-Barrier-Related Gene Expression of T. ovatus

The expression of physical-barrier-related genes in golden pompano fed different levels of FCSM is shown in Figure 2.
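The statistical workflow described in the Statistical Analysis subsection above (normality and homogeneity checks, then one-way ANOVA and a post-hoc comparison) can be sketched with SciPy as shown below. Duncan's multiple-range test is not available in SciPy, so Tukey's HSD is used here purely as an illustrative stand-in, and the group data are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical enzyme-activity replicates for three diet groups (n = 3 cages each)
groups = {
    "FM":      rng.normal(10.0, 1.0, 3),
    "CSM12.5": rng.normal(11.0, 1.0, 3),
    "CSM25":   rng.normal(13.0, 1.0, 3),
}

# Normality (Kolmogorov-Smirnov against a fitted normal) and homogeneity (Levene)
for name, x in groups.items():
    p_norm = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
    print(f"{name}: KS p = {p_norm:.3f}")
print("Levene p =", stats.levene(*groups.values()).pvalue)

# One-way ANOVA, then a post-hoc comparison (Tukey HSD as a stand-in for Duncan's test)
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
if p < 0.05:
    print(stats.tukey_hsd(*groups.values()))
```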
The relative expression of the ZO-1 gene was lowest in the CSM50 group, being lower than in the FM group (p < 0.05), while the relative expression was significantly higher in the CSM12.5 and CSM100 groups. The relative expression of the Occludin gene was highest in the CSM100 group, and the levels in the CSM12.5 and CSM100 groups were significantly higher than in the FM group (p < 0.05); there was no significant difference between the CSM25 and FM groups (p > 0.05). The relative expression of the Claudin-3 gene was significantly affected by the substitution of FM by FCSM (p < 0.05), with the highest relative expression found in the CSM25 group. When the level of FCSM substitution was 50%, the relative expression of the Claudin-15 gene in the intestine of golden pompano was significantly lower than in all other groups (p < 0.05); meanwhile, the CSM12.5, CSM25, and CSM100 groups were significantly higher than the FM group (p < 0.05).

Analysis of Microbial OTUs and Alpha Diversity of the Intestinal Flora

The rarefaction (dilution) curves of the intestinal microflora of juvenile golden pompano in the different treatment groups (Figure 3) tend to flatten, with sequencing coverage ≥ 99.96%. This indicates that the sequencing depth for the intestinal microflora of each group of golden pompano is adequate and covers the vast majority of species in the samples. A total of 1,549,208 good-quality sequencing reads were acquired. The Venn plot of OTUs of golden pompano in the different treatment groups (Figure 4) shows that 71 OTUs were shared among the groups. The alpha diversity metrics were computed from the rarefaction curves at the operational taxonomic unit (OTU) level for each experimental diet; Table 5 displays the Chao1, ACE, Shannon, and Simpson indices. The
CSM100 group had the highest ACE index and the largest community richness, although this was not significantly different from the other groups (p > 0.05). Compared with the remaining groups, the CSM25 group had the lowest ACE and Chao1 indices and the lowest community richness, while the CSM100 group had the highest ACE and Chao1 indices and the highest community richness. The CSM50 group had the highest Shannon and Simpson indices and the highest community diversity; the Shannon and Simpson indices of the CSM50 group differed significantly from those of the other groups (p < 0.05) (Table 5).

Analysis of Intestinal Flora Composition and Relative Abundance

The intestinal contents of juvenile golden pompano were analyzed at multiple taxonomic levels, specifically phylum, order, family, and genus. Phylum and genus were chosen as representative taxonomic levels for this study.

As shown in Figure 5, at the phylum level, Proteobacteria and Firmicutes formed the core microflora. Proteobacteria was the most abundant phylum in the control group and the CSM25 group, accounting for 49.4% and 52.16%, respectively. Firmicutes was the most abundant phylum in CSM12.5, CSM50, and CSM100, accounting for 61.46%, 51.71%, and 52.83%, respectively. There were no significant differences among the five groups, except for Fusobacteria and Planctomycetes (p > 0.05) (Table 6). The Fusobacteria level of CSM100 was significantly lower than that of all other groups (p < 0.05). There was no significant difference in Planctomycetes levels among CSM12.5, CSM25, and CSM50 (p > 0.05), and the FM group and CSM100 were significantly higher than all other groups (p < 0.05).

As shown in Figure 6, at the genus level, Photobacterium and Paraclostridium formed the core microflora. The dominant genera in the intestinal flora of T. ovatus were Photobacterium and Paraclostridium, which accounted for 52.5% and 44.86% of the total species, respectively. Photobacterium was the most abundant genus in the FM group and CSM25. There were significant differences among the five groups, except for Lysinibacillus, Bacillus, and Aerococcus (p < 0.05) (Table 7).
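To make the alpha-diversity indices reported in Table 5 concrete, the Python sketch below computes the Shannon index, the Simpson index, and the Chao1 estimator directly from a vector of OTU read counts. The count vector is hypothetical, and the formulas are the standard textbook definitions rather than the exact implementation used by the sequencing pipeline.

```python
import numpy as np

def alpha_diversity(counts):
    """Standard Shannon, Simpson (dominance form), and Chao1 estimates from OTU counts."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    shannon = -np.sum(p * np.log(p))     # reflects richness and evenness
    simpson_d = np.sum(p ** 2)           # dominance form; 1 - D gives Gini-Simpson diversity
    s_obs = counts.size
    f1 = np.sum(counts == 1)             # singletons
    f2 = np.sum(counts == 2)             # doubletons
    chao1 = s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))   # bias-corrected Chao1 estimator
    return {"shannon": shannon, "simpson": simpson_d, "chao1": chao1}

# Hypothetical OTU count vector for one sample
print(alpha_diversity([120, 80, 40, 10, 5, 2, 1, 1, 1]))
```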
Non-metric multidimensional scaling (NMDS) is a commonly used β-diversity analysis method. As a basis for community structure research, NMDS is often used to compare differences between ecosystems and to reflect the heterogeneity of biological species due to the environment. The points in the plot represent samples, and the distance between points reflects the degree of difference between samples; the closer the distance, the higher the similarity. Points within the same circle indicate no significant difference between the samples, and points in non-intersecting circles indicate significant differences between the samples. As can be seen from Figure 7, the FM group intersects with the CSM12.5 group but does not intersect with any other group. The CSM25 group intersects with the CSM12.5 group and the CSM50 group and does not intersect with any other group. The CSM50 group intersects with the CSM25 group and does not intersect with any other group. The CSM100 group intersects with the CSM12.5 group but does not intersect with any other group. The CSM12.5 group does not intersect with the CSM50 group and intersects with all other groups. This means that there is no significant difference between the FM group and the CSM12.5 group (p > 0.05), while the FM group differs significantly from all other groups (p < 0.05). There is no significant difference between the CSM25 group and the CSM12.5 or CSM50 group (p > 0.05), and the CSM25 group differs significantly from all other groups (p < 0.05). There is no significant difference between the CSM50 group and the CSM25 group (p > 0.05), and the former differs significantly from each of the other groups (p < 0.05); there is no significant difference between the CSM100 group and the CSM12.5 group (p > 0.05), and the former differs significantly from each of the other groups (p < 0.05). There is a significant difference between the CSM12.5 group and the CSM50 group (p < 0.05), and the former is not significantly different from any of the other groups (p > 0.05). The intersection between the CSM25 group and the CSM50 group indicates that when the amount of FCSM in the feed was 25%, the intestinal microbial composition of T. ovatus did not differ significantly from that at 50% substitution (p > 0.05). The intersection among the CSM25 group, the CSM12.5 group, and the CSM100 group indicates that when the amount of FCSM in the feed was 25%, the intestinal microbial composition did not differ significantly from that at 12.5% or 100% substitution (p > 0.05).
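The NMDS procedure described above can be reproduced in outline with SciPy and scikit-learn: compute a Bray–Curtis dissimilarity matrix between samples and embed it with non-metric MDS. The sketch below uses a hypothetical OTU abundance matrix and is only meant to illustrate the procedure, not the exact pipeline behind Figure 7.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical OTU table: rows = samples, columns = OTUs
rng = np.random.default_rng(1)
otu = rng.integers(0, 200, size=(20, 50)).astype(float)

# Bray-Curtis dissimilarity between samples
dist = squareform(pdist(otu, metric="braycurtis"))

# Non-metric MDS on the precomputed dissimilarity matrix
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(dist)
print("stress:", nmds.stress_, "first sample coordinates:", coords[0])
```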
Discussion

Fish meal (FM) is widely utilized as a protein source in commercial aquafeeds due to its well-balanced composition of essential amino acids, vitamins, and minerals, and its palatability [10]. In recent years, the increasing demand for animal nutrition has led to a search for sustainable protein sources of both plant and animal origin, owing to the limited availability of fishmeal [13]. Cottonseed meal is digested relatively well by fish and crustaceans [14]. When markets are favorable, CSM is an economical alternative protein source for use in aquatic animal feeds; furthermore, it is generally highly palatable to most aquatic animals [14].
The intestinal tracts of animals serve as complex microbial ecosystems that harbor dynamic communities of microorganisms. These microorganisms play various roles, such as absorbing nutrients, improving energy production, and maintaining immune homeostasis [15]. The process of digestion is related to the activity of digestive enzymes, including chymotrypsin and α-amylase; these enzymes play a crucial role in breaking down nutrients and thereby facilitate digestion [16]. In the present study, the intestinal amylase activity levels in the CSM12.5 and CSM25 groups were higher than in the FM group, and the chymotrypsin and AMY activity levels in the intestine were higher than in the FM group when the substitution rate reached 25%. AMY activity reflects the absorption and utilization of nutrients by fish [17]. Protease is an important proteolytic enzyme in the fish intestine, and its activity reflects the ability of fish to break down feed protein [18]. These findings indicate that 25% substitution of FCSM for FM boosted digestive enzyme activity and improved nutrient uptake, which in turn improved the growth performance of T. ovatus. In this study, when 50–100% of FM was replaced by FCSM, the activity levels of lipase (LPS) and chymotrypsin were lower than those of the control group. This may be due to the lower digestibility of FCSM compared with fish meal. A similar phenomenon was also observed in a previous study on Sillago sihama Forsskál [19]. Research findings indicate that the American redfish (Sciaenops ocellatus) has a proficient ability to digest cottonseed meal [20].

The significant function of pro- and anti-inflammatory cytokines in the maintenance of tissue and immunological homeostasis has been well established [21]. Pro-inflammatory cytokines, which include TNF-α and IL-8, have a variety of effects that cause inflammation [22]. Interleukin-1β (IL-1β) plays a crucial role in both the onset and amplification of the inflammatory response, which ultimately leads to intestinal damage [23]. The inflammatory cytokine IL-8 is a component of the immune response, and disruption of the balance of these cytokines is key to the pathophysiology of inflammatory bowel disease [24]. In epithelial tissues, NF-κB signaling is crucial for preserving immunological homeostasis. Thus, NF-κB appears to exhibit two sides in chronic inflammation: on the one hand, NF-κB activation increases and continues to induce inflammation and tissue damage, but on the other hand, inhibition of NF-κB signaling also disrupts immune homeostasis and triggers inflammation and disease [22]. In the present study, TNF-α, IL-1β, IL-8, and NF-κB gene expression levels were all significantly decreased at the 25% FCSM replacement level. In contrast, excessive substitution levels significantly increased TNF-α, IL-1β, IL-8, and NF-κB gene expression levels, indicating that higher substitution levels of FCSM can induce inflammatory responses in the intestine. IL-10 is an anti-inflammatory factor which plays a role in down-regulating the inflammatory response and antagonizing inflammatory mediators [25]. Gene expression of the anti-inflammatory factor IL-10 was significantly higher than that of the control group at the 12.5%, 25%, and 50% substitution levels, suggesting that intestinal inflammation improves within an appropriate substitution range.
The intestinal tract in fish is widely recognized as a significant physical barrier, and maintaining its structural integrity is essential for effectively resisting foreign antigens [26]. Compromised or diminished physical barrier function in the fish intestine is associated with an elevated susceptibility to pathogen infection and a reduced ability to suppress pathogen growth [12]. ZO-1, as a scaffolding protein, is involved in determining the tightness of the epithelial barrier and contributes to the polarity of epithelial cells [5]. The extracellular loop of Occludin extends directly into the TJ complex and interacts with the transmembrane domain to alter paracellular permeability selectivity [27]. In the present study, the mRNA transcript levels of ZO-1 were up-regulated, but the relative expression levels of the Occludin gene in the CSM25 and FM groups were not significantly different. Claudin-3 is a tightening TJ protein that helps seal the paracellular intestinal barrier [28]. In this study, the Claudin-3 gene had the highest relative expression in the CSM25 group, indicating that intestinal barrier closure was strongest at the 25% substitution level. Claudin-15 is widely distributed in the tight junctions of the villi and crypt cells of the small and large intestines. In this study, the relative expression of Claudin-15 in the CSM12.5, CSM25, and CSM100 groups was significantly higher than in the FM group, suggesting that the structural integrity of intercellular junctions in the intestine was better in the CSM12.5, CSM25, and CSM100 groups. The results of the above studies suggest that appropriate substitution ratios can ameliorate the intestinal physical barrier damage induced by FCSM in golden pompano; this is similar to the results of a study on fermented rapeseed meal improving intestinal tight-junction-related genes in yellow catfish [29]. Therefore, FCSM may alleviate the damage to the intestinal structure of golden pompano associated with cottonseed meal by degrading anti-nutritional factors, thereby improving the intestinal health of fish.

The microbial ecosystem of animals consists of microbial communities and their host microenvironment. The intestinal microenvironment, formed by intestinal microorganisms, is the most critical microenvironment of the animal body [30]. The intestinal microbiota play a crucial role in various physiological processes, including metabolism, nutrient absorption, growth and development, and immune function; it is therefore very important for the health of the host to investigate changes in the fish intestinal flora. In the present study, the OTU Venn diagram showed that 71 OTUs were common to all groups, indicating that an inherent core flora persists at the different levels of FCSM addition in the same culture environment. These results suggest that the tested variations in diet did not exert large, long-term alterations on the intestinal microbiota of golden pompano.

The level of the intestinal bacterial microflora is closely related to digestive function and overall gastrointestinal health [31]. In terms of community composition, we found that the dominant intestinal bacteria of golden pompano belonged to three phyla, Proteobacteria, Firmicutes, and Tenericutes [32]; this has also been reported for turbot [33]. Proteobacteria can catabolize feed ingredients [34]. The increased abundance may allow T.
ovatus to absorb nutrients from the feed. Proteobacteria, a group of bacteria characterized by a Gram-negative cell wall structure, play a significant role in the breakdown and fermentation of polysaccharides, proteins, and other organic substances; moreover, they constitute the prevailing microbial population in the intestinal ecosystem of numerous fish species [35]. It is well known that many species of Vibrio, Photobacterium, and Mycoplasma are pathogenic bacteria [19,36]. These microorganisms are found within the gastrointestinal tract. Degradation of the aquatic environment leads to an escalation in nutrient deficits and physiological dysfunctions, thereby heightening the vulnerability of fish to these infections [37]. Research shows that the balance of the intestinal microflora is altered primarily by probiotic supplementation and, to a lesser extent, by the energy content of the diet [33]. The majority of Fusobacteriales strains identified in fish intestinal samples are attributed to Cetobacterium, a prevalent and extensively distributed genus found within the gastrointestinal tracts of fish [38]. In this experiment, the dominant intestinal flora of golden pompano was composed of Firmicutes and Proteobacteria. However, with the change in the proportion of FCSM replacing fishmeal in the feed, the absolutely dominant flora also changed between groups. At the phylum level, the dominant phylum of the CSM12.5, CSM50, and CSM100 groups was Firmicutes, while the dominant phylum of the control group and the CSM25 group was Proteobacteria. These results show that the substitution of FM by FCSM did not lead to changes in the dominant strains of the intestinal flora of golden pompano.

At the genus level, the dominant genera in the intestinal flora of T. ovatus were Photobacterium and Paraclostridium. Photobacterium, a Gram-negative bacterium, combines anaerobic and aerobic characteristics, displaying both respiratory and fermentative metabolic pathways. The inclusion of Bacillus probiotics in the diet has been found to promote the proliferation of lactic acid bacteria within the gastrointestinal system, leading to a decrease in pH; consequently, this creates an unfavorable environment for the growth and survival of pathogenic bacteria such as E. coli and Salmonella spp. [39]. Photobacterium can infect the shrimp Litopenaeus vannamei and cause mortality, and may be a common pathogen of shrimps [19]. The Photobacterium levels in the CSM25 and CSM50 groups were significantly lower than in the control group, indicating that the abundance of Photobacterium decreased when the amount of FCSM added to the feed was 25–50% [40].
Beta diversity analysis is mainly used to compare differences in the overall structure of microbial communities among samples. Non-metric multidimensional scaling (NMDS) is a data analysis method that simplifies research objects in a multidimensional space to a low-dimensional space for positioning, analysis, and classification while retaining the original relationships between objects [41]. In this experiment, we found that the gut microbial composition of the FM group was significantly different from those of the CSM12.5, CSM25, CSM50, and CSM100 groups, and that there were significant differences in composition between the CSM12.5 and CSM50 groups and between the CSM25 and CSM100 groups, indicating that varying levels of fermented cottonseed meal substitution in the feeds resulted in significant differences in gut microbial composition. Complex microbiota interactions may explain the finding that fermented fishmeal increased the diversity of the gut microbiota of the vanabin carp, a result similar to ours [42].

Conclusions

In conclusion, by analyzing the intestinal enzyme activity, inflammation, physical-barrier-related gene expression, and intestinal microecology of juvenile golden pompano (T. ovatus), this investigation demonstrated that FCSM replacement of FM affected the intestinal health status of the fish. Replacing 25% of FM with FCSM had no significant effect on lipase (LPS) and chymotrypsin activities in juvenile golden pompano, but AMY activity was significantly increased. However, when 50–100% was replaced, LPS and chymotrypsin activities were significantly reduced, indicating that the substitution of 25% of FM by FCSM did not impair intestinal digestion. An appropriate substitution ratio could therefore improve nutrient absorption, reduce intestinal inflammation, and ameliorate intestinal physical barrier damage, while not affecting the intestinal microecology. However, substitution of a high proportion of FM with FCSM negatively affects the intestinal microflora and nutrient absorption capacity of fish.
Figure 1. Effect of fermented cottonseed meal substitution for fish meal on immune-related gene expression of T. ovatus. Data are expressed as means ± SEM (n = 3). Means with different superscripts are significantly different (p < 0.05).

Figure 2. Effects of fish meal substitution by a fermented cottonseed meal on barrier-related gene expression. Data are expressed as means ± SEM (n = 3). Means with different superscripts are significantly different (p < 0.05).

Figure 3. Effects of fish meal substitution by a fermented cottonseed meal on the dilution curve of the intestinal flora of T. ovatus.

Figure 4. Venn map of OTUs of the intestinal flora of T. ovatus in different treatment groups.

Figure 5. Distribution of the top 10 microbial phylum levels in the intestinal contents of T. ovatus in different treatment groups. Data are expressed as means ± SEM (n = 4). Means with different superscripts are significantly different (p < 0.05).

Figure 6. Distribution of the top 10 microbial genus levels in the intestinal contents of T. ovatus in different treatment groups. Data are expressed as means ± SEM (n = 4). Means with different superscripts are significantly different (p < 0.05).

Figure 7. NMDS analysis at the OTU level of T. ovatus in different treatment groups.

Table 1. Formulation and nutrient levels of the experimental diets (% dry matter).

Table 3. The primers for real-time fluorescence quantification PCR.

Table 4. Effects of fish meal substitution by fermented cottonseed meal on an index of intestinal digestive enzymes of T. ovatus.

Table 5. Diversity statistics of intestinal samples of juvenile T. ovatus in different treatment groups. ACE: the ACE index is used to approximate the number of operational taxonomic units (OTUs) in a community; higher ACE values correlate positively with the overall diversity and richness of the microbial community. Chao1: the number of OTUs in the community is estimated by the Chao1 algorithm, and the Chao1 value correlates positively with the total number of species. Shannon: this index considers both the abundance and evenness of the community, and its value correlates positively with community richness and evenness. Simpson: this index is used to estimate microbial diversity in samples, and its values correlate negatively with community diversity. Data are expressed as means ± SEM (n = 4). a,b,c Means within a row with unlike superscript letters are significantly different (p < 0.05).

Table 6. Distribution of the top 10 microbial phylum levels in the intestinal contents of T. ovatus in different treatment groups. Data are expressed as means ± SEM (n = 4). Means with different superscripts are significantly different (p < 0.05).

Table 7. Distribution of the top 10 microbial genus levels in the intestinal contents of juvenile T. ovatus in different treatment groups. Data are expressed as means ± SEM (n = 4). a,b Means within a row with unlike superscript letters are significantly different (p < 0.05).
2023-09-24T16:11:45.654Z
2023-09-18T00:00:00.000
{ "year": 2023, "sha1": "2053bbecc086cffe3fb649e8c6fbfddcaba6f40a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2410-3888/8/9/466/pdf?version=1695041928", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9abd335844aa7e3a5190224b7d4d5a2c33c00e7b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
40191329
pes2o/s2orc
v3-fos-license
Dynamical mean-field equations for strongly interacting fermionic atoms in a potential trap We derive a set of dynamical mean-field equations for strongly interacting fermionic atoms in a potential trap across a Feshbach resonance. Our derivation is based on a variational ansatz, which generalizes the crossover wavefunction to the inhomogeneous case, and the assumption that the order parameter is slowly varying over the size of the Cooper pairs. The equations reduce to a generalized time-dependent Gross-Pitaevskii equation on the BEC side of the resonance. We discuss an iterative method to solve these mean-field equations, and present the solution for a harmonic trap as an illustrating example to self-consistently verify the approximations made in our derivation. The recent experimental advance in ultracold atoms has allowed controlled studies of strongly interacting Fermi gases in various types of potential traps, with the interaction strength tunable by an external magnetic field through the Feshbach resonance [1]. For homogeneous strongly interacting Fermi gases at zero temperature, the physics is captured by the variational crossover wavefunction [2][3][4], which interpolates the BCS and the BEC theories. For strongly interacting atoms in a potential trap, there are currently two main methods to deal with the resultant inhomogeneity: one is the local density approximation (LDA) [4,5], which neglects the kinetic terms associated with the spatial variation of the order parameters; the other is based on the numerical simulation of the Bogoliubov-De-Gennes (BDG) equations [6,7]. Both of these methods have found wide applications recently [4][5][6][7], but each of them also has its own limitation: the LDA becomes inadequate in cases where variations of the order parameters have significant impacts; the BDG equations take into account exactly the spatial variation of the order-parameter, but its numerical solution is typically time-consuming, which limits its applications only to very special types of potentials. In this work, we develop a different method to describe both the static properties and the dynamics of strongly interacting fermionic atoms in arbitrary but slowly varying potential traps. Our starting point is a variational state which is a natural generalization of the crossover wavefunction to the inhomogeneous and dynamical cases. The key simplification in our derivation comes from the assumption that the spatial variation of the order-parameter is small within the size of the Cooper pairs. This assumption is similar to the one in the derivation of the Ginzberg-Landau equation for the weakly interacting fermions [8], but we avoid the use of perturbation expansions so that the order parameter here in general does not need to be small [8,9]. With such an assumption, we derive a set of dynamical mean-field equations for the bare molecular condensate and the Cooper-pair wavefunctions. This set of equations can be solved iteratively, and its zeroth-order approximation, which neglects the order-parameter variation, gives the LDA result. On the BEC side of the Feshbach resonance (with the chemical potential µ ≤ 0), these mean-field equations can be reduced to a generalized dynamical Gross-Pitaevskii (GP) equation [8][9][10], with the effective nonlinear interaction for the bare molecules derived from a fermionic gap equation. 
When one goes deeper into the BEC region, the nonlinear interaction resumes the conventional GP form, and one can derive an effective scattering length for the bare molecules. We solve the dynamical mean-field equations for a harmonic trap as a simple illustrating example to self-consistently verify the approximations made in our derivation. Another recent work also addresses the dynamics of a trapped Fermi gas across a Feshbach resonance [11]. The set of equations derived therein are semiclassical hydrodynamic equations, which, after linearization, can be applied to calculate the dynamical properties of the system. In contrast, our work follows the time-dependent variational approach. By reducing the dynamical mean field equations to a time-dependent non-linear GP equation on the BEC side of the resonance, we provide a complementary perspective to the problem. Our starting point is the two-channel field Hamiltonian [3,4,12] which describes the interaction between the fermionic atom fields Ψ † σ (σ =↑, ↓ labels atomic internal states) in the open channel and the bosonic bare molecule field Ψ † b in the closed channel. In this Hamiltonian, m is the atom mass, and V (r, t) is the trap potential which could vary both in space and in time. Note that we have assumed that the trap frequencies for a composite boson and for a single atom are the same, so that the potential that a boson feels is twice as a single atom does. The bare atom-molecule coupling rate α, the bare background scattering rate U , and the bare energy detuning of the closed channel molecular level relative to the threshold of the two-atom continuum γ are connected with the physical ones α p , U p , γ p through the standard renormalization relations [4]. The values of the physical parameters α p , U p , γ p are determined respectively from the resonance width, the background scattering length, and the magnetic field detuning relative to the Feshbach resonance point (see, e.g., the explicit expressions in Ref. [13]). Note that following the standard two-channel model, direct collisions between the bosonic bare molecules are neglected, as their contribution is negligible near a broad Feshbach resonance [3,4,12]. At almost zero temperature and with a slowly varying potential V (r, t), the state of the Hamiltonian (1) can be assumed to evolve according to the following variational form: where N is the normalization factor, φ b (r, t) is the condensate wavefunction for the bare molecules, and f (r, r ′ , t) is the Cooper-pair wavefunction. This variational state is a natural generalization of the crossover wavefunction to the inhomogeneous and dynamical cases [14]. Without the fermionic field, this variational state would have the same form as the one in the derivation of the dynamical GP equation for the weakly interacting bosons [10]. To derive the evolution equations for the wavefunctions φ b (r, t) and f (r, r ′ , t), we follow the standard variational procedure to minimize the action S = dt[ Φ |Φ − Φ|Φ ]/(2i) − Φ|H|Φ , where |Φ and H are specified in Eqs. (1) and (2). Under the ansatz state (2), the Wick's theorem implies the decomposition the additional Hartree-Fock terms, which only slightly modify the effective V (r, t), are not important when there is pairing instability [8] and are thus neglected here). It turns out that to get the expression of the action S, the critical part is the calculation of the pair function F * (r 1 , . 
Under the ansatz state (2), the pair function satisfies the following integral equation (we drop the time variables in F * (r 1 , r 2 , t) and f * (r, r ′ , t) when there is no confusion) To solve this integral equation, we write both F * (r 1 , r 2 ) and f * (r 1 , r 2 ) in terms of the new coordinates r = (r 1 + r 2 )/2 and r − = r 1 − r 2 . Then, we take the Fourier transformation of Eq. (3) and its conjugate with respect to the relative coordinate r − . The Fourier transforms of F (r, r − ) and f (r, r − ) are denoted by F k (r) and f k (r), respectively. In this Fourier transformation, we assume |∂ r f | ≪ ∂ r− f and |∂ r F | ≪ ∂ r− F . Physically, it corresponds to the assumption that the order parameter is slowly varying within the size of the Cooper pairs. Under this assumption, we derive from Eq. (3) and its conjugate the following simple relation between the Fourier components This relation is critical for the explicit calculation of the action S. We can now express the action S in terms of the variational wavefunctions f k (r, t) and for φ b (r, t). From the functional derivatives δS/δf * k (r, t) = 0 and δS/δφ * b (r, t) = 0, we get the following evolution equations for f k (r, t) and for φ b (r, t): where The two equations (5) and (6) represent a central result of this work: they completely determine the evolution of the wavefunctions f k and for φ b , just as the GP equation determines the condensate evolution for the weakly interacting bosons. In the stationary case with a time-independent trap, one just needs to replace i∂ t f k and i∂ t φ b respectively with 2µf k and 2µφ b , where µ is the atom chemical potential. The evolution equations (5) and (6) are a set of coupled nonlinear differential equations. They can be solved through direct numerical simulations (for instance, through the splitstep method), but as the potential V (r, t) is typically slowly varying both in r and in t, the following iterative method may prove to be more efficient. In this case, we expect both φ b (r, t)e i2µt and f k (r, t)e i2µt to be slowly varying in r and t. We can then introduce the following effective potentials , both of which should be small. With these introduced potentials, we can solve f k (r, t) from Eq. (5) as where Substituting Eq. (7) into Eq. (6), we get the following effective gap equation the latter equality comes from the renormalization relation between γ, α, U and γ p , α p , U p [4]). Under the zeroth-order approximation, we assume In this case, the gap equation (8), together with the number equation N = n(r, t)d 3 r, where N denotes the total atom number and n(r, t) = 2|φ b | 2 + 2/8π 3 d 3 k|f k | 2 /(1+|f k | 2 ) is the local atom density, completely solves the problem, the result of which corresponds to a solution under the local density approximation in the adiabatic limit. Thus we recover the LDA result under the zeroth-order approximation which completely neglects V ef f (r, t) and V k ef f (r, t). It is then evident as how to go beyond the LDA. We can use the LDA result φ Substituting these effective potentials into the gap equation (8) and (7), we can find out the next order wavefunctions φ k (r, t). This iterative process should converge if the effective potentials are small (i.e., the order parameters are slowly varying in r and t). In the following, we consider a different simplification of the basic equations (5) and (6) on the BEC side of the resonance with the chemical potential µ ≤ 0 (note that it is not required to be in the deep BEC region). 
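The iterative scheme outlined above (start from the LDA solution, recompute the effective potentials from the current wavefunctions, and feed them back into the gap and number equations until the order parameter stops changing) can be summarized by a generic self-consistency loop. The Python sketch below is purely schematic: solve_gap_and_number and effective_potentials are hypothetical placeholders standing in for the numerical steps described in the text, not functions from any library.

```python
import numpy as np

def self_consistent_solve(V_trap, solve_gap_and_number, effective_potentials,
                          tol=1e-6, max_iter=100):
    """Schematic self-consistency loop for the trapped-gas mean-field equations.

    Zeroth order: neglect the effective potentials (local density approximation).
    Each further step recomputes V_eff from the current wavefunctions and
    re-solves the gap and number equations until phi_b converges.
    """
    V_eff = np.zeros_like(V_trap)                 # zeroth-order (LDA) starting point
    phi_b, f_k, mu = solve_gap_and_number(V_trap, V_eff)
    for _ in range(max_iter):
        V_eff_new = effective_potentials(phi_b, f_k, mu)   # small when order parameter varies slowly
        phi_b_new, f_k_new, mu = solve_gap_and_number(V_trap, V_eff_new)
        change = np.max(np.abs(phi_b_new - phi_b))
        phi_b, f_k, V_eff = phi_b_new, f_k_new, V_eff_new
        if change < tol:                          # converged order parameter
            break
    return phi_b, f_k, mu
```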
On this side, we expect the wavefunctions φ b (r, t) and f k (r, t) to have similar dependence on r and t, so we assume V ef f (r, t) ≃ V k ef f (r, t). This approximation will be self-consistently tested and we will see that it is well satisfied when µ ≤ 0. Under this approximation, µ k ef f = µ ef f , while µ ef f can be solved from the gap equation (8) This equation has the same form as the dynamical GP equation for the weakly interacting bosons except that the collision term is now replaced by a general nonlinear potential µ ef f [10], a function of |φ b | 2 with its shape determined by the gap equation (8). We have numerically solved Eq.(8) for the function µ ef f (|φ b | 2 ) at several different detunings for 6 Li, and the shapes of these functions are shown in Fig. 1. We can see that µ ef f becomes almost linear in |φ b | 2 when one goes further into the BEC region. In that limit, Eq. (9) reduces to an exact GP equation, and we can define an effective scattering length a ef f for the bare molecular condensate with dµ ef f /d(|φ b | 2 ) = 2πa ef f / (2m). This effective scattering length a ef f is shown in Fig. 1(d) as a function of the field detuning for 6 Li. We should note, however, that the effective scattering length for the bare molecules is in general very different from the one for the dressed molecules [4,9,15]. The dressed molecules are dominantly composed of Cooper pairs of atoms in the open channel (for instance, when the chemical potential µ ≈ 0, the population fraction of the bare molecules is only about 0.1% for 6 Li). As the bare molecules have a very low density near the resonance, they in general have a large effective scattering length to compensate for that, as is shown in Fig. 1(d). The effective scattering lengths for the bare and the dressed molecules coincide with each other only in the deep BEC region with the population dominantly in the closed channel. In this limit, we have checked that the dependence of the effective scattering length on the field detuning is in agreement with a different calculation in Ref. [16] under the two-channel model (we can only apply the two-channel model in this limit because of a large closed channel population [4,17]). Experimentally, the scattering length between the dressed molecules can be measured from the collective excitations of the trapped Fermi gas [18] or from the in-trap radius of the condensate cloud [19]; while it is difficult to measure the scattering length between the bare molecules. The simplified equation (9) determines the distribution of the bosonic molecules. This solution, combined with Eq. (7), then fixes the distribution of the fermionic atoms. As a simple illustrating example, we use them to solve the fermi condensate shape function in a harmonic trap on the BEC side of the resonance. We take the values of the parameters corresponding to 6 Li, and assume a total of N ∼ 10 5 atoms trapped in a time-independent potential V (r) = 1 2 mωr 2 with ω/2π ∼ 100Hz, as is typical in the experiments [1]. Figures 2(a) and 2(b) show the condensate shape functions in two different regions with the magnetic field detunings B − B 0 given respectively by −268G and −107G. The first detuning corresponds to a point deep in the BEC region with (k F a s ) −1 ∼ 11, where a s is the atom scattering length at that detuning and k −1 F is a convenient length unit defined in the caption of Fig. 1. The second one corresponds to the onset of the bosonic region with the atom chemical potential µ ∼ 0 and (k F a s ) −1 ∼ 0.8. 
We have shown in Fig. 2 the number distributions for the bare molecules and the fermi condensate. One can see that these distribution functions are smooth in space, without the artificial cutoff at the edge of the trap as in the LDA result. The closed channel population (the total bare molecule fraction) is calculated to be about 33% and 0.1% respectively for these two detunings. An important goal of calculation of this simple example is to self consistently check the approximations made in our derivation. First, to derive the basic equations (5,6), we have assumed that the order parameter should be slowly varying over the size of the Cooper pairs. From Fig. (2), we see that the characteristic length for the variation of the order parameter is typically of 100k −1 F , while the size of the Cooper pair is well below k −1 F at these detunings [17]. Therefore, this approximation should be well satisfied for typical experiments. Second, from the basic equations (5,6) to the simplified equation (9), we have used the ap- To check the validity of this approximation, we calculate the effective potentials V ef f (r) and V k ef f (r) (time-independent in this case) with our solutions of φ b (r) and f k (r) from the stationary harmonic trap, and the results are shown in Fig. 3. It is clear that the difference V ef f (r) − V k ef f (r) is small compared with the magnitude of |V ef f (r)| for different values of k when the atom chemical potential µ ≤ 0, which justifies the approximation V k ef f (r, t) ≃ V ef f (r, t) in that region. One can also see that the relative error V ef f (r) − V k ef f (r) / |V ef f (r)| goes up significantly (from roughly 10 −4 to 10 −1 ) when one goes from the field detuning −268G to −107G. If one goes further to the resonance point, this approximation eventually breaks down, and one needs to use the basic equations (5,6) instead of the reduced equation (9). In summary, we have derived a set of dynamical mean-field equations for evolution of strongly interacting fermionic atoms in any slowly varying potential traps, and discussed methods to solve these equations. We show that on the BEC side of the Feshbach resonance, this set of equations are reduced to a generalized dynamical GP equation. As an illustrating example, we solve the reduced equations in the case of a harmonic trap, which self-consistently verifies the approximations made in our derivation. This work was supported by the NSF award (0431476), the ARDA under ARO contracts, and the A. P. Sloan Fellowship.
2017-09-16T19:02:38.352Z
2005-12-20T00:00:00.000
{ "year": 2005, "sha1": "d4418727410e84b7b7c3baa1e54f12857e3c923f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0512517", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d4418727410e84b7b7c3baa1e54f12857e3c923f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250669511
pes2o/s2orc
v3-fos-license
Low temperature spin reorientation in dysprosium iron garnet The spin reorientation (SR) phase transition in dysprosium iron garnet (Dy3Fe5O12 or DyIG) have been studied by specific heat Cp(T) and high field magnetisation measurements MT(H) and MH(T) on single crystals at low temperature. A first order SR is observed with a sharp jump at TSR = 14.5±0.5 K in the Cp(T) curve which corresponds to a spontaneous change from the high temperature (HT) easy direction ⟨111⟩ to an ⟨uuw⟩ angular low temperature (LT) phases. Above TSR, the magnetic structure is described by the irreducible representation (IR) A2g of the rhombohedral space group R 3 c. Below TSR, the magnetic structure changes in the monoclinic the space group C2/c with the IR Ag. When the field H is kept aligned along the hard symmetry directions ⟨100⟩ and ⟨110⟩, we obtain respectively the variation of the angular positions θ(T) and θ'(T) from the total spontaneous magnetisation down to 1.5 K (θ = 39.23° and θ' = 30.14°) and the results are in good agreement with the previous observations in low fields. When the sample is allowed to rotate freely on itself, the critical field Hc(T) between the HT⟨111⟩ and the LT⟨uuw⟩ angular phases permits us to precise the transition line up to 15 T and 40 K between the so called canted field induced (FI) and the associated collinear magnetic phases. The experimental magnetic phase diagram (MPD) is precisely determined in the (Hc-T) plane and the domains of the existence and the stability of the two magnetic phases are specified. Introduction This work take place in the general magnetic study of the crystal field (CF) and the Fe 3+ -RE 3+ anisotropic exchange(AE) interaction effects in rare earth iron garnets RE 3 Fe 5 O 12 (or REIG where RE 3+ is a rare earth ion or Y 3+ ) which are until now very attractive materials for their fundamental investigations [1]. The many interesting magnetic properties at low temperature which is often taken in account including strong magnetocrystalline anisotropy, spontaneous non-collinear magnetic structures (double umbrella-type), SR and FI magnetic phase transitions and their complicated experimental MPD in the vicinity of the compensation temperature, have been thoroughly reviewed by Guillot [2] and Zvezdin [3]. The occurrence of the so-called <uuw> and <uv0> angular LT phases where the iron sublattices magnetisation M Fe l i e s on a low symmetry direction are studied experimentally [4,5,6] and theoretically [7,8] by using different phenomenological models based on the generalized spin effective Hamiltonian broadly applicable to such magnetic transitions. So, the revisiting of these SR phase transitions would give a good chance to improve our knowledge in several topics especially detailed more information about the real non collinear magnetic structure of the RE 3+ at low temperature. Experimental The specific heat C p (T) has been performed on single crystal grown by standard PbO/PbF 2 method (disk of thickness: 1 mm; weight: 0.09074 g) without applied magnetic field and in the 1.25 K-40 K temperature range. Using the extraction technique in superconducting magnets, the magnetisations of two spherical single crystals of DyIG grown by the same method (weights: 0.37682 g and 0.20308 g and diameters respectively: 4.5 mm and 3 mm) have been measured in the 1.5 K-50 K temperature range and in high dc magnetic fields up to 16 T. 
The magnetic field was applied on the sample allowed to rotate freely on itself and along the three main crystallographic directions <111>, <110> and <100> which were checked by X-ray Laue technique within an error less than one degree. All the magnetisation results are reported in Bohr magneton by the 2(Dy 3 Fe 5 O 12 ) formula units and H is the internal magnetic field. The magnetisations have been also performed on powder sample. By X-ray and neutron diffraction experiments at room temperature, the lattice constant of the powder sample (a=12.403 Å) was found in good agreement with previous values [9]. Results and discussion The variation of the specific heat C p (T) versus T is plotted in the figure 1. The results are in a good agreement with the data obtained on powder sample from 1.3 K to 16 K where the Schottky levels have been calculated [10]. At first, the SR transition is observed with a sharp jump at T SR =14.5±0.5 K in our C p (T) curve which correspond to a spontaneous change from the HT easy direction <111> to the <uuw> angular LT phases as reported and predicted previously [4][5][6][7][8]. At this first order transition temperature T SR a drastic change of the real structure is expected: the rhombohedral distorsion of the R 3 c space group with only two inequivalent magnetic sites [9] is no longer valid below T SR and the symmetry is lowered with the appearance of at least four inequivalent sites of the monoclinic space group C2/c [6]. The more recent study of the Mössbauer spectra of DyIG [11] clearly confirm also that the SR phase transition occur at 15 K down 4.2 K but reveal in the 45 K-295 K temperature range, that the Mössbauer spectral components are consistent with a crystal with a reduced symmetry from cubic to rhombohedral but with the space group R 3 . From the detailed of the magnetisations measurements in low and high magnetic fields and the neutron diffraction experiments, only two types of essential properties are presented in this discussion: is used in the representation analysis of Bertaut [13].The opening of the double umbrella magnetic structure is followed by an increase of the rhombohedral distorsion which modify the cubic description G 4g (T 1g )®A 2g and the crystallographic and magnetic glide planes c and c' respectively are conserved. The change of the symmetry below T SR is a characteristic of a first order transition, and the basis vectors of the IR A g belonging to the monoclinic space group C2/c, are able to describe the non collinear magnetic structures in the LT<uuw> angular phases. The isothermal magnetisation curves M T (H) are reported as a function of the internal magnetic field at 1.5 K (Figure 2). It can be seen that the three main crystallographic directions <111>,<110>and<100> are non easy directions compared to the case of the free sphere where the magnetisation is always greater than when the field is applied along the main directions. The difference ΔM S =M S (free)-M S <111> (where M S (free) is the true spontaneous magnetisation and M S <111> is the forced one obtained with H>Hc) which equal zero for T>T SR is a constant reliable parameter for T<T SR (0.52 μ B /2f.u) in the 10 K-1.5 K temperature range. 
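For reference, the Schottky contribution to the specific heat mentioned above is commonly modelled, in the simplest case of two non-degenerate levels split by an energy Δ, by the standard expression below (per mole, with R the gas constant); this is the textbook two-level form, not the multi-level calculation of Ref. [10].

```latex
C_{\mathrm{Sch}}(T) = R\left(\frac{\Delta}{k_{B}T}\right)^{2}
\frac{e^{\Delta/k_{B}T}}{\left(1 + e^{\Delta/k_{B}T}\right)^{2}}
```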
When the field H is kept aligned along the hard symmetry directions <100> and <110>, we obtain respectively the thermal variations of the angular positions θ(T) and θ'(T) of the total magnetisation down to 1.5 K (see Figure 3), where the results (θ = 39.23° and θ' = 30.14°) are in good agreement with previous observations in low fields [4,5]. It is shown that the stable phases are of the <uuw> type below TSR and of the <111> type above. The rapid variation of the direction of the iron sublattice magnetisation MFe in the vicinity of TSR confirms the first-order character of the SR transition.

The early studies focused on the FI phase transitions around the compensation (or inversion) temperature (Tcomp or TI = 218.5 K) of this compound [14]. New observations of these FI phase transitions are obtained here in the TSR-50 K temperature range. The sample is aligned along <111> and is allowed to rotate freely on itself at 20.0 and 25.0 K (Figure 4). The transitions between the HT <111> and LT <uuw> phases are observed on the MT(H) curves as a change in susceptibility at a critical field Hc. The magnetisation MHext(T) has been recorded at constant external magnetic fields Hext = 1, 2, 3, 4, 5, 8, 10.5 and 13 T (Figure 5). The domains of existence and stability of the two magnetic phases are now specified, and the transition line between the canted FI and the associated collinear magnetic phases is determined up to 15 T in the experimental MPD in the (Hc-T) plane (Figure 6). The temperature of this transition between the HT <111> and the LT <uuw> phases increases linearly with the magnetic field, and the transition seems to disappear for H > 15 T and T > 45 K.

Conclusion
The SR phase transition in dysprosium iron garnet has been studied by specific heat and high-field magnetisation measurements at low temperature. A sharp jump is observed at TSR = 14.5 ± 0.5 K in the Cp(T) curve, corresponding to a spontaneous change from the HT <111> easy direction to the LT <uuw> angular phases, as predicted and previously reported. The first-order transition <111> ↔ <uuw> is explained by the change of IR A2g ↔ Ag from the R-3c to the C2/c space group. When the field H is kept aligned along <100> and <110>, we obtain respectively the variations of the angular positions θ(T) and θ'(T) of the total spontaneous magnetisation down to 1.5 K (θ = 39.23° and θ' = 30.14°), in good agreement with previous observations in low fields. When the sample is allowed to rotate freely on itself, the transition line Hc(T) between the canted FI and the associated collinear magnetic phases is determined up to 15 T and 40 K. The domains of existence and stability of the two magnetic phases are specified in the experimental (Hc-T) MPD.
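As a worked illustration of how angular positions such as θ(T) and θ'(T) reported above could be extracted from hard-axis data, the sketch below assumes the common approximation θ = arccos(M∥/Ms), where M∥ is the saturated component measured with H along the hard direction and Ms is the free-sphere spontaneous magnetisation. This is a generic reconstruction under that assumption, not the authors' analysis, and the numbers are placeholders chosen only to reproduce the quoted angles.

```python
import numpy as np

def angle_from_hard_axis(M_parallel, M_s):
    """Angle (degrees) between the total magnetisation and the field direction,
    assuming the measured hard-axis value is the projection M_s * cos(theta)."""
    ratio = np.clip(np.asarray(M_parallel) / np.asarray(M_s), -1.0, 1.0)
    return np.degrees(np.arccos(ratio))

# Placeholder low-temperature values (mu_B / 2 f.u.), not measured data.
M_s   = 17.9                                   # free-sphere spontaneous value
M_100 = M_s * np.cos(np.radians(39.23))        # would reproduce theta  = 39.23 deg
M_110 = M_s * np.cos(np.radians(30.14))        # would reproduce theta' = 30.14 deg

print("theta  (H || <100>):", angle_from_hard_axis(M_100, M_s))   # ~39.23
print("theta' (H || <110>):", angle_from_hard_axis(M_110, M_s))   # ~30.14
```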
2022-06-27T23:31:06.750Z
2009-01-01T00:00:00.000
{ "year": 2009, "sha1": "565380a674411c47b09125d2889508065ba37176", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/150/4/042108", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "565380a674411c47b09125d2889508065ba37176", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244460949
pes2o/s2orc
v3-fos-license
Incident heart failure and myocardial infarction in sodium‐glucose cotransporter‐2 vs. dipeptidyl peptidase‐4 inhibitor users Abstract Aims This study aimed to compare the rates of major cardiovascular adverse events in sodium‐glucose cotransporter‐2 inhibitors (SGLT2I) and dipeptidyl peptidase‐4 inhibitors (DPP4I) users in a Chinese population. SGLT2I and DPP4I are increasingly prescribed for type 2 diabetes mellitus patients. However, few population‐based studies are comparing their effects on incident heart failure or myocardial infarction. Methods and results This was a population‐based retrospective cohort study using the electronic health record database in Hong Kong, including type 2 diabetes mellitus patients receiving either SGLT2I or DPP4I from 1 January 2015 to 31 December 2020. Propensity score matching was performed in a 1:1 ratio based on demographics, past comorbidities, and non‐SGLT2I/DPP4I medications with nearest neighbour matching (caliper = 0.1). Univariable and multivariable Cox models were used to identify significant predictors for new‐onset heart failure, new‐onset myocardial infarction, cardiovascular mortality, and all‐cause mortality. Sensitivity analyses with competing risk models and multiple propensity score matching approaches were conducted. A total of 41 994 patients (58.89% males, median admission age at 58 years old, interquartile range [IQR]: 51.2–65.3) were included with a median follow‐up of 5.6 years (IQR: 5.32–5.82). In the matched cohort, SGLT2I use was significantly associated with lower risks of new‐onset heart failure (hazard ratio [HR]: 0.73, 95% confidence interval [CI]: [0.66, 0.81], P < 0.0001), myocardial infarction (HR: 0.81, 95% CI: [0.73, 0.90], P < 0.0001), cardiovascular mortality (HR: 0.67, 95% CI: [0.53, 0.84], P < 0.001), and all‐cause mortality (HR: 0.26, 95% CI: [0.24, 0.29], P < 0.0001) after adjusting for significant demographics, past comorbidities, and non‐SGLT2I/DPP4I medications. Conclusions SGLT2 inhibitors are protective against adverse cardiovascular events including new‐onset heart failure, myocardial infarction, cardiovascular mortality, and all‐cause mortality. The prescription of SGLT2I is preferred when taken into consideration individual cardiovascular and metabolic risk profiles in addition to drug–drug interactions. Introduction Diabetes mellitus is an increasingly prevalent metabolic disease, currently affecting more than 400 million people, and the patient population is projected to increase up to 642 million by 2040. 1 Given the ever-increasing disease burden, new classes of antidiabetic agents have been introduced into the market over the past decade. The use of two novel classes of antidiabetic agents-sodium-glucose cotransporter-2 inhibitors (SGLT2I) and dipeptidyl peptidase-4 inhibitors (DPP4I)-has increased significantly. 2,3 Besides their favourable side effect profile, studies have reported beneficial effects on metabolic risk from these two classes of drugs. 4 Based on findings from large-scale clinical trials, the cardiovascular mortality-lowering effects of SGLT2I are mostly attributed to its protection against heart failure (HF). [5][6][7][8] On the other hand, the cardiovascular effect of DPP4I appears to be more controversial. Whilst there were reports of DPP4I users having lower cardiovascular risks than nonusers, there are also studies reporting an increased risk of HF in saxagliptin users. 
9,10 Whilst small-scale trials are comparing the metabolic effects or specific disease outcomes of SGLT2I and DPP4I, there is a lack of large-scale population studies to evaluate the difference in the presentation of major cardiovascular adverse events between the use of the two drug classes. [11][12][13] Recently, Zheng et al. have demonstrated lower mortality in SGLT2I users in comparison with DPP4I users in a network meta-analysis. 14 However, ultimately, the study is limited by the indirect comparison of the SGLT2I and DPP4I users. Other studies have reported on outcomes such as weight loss, improvement in the liver or renal function, 15 and reduction in atrial fibrillation incidence. 16 Another study recently investigated cardiovascular outcomes such as HF and myocardial infarction (MI), but only in Japanese, Korean, and European cohorts. 17 Therefore, the aim of the present study is to compare the occurrence of major cardiovascular adverse events in SGLT2I and DPP4I users to evaluate their cardiovascular protective effects in a Chinese population. Study design and population This study was approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster and from The Joint Chinese University of Hong Kong-New Territories East Cluster Clinical Research Ethics Committee. It included type 2 diabetes mellitus patients with SGLT2I or DPP4I prescriptions from 1 January 2015 to 31 December 2020. Patients who received both DPP4I and SGLT2I, in addition to patients who discontinued the medication dur-ing the study, were excluded. The exclusion criteria for the HF study cohort were as follows: patients with prior HF diagnosis or with the use of medications for HF (e.g. diuretics for HF and beta-blockers for HF). For the MI study cohort, patients with prior old MI or MI diagnosis were excluded. The patients were identified from the Clinical Data Analysis and Reporting System (CDARS), a territory-wide database that centralizes patient information from individual local hospitals to establish comprehensive medical data, including clinical characteristics, disease diagnosis, laboratory results, and drug treatment details. The system has been previously used by both our team and other teams in Hong Kong to conduct population-based cohort studies, 18,19 including those on diabetes mellitus. 20,21 Clinical and biochemical data were extracted from CDARS for the present study. Patients' demographics include sex and age of initial drug use (baseline). Prior comorbidities before initial drug use were extracted, including diabetes with chronic complication, diabetes without chronic complication, gout, hypertension, ischaemic heart disease, liver diseases, peripheral vascular disease, renal diseases, stroke/transient ischaemic attack, atrial fibrillation, ventricular tachycardia (VT)/ventricular fibrillation (VF)/aborted sudden cardiac death (SCD), anaemia, overweight, and cancer. Charlson's standard comorbidity index was also calculated. Mortality was recorded using the International Classification of Diseases Tenth Edition (ICD-10) coding, whilst the study outcomes and comorbidities were documented in CDARS under ICD-9 codes. The ICD codes used to search for diagnoses and outcomes are shown in Supporting Information, Table S1. Standard deviation (SD) was calculated for glycaemic and lipid profile parameters once there are at least three examinations for each patient since initial drug exposure of SGLT2I or DPP4I. 
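To make the per-patient variability calculation just described concrete, here is a minimal sketch of how the SD (and the related ratio-based measures, SD/initial and CV, listed in the next paragraph) could be computed from repeated HbA1c readings; the variability-independent-of-mean term is shown under one common formulation (SD scaled by the mean raised to a regression-derived exponent), which is an assumption on our part since the exact definition is deferred to the Supporting Information.

```python
import numpy as np

def glycaemic_variability(values):
    """SD, SD/initial and CV for one patient's serial HbA1c (needs >= 3 readings)."""
    x = np.asarray(values, dtype=float)
    if x.size < 3:
        return None  # variability was only computed with at least three examinations
    sd = x.std(ddof=1)
    return {"SD": sd, "SD/initial": sd / x[0], "CV": sd / x.mean()}

def vim(sds, means):
    """Variability independent of the mean (one common formulation):
    fit log(SD) = a + beta*log(mean), then VIM_i = k * SD_i / mean_i**beta."""
    sds, means = np.asarray(sds, float), np.asarray(means, float)
    beta = np.polyfit(np.log(means), np.log(sds), 1)[0]
    k = means.mean() ** beta                # scaling constant (conventions vary)
    return k * sds / means ** beta

# Illustrative patients (HbA1c in %), not study data.
patients = [[7.1, 7.6, 8.0, 7.4], [6.5, 6.6, 6.4], [9.0, 8.1, 8.8, 9.4]]
stats = [glycaemic_variability(p) for p in patients]
print(stats)
print(vim([s["SD"] for s in stats], [np.mean(p) for p in patients]))
```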
We also calculated more specific variability measures for the HbA1c and fasting glucose profiles, including SD, SD/initial, coefficient of variation (CV), and variability independent of the mean, as listed in Supporting Information, Table S2.

Outcomes and statistical analysis
The study outcomes are new-onset HF, new-onset MI, cardiovascular mortality, and all-cause mortality, as defined by the first incidence of the ICD-9 codes of these adverse events (Supporting Information, Table S1). Mortality data were obtained from the Hong Kong Death Registry, a population-based official government registry with the registered death records of all Hong Kong citizens linked to CDARS. ICD-10 codes I00-I09, I11, I13, and I20-I51 were used to identify cardiovascular mortality. Descriptive statistics were used to summarize the baseline clinical and biochemical characteristics of patients with SGLT2I and DPP4I use. For baseline clinical characteristics, continuous variables were presented as median (95% confidence interval [CI]/interquartile range [IQR]) or mean (SD), and categorical variables were presented as total number (percentage). Continuous variables were compared using the two-tailed Mann-Whitney U test, whilst the two-tailed χ2 test with Yates' correction was used to test 2 × 2 contingency data. Univariable Cox regression was used to identify significant predictors of the primary and secondary outcomes. Propensity score matching was performed to match SGLT2I users against DPP4I users in a 1:1 ratio based on baseline age, sex, prior comorbidities, and non-SGLT2I/DPP4I medications, using a nearest neighbour matching strategy. Multivariable Cox models adjusting for significant risk factors among demographics, past comorbidities, non-SGLT2I/DPP4I medications, subclinical biomarkers, HbA1c, and fasting glucose were used to identify the treatment effects of SGLT2I vs. DPP4I on the above adverse outcomes. Cause-specific and subdistribution hazard models were fitted to account for possible competing risks. Lastly, subgroup analyses of the drug exposure effects were performed by age (≤65 and >65 years) and sex. A standardized mean difference (SMD) of less than 0.2 between the treatment groups post-weighting was considered negligible. The hazard ratio (HR), 95% CI, and P-value were reported. Statistical significance was defined as a P-value < 0.05. The statistical analysis was performed with RStudio software (Version 1.1.456) and Python (Version 3.6).

Baseline characteristics
In this study, patients with type 2 diabetes mellitus and use of either SGLT2I or DPP4I from 1 January 2015 to 31 December 2020 were included (Table 1). Patients with use of both classes, or with prior HF diagnoses or admissions due to HF, or with anti-HF drugs (e.g. beta-blockers for HF and diuretics for HF), were excluded. After exclusion, 41 994 patients (58.89% males, median age at admission of 58 years, IQR: 51.2-65.3) fulfilled the eligibility criteria for subsequent analysis (Figure 1). The study cohort has a median follow-up duration of 5.6 years (IQR: 5.32-5.82). Propensity score matching (1:1) between SGLT2I and DPP4I users using the nearest neighbour search strategy with a 0.1 caliper was performed (Supporting Information, Figure S1). Bootstrapping procedures were performed for the propensity matching estimates, and the bootstrapped standard errors (replications = 50) were <0.001.
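As an illustration of the matching step described above, the following is a minimal sketch of 1:1 greedy nearest-neighbour propensity score matching with a 0.1 caliper; it is a simplified stand-in for the study's procedure (illustrative covariates, no bootstrapping), not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Illustrative data: rows = patients, t = 1 for SGLT2I users, 0 for DPP4I users.
n = 2000
X = rng.normal(size=(n, 5))                      # age, sex, comorbidities, ...
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment assignment

# Propensity scores from a logistic regression on the baseline covariates.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

caliper = 0.1  # maximum allowed propensity-score distance, as in the study
treated, controls = np.where(t == 1)[0], np.where(t == 0)[0]
available = set(controls)
pairs = []

# Greedy 1:1 nearest-neighbour matching without replacement.
for i in treated:
    if not available:
        break
    cand = np.array(sorted(available))
    j = cand[np.argmin(np.abs(ps[cand] - ps[i]))]
    if abs(ps[j] - ps[i]) <= caliper:
        pairs.append((i, j))
        available.remove(j)

print(f"matched {len(pairs)} SGLT2I/DPP4I pairs out of {treated.size} treated")
```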
Together, these indicated that no confounding characteristics remained significantly imbalanced after propensity matching.

Significant predictors of the study outcomes
The cumulative incidence curves for new-onset HF, MI, cardiovascular mortality, and all-cause mortality stratified by SGLT2I or DPP4I use for the matched cohort are shown in Figure 2. Lower incidences of all of these outcomes were observed for SGLT2I users compared with DPP4I users. Univariable Cox regression was applied to identify significant predictors of the study outcomes (Supporting Information, Tables S2 and S3). In the matched cohort, SGLT2I use was associated with lower risks of all four study outcomes after multivariable adjustment (Table 2). SGLT2I use remained a significant predictor of all four study outcomes (HR < 1, P < 0.001). To evaluate the predictiveness of the models, different sensitivity analyses were performed. Firstly, a 1 year lag time between treatment initiation and study outcomes was applied (Supporting Information, Table S4). Secondly, competing risk analyses using cause-specific hazard models and subdistribution hazard models were applied (Supporting Information, Table S5). Thirdly, different propensity score approaches were used to evaluate the effects of the matching approach on the analysis, including propensity score stratification, inverse probability of treatment weighting (IPTW), and stable inverse probability of treatment weighting (SIPTW) (Supporting Information, Table S6). All of these analyses demonstrated that SGLT2I use was associated with lower risks of new-onset HF, MI, cardiovascular mortality, and all-cause mortality.

Discussion
The main finding of the present study is that, using DPP4I as the reference, SGLT2I use was associated with lower risks of new-onset HF and MI, cardiovascular mortality, and all-cause mortality. Our findings are largely consistent with existing studies. A network meta-analysis of 236 trials has reported the superior cardiovascular protective effects of SGLT2I against DPP4I when users of either medication are compared against the control group. However, the control groups were not matched and no direct comparison was made. 14 A recent study evaluating the cardiovascular effects of SGLT2I and DPP4I amongst cardiorenal disease-free diabetic patients shows that SGLT2I users have a lower risk of HF. 17 However, this study found the effect of SGLT2I on the prevention of acute MI to be neutral, which may be explained by the inherent difference between patients with renal failure and the general population. With structured follow-up and close monitoring, patients with renal failure would have their cardiovascular risk factors optimized as part of their disease management. Moreover, recent meta-analyses have reported the benefits of SGLT2I in preventing cardiac remodelling in HF patients regardless of glycaemic status 22 and reducing major clinical events in patients with established HF, 23 with a neutral effect on arrhythmic outcomes. 24 Furthermore, a meta-analysis including more than 34 000 patients found that the protective effect of SGLT2I on major cardiovascular adverse events of atherosclerotic origin is limited to patients with established atherosclerotic disease. 25 The difference in the proportion of patients with established atherosclerosis may explain the different effects of SGLT2I on MI observed. The present study demonstrates that the beneficial cardiovascular effects of SGLT2I persist in diabetic patients with pre-existing cardiovascular impairment.
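For completeness, the weighting schemes used in the propensity-score sensitivity analyses described above (IPTW and stable IPTW) can be sketched as follows; this is a generic reconstruction with a plain logistic-regression propensity model on illustrative covariates, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative cohort: X = baseline covariates, t = 1 for SGLT2I, 0 for DPP4I.
n = 5000
X = rng.normal(size=(n, 4))
t = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1]))))

# Propensity score e(x) = P(treatment = 1 | x).
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# IPTW: weight each subject by the inverse probability of the treatment received.
iptw = t / ps + (1 - t) / (1 - ps)

# Stabilized ("stable") IPTW: multiply by the marginal treatment probabilities
# to reduce the variance inflation caused by extreme weights.
p_treat = t.mean()
siptw = t * p_treat / ps + (1 - t) * (1 - p_treat) / (1 - ps)

print("mean IPTW:", iptw.mean(), " mean SIPTW:", siptw.mean())  # SIPTW mean ~ 1
# These weights would then enter a weighted Cox model to re-estimate the
# hazard ratios, as in the sensitivity analyses described above.
```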
There are several hypotheses for the mechanisms underlying the cardiovascular-protective effects of SGLT2I. First of all, the modulatory effect of SGLT2I on the proximal tubules results in glucosuria and natriuresis, thus lowering the preload and the resulting stress on the ventricles. 26 It is speculated that SGLT2I has a unique effect of selectively contracting the interstitial fluid without affecting the intravascular volume, making it particularly useful in the prevention of HF. 27 This hypothesis is supported by studies comparing the vascular effects of dapagliflozin and bumetanide, where dapagliflozin has been shown to have little effect on the intravascular volume. 28,29 Moreover, inhibition of the sodium-hydrogen ion exchanger in the myocardium, which is activated in HF to increase intracytoplasmic sodium and calcium levels, was also hypothesized to be part of the underlying mechanism. 30,31 However, because SGLT2 receptors are absent in the heart, the exact inhibitory mechanism remains unclear. Other hypotheses on the anti-fibrosis and adipokine-reducing effects, which are effective against both HF and MI, suggest that the cardiovascular-protective effects of SGLT2I may involve multiple biochemical pathways and thus protect against different cardiovascular diseases. 27,32 The multiple processes involved in the cardiovascular-protective effect of SGLT2I may also explain its superior outcome against DPP4I.

Figure 2 Cumulative incidence curves for heart failure, myocardial infarction, cardiovascular mortality, and all-cause mortality stratified by SGLT2I or DPP4I use in the matched cohort. DPP4I, dipeptidyl peptidase-4 inhibitors; SGLT2I, sodium-glucose cotransporter-2 inhibitors.

Whilst previous studies reported that the benefits of SGLT2I on cardiovascular health are mainly attributed to its protection against HF, a recent territory-wide study has shown that SGLT2I users have a lower incidence of new-onset atrial fibrillation than DPP4I users, which supports the lower cardiovascular and all-cause mortality reported in the present study. 16 This may be attributed to the anti-fibrotic effects of SGLT2I, because atrial remodelling and fibrosis are common pathogenic pathways of atrial fibrillation. 33 The favourable pleiotropic effects of SGLT2I may also improve the patients' cardiometabolic risk, thus further lowering their MI and cardiovascular mortality risk. 15 It should be noted that randomized controlled trials have reported that saxagliptin increases the hospitalization rate for HF, despite having a neutral effect on the occurrence of major cardiovascular adverse events. 34,35 Because the present study focuses on the incident occurrence of HF and MI, patients on saxagliptin were kept in the study. Amongst the 69 521 patients with type 2 diabetes mellitus, there were in total 353 patients who used saxagliptin, a low proportion of 0.51%.

Limitations
There are several limitations to the present study. Firstly, inherent information bias with a risk of under-coding and coding errors should be noted, given its observational and retrospective nature. However, the difference in patient characteristics, past comorbidities, and other medication usage between SGLT2I/DPP4I users and controls was addressed through matching using propensity scores, although residual bias may remain. There are also patients with missing data for the laboratory parameters because not all blood tests were routinely performed for all.
Secondly, we were unable to access important lifestyle predictors of cardiovascular adverse events, such as body mass index, smoking, and alcoholism. Thirdly, coding for clinical diagnoses of HF was used, but echocardiographic data are not coded in the administrative database, and therefore different types of HF based on ejection fraction could not be examined. Finally, DPP4I use is associated with an increased risk of HF compared with placebo, and therefore this study could not distinguish between whether gliptins cause HF and whether SGLT2I reduce HF.

Conclusions
SGLT2 inhibitors are protective against adverse cardiovascular events including new-onset HF, MI, cardiovascular mortality, and all-cause mortality. SGLT2I should be prescribed preferentially, taking into consideration individual cardiovascular and metabolic risk profiles in addition to drug-drug interactions.

Conflict of interest
None declared.

Funding
None.

Author contributions
Jiandong Zhou and Sharen Lee: conception of study and literature search, preparation of figures, study design, data collection, data contribution, statistical analysis, data interpretation, manuscript drafting, and critical revision of the manuscript. Keith Sai Kit Leung, Abraham Ka Chung Wai, Tong Liu, Ying Liu, Dong Chang, Wing Tak Wong, Ian Chi Kei Wong, and Bernard Man Yung Cheung: conception of study and literature search, data collection, data contribution, critical revision of the manuscript, and study supervision. Qingpeng Zhang and Gary Tse: conception of study and literature search, study design, data collection, data analysis, data contribution, manuscript drafting, critical revision of manuscript, and study supervision.
2021-11-21T20:06:28.087Z
2021-11-21T00:00:00.000
{ "year": 2022, "sha1": "b342c82663d90619cc2f3d5de7dcbc2b1ea0ddd1", "oa_license": "CCBY", "oa_url": "https://discovery.ucl.ac.uk/10144284/1/ESC%20Heart%20Failure_2022_Zhou.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "83567403b523e1407ce2a19ae5a1eb75eeaedc93", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253357028
pes2o/s2orc
v3-fos-license
Method for Improving Range Resolution of Indoor FMCW Radar Systems Using DNN

Various studies on object detection are being conducted, and in this regard, research on frequency-modulated continuous wave (FMCW) RADAR is being actively pursued. FMCW RADAR requires fine range resolution to detect objects accurately. However, fine range resolution requires a wide modulation bandwidth, which comes at a prohibitively high cost. To address this issue, we propose a two-step algorithm that detects the location of an object through a DNN using multiple low-cost FMCW RADARs. The algorithm first infers the sector by measuring the distance to the object with each FMCW RADAR and then measures the position on the grid within the inferred sector. This improves the range resolution beyond what the modulation bandwidth alone would allow. Additionally, to detect multiple targets, we propose a Gaussian filter. Multiple targets are detected through an order-statistic constant false-alarm rate (OS-CFAR) detector, and there is an 11% probability that multiple targets cannot be detected. For the lattice structure proposed in this paper, the performance of the proposed algorithm was compared with that of existing works with respect to the cost function. The trade-off between performance and complexity was also examined at equal complexity and at equal performance, and a performance improvement of up to five-fold compared with previous papers was confirmed. In addition, multi-target detection is demonstrated in this paper. Through MATLAB simulation and actual measurement on a single target, the RMSEs were 0.3542 and 0.41002 m, respectively, and through MATLAB simulation and actual measurement on multiple targets, the RMSEs were confirmed to be 0.548265 and 0.762542 m, respectively. This confirms that the algorithm works on real RADAR.

Introduction
RADAR can effectively detect objects in various weather conditions and external environments compared with other sensors such as cameras and LiDAR [1][2][3]. In addition, RADAR uses millimeter waves to obtain information on objects, so it can use miniature, lightweight antennas and transceivers and operate with low power consumption [4,5]. For this reason, RADAR can be effectively used in a variety of fields that require object detection [6,7]. RADAR can be divided into continuous-wave (CW) RADAR and pulse RADAR according to the transmitted and received signals. Pulse RADAR transmits radio waves in the form of pulses and then receives the signals returning from the target during the time between pulses. Although it has a high range resolution for position measurement, its excessively wide frequency band can overlap with that of existing systems, and large-sized hardware is required [8,9]. CW RADAR can detect moving objects by use of continuous waves, and it can also measure the speed of objects. However, it is not possible to know exactly where an object is located because the distance information is not available [10,11]. Frequency-modulated continuous wave (FMCW) RADARs compensate for the shortcomings of CW RADARs. FMCW RADAR has the advantages of being able to measure object distance, having a modulation bandwidth of up to 1 GHz, and being able to detect multiple objects. For these reasons, FMCW RADAR is suitable for monitoring indoor objects.
Further details on these RADARs can be found in Emanuele et al. [12]. Range resolution refers to the level of detail with which an FMCW RADAR can measure objects. It is calculated as shown in Equation (1),

∆r = c / (2B),   (1)

where ∆r is the range resolution, c is the speed of light, and B is the bandwidth of the FMCW RADAR. As this shows, the range resolution of the FMCW RADAR varies according to the bandwidth [13]. To detect the various situations occurring indoors, a sensor capable of more detailed measurement is needed; therefore, the range resolution value of the FMCW RADAR should be small (i.e., the measurement should be more detailed). However, an FMCW RADAR with a small range resolution value requires a large modulation bandwidth, which makes it too expensive to use as an indoor measurement sensor. Currently, algorithms have been developed to measure indoor conditions using relatively inexpensive passive infrared (PIR) sensors, LEDs, and photodiodes [14]. However, since PIR sensors detect temperature changes using infrared rays, objects that do not move, or move only slightly, are not detected, and algorithms that detect using LEDs and photodiodes are easily influenced by external environmental factors such as light [15]. These considerations have increased interest in the use of RADAR, which is relatively less sensitive to the external environment. Accordingly, numerous studies have been conducted to improve the range resolution of low-cost FMCW RADARs with a small bandwidth [16][17][18]. Algorithms have been developed to improve the range resolution by expanding the number of points of the time-domain signal containing the beat frequency [19]. A typical method is to expand the number of points using mirror padding, in which a copy of the signal is appended after the original signal. Improving the range resolution in this way has the disadvantage that, after performing a fast Fourier transform (FFT) to estimate the distance to the object, main lobes and side lobes appear, which reduces the ability to distinguish objects. In addition to these methods, other algorithms have been developed to minimize the side lobes of the signal [20]. In the adaptive mirror padding (AMP) method, the number of points is expanded by locating the poles of the beat frequency signal and copying the signal based on the pole locations. The phase correct padding (PCP) method checks the phase of the beat frequency signal, identifies the part of the original signal most similar to its ending through the slope and magnitude of the phase, and extends the signal accordingly to increase its number of points. These methods have the disadvantage that there is a limit to how much the range resolution can be improved using only the information in the beat frequency signal. Another algorithm that improves range resolution uses the STFT to convert a signal into a spectrogram, extracts features, and detects situations through an SVM [21]. That algorithm can detect objects with high precision, but it is difficult to apply to multiple targets; therefore, it is not suitable for detecting the various situations in indoor environments.
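To make Equation (1) concrete, the short sketch below evaluates the range resolution for a few modulation bandwidths and shows the basic mirror-padding idea mentioned above (appending a reversed copy of the beat signal to increase the number of FFT points); the specific bandwidth values are illustrative, not taken from the paper.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def range_resolution(bandwidth_hz):
    """Equation (1): finer resolution requires a wider modulation bandwidth."""
    return C / (2 * bandwidth_hz)

for bw in (150e6, 250e6, 1e9):
    print(f"B = {bw/1e6:6.0f} MHz -> range resolution = {range_resolution(bw):.2f} m")

def mirror_pad(beat_signal):
    """Basic mirror padding: append the time-reversed signal to double the
    number of samples before the FFT (denser frequency grid, same bandwidth)."""
    x = np.asarray(beat_signal)
    return np.concatenate([x, x[::-1]])

beat = np.cos(2 * np.pi * 0.1 * np.arange(256))   # toy beat-frequency signal
print(len(beat), "->", len(mirror_pad(beat)), "samples before FFT")
```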
Recently, algorithms for improving range resolution through artificial intelligence have been developed [22,23]. Through deep learning, the time-domain signal containing the beat frequency is learned, and the number of signal points is expanded to improve the range resolution. However, it is difficult to measure spatially with this method. In addition to these methods, algorithms have also been developed to measure objects more accurately through 2D DNNs. However, this approach does not increase the range resolution but reduces the estimation error of the measured object. Nevertheless, detecting multiple objects with this method is difficult, and it is not suitable for indoor monitoring because it is difficult to detect the more diverse situations or various objects found in an indoor environment. Table 1 summarizes the advantages and disadvantages of the existing studies described above. First, passive infrared (PIR) is an existing type of sensor [14]. Field light is an existing method for detection using LEDs and photodiodes [15]. Mirror padding is a study that expands signals using mirror padding [20]. AMP and PCP extend signals [20]. Breath monitoring (BM) extracts features of the signal after a Short-Time Fourier Transform (STFT) and detects the situation through a Support Vector Machine (SVM) [21]. Neural Networks (NNs) extend time signals through artificial intelligence [22], and finally, deep learning-2D (DL-2D) reduces the estimation error of objects measured through 2D DNNs [23]. Through this, the performance of the existing studies can be confirmed.

In this paper, we use two inexpensive FMCW RADARs to build a 2D sector and grid structure. The difference between this paper and previous papers that improved performance through DNNs [24,25] is that the complexity is improved. In the proposed method, we learn the sector position (the approximate position of the object) and the grid position (the detailed position of the object) from the distances measured by the two FMCW RADARs, and we suggest ways to improve the complexity. We also propose a Gaussian filter method that can separate objects when multiple objects are detected. In this study, the distance to an object is measured using two FMCW RADARs in an indoor space of 10 m × 10 m, and the position of the object is inferred in two steps using DNNs. The two-step algorithm reduces the amount of computation compared with not using it. If a grid is set at 0.5 m intervals in a space of 10 m × 10 m and one inference is calculated for each grid point, the inference must evaluate 441 labels, whereas the two-step inference requires only 50 label evaluations in total. Accordingly, the amount of computation is reduced by about nine times, showing that the complexity is improved. The remainder of this paper is organized as follows. Section 2 describes the basic theory of FMCW RADAR and how distances are estimated. Section 3 describes the 2D measurement method of FMCW RADAR in indoor situations. Section 4 describes how to separate multiple targets through a Gaussian filter. In Section 5, the algorithm is verified by comparing the simulation results of the proposed technique with actual measurement results. Finally, Section 6 presents the conclusions of the study.

FMCW RADAR Fundamentals and Distance Estimation Method
FMCW RADAR can transmit sawtooth or triangular waveforms. In this paper, we describe the sawtooth wave because we use a sawtooth-wave FMCW RADAR. Figure 1 shows the FMCW sawtooth wave. The horizontal axis of the plot is time, the vertical axis is frequency, the transmitted signal is the green line, and the signal received after reflecting off the stationary object is the red line.
In this study, there is a frequency difference between the received signal and the transmitted signal due to the delay time, which we refer to as the beat frequency. Equation (2) gives the corresponding distance: the time delay between the transmitted and received signals corresponds to the distance to the object measured by the FMCW RADAR, which can be obtained from the beat frequency,

R = cτ/2 = (c · Ts · ∆f) / (2B),   (2)

where the sweep time Ts and the bandwidth B are fixed parameters of the FMCW RADAR, c is the speed of light, ∆f is the frequency difference between the transmitted and received waveforms, and τ is the time difference between the transmitted and received waveforms. The FMCW RADAR structure is shown in Figure 2. First, the wave generator produces a linear transmission waveform through the PLL. The frequency of the transmission waveform is then modulated through the VCO, amplified by the AMP, and transmitted through the Tx antenna. The transmitted signal hits the target and is reflected, and the reflected signal is received by the Rx antenna. The received signal is amplified through a low-noise amplifier (LNA) and multiplied with the transmitted signal in a mixer to obtain the beat frequency, which is then extracted with a filter. The extracted beat frequency signal is converted into a digital signal, and DSP processing is performed. In this study, the DSP process converts the beat frequency signal into the frequency domain through an FFT and then measures the distance. The measured distance of the FMCW RADAR can be confirmed by the method used by Seongmin et al. [20], and the channel information for the corresponding FMCW RADAR can be found in Chang-Heng et al. [26] and Peli et al. [27].
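The distance estimation described above (FFT of the beat signal, then Equation (2)) can be sketched as follows; the chirp parameters are placeholders chosen only to make the example self-contained, not the radar settings used in the paper.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
B = 250e6        # modulation bandwidth (Hz)  -- illustrative
T_SWEEP = 1e-3   # sweep time (s)             -- illustrative
FS = 256e3       # ADC sampling rate (Hz), 256 samples per sweep
N = 256

def simulate_beat(r_true):
    """Beat signal for a single static target at range r_true (ideal, noiseless)."""
    f_beat = 2 * r_true * B / (C * T_SWEEP)    # Equation (2) rearranged for f_beat
    t = np.arange(N) / FS
    return np.cos(2 * np.pi * f_beat * t)

def estimate_range(beat):
    """FFT the beat signal, pick the strongest bin, convert back to range."""
    spec = np.abs(np.fft.rfft(beat))
    spec[0] = 0.0                               # ignore the DC bin
    f_beat = np.argmax(spec) * FS / N           # frequency of the peak bin
    return C * T_SWEEP * f_beat / (2 * B)       # Equation (2)

print(estimate_range(simulate_beat(4.0)))  # ~4 m, quantised to the range bin size
```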
Two-Dimensional Measurement Method of FMCW RADAR in Indoor Situations
In this paper, two FMCW RADARs are used to detect indoor situations. Here, the indoor situation refers to a space of 10 m × 10 m. Figure 3 shows the valid (distinguishable) target positions set in this paper for the indoor environment. The indoor environment was divided into sectors at 2-m intervals, giving 25 sectors. Within each sector, grid points were placed at intervals of 0.5 m, resulting in 25 grid points per sector. In this study, one FMCW RADAR is located on the left side and one on the right side, and the locations of the FMCW RADARs are indicated by triangles. The indoor space was assumed to be 10 m × 10 m, but the method can be applied to larger spaces as well, in which case the number of sectors may be adjusted accordingly. In addition, the distance between the RADARs can be as wide as the maximum indoor dimension, and it should be at least half the wavelength of the millimeter-wave MIMO RADAR, which corresponds to approximately 6.2-mm spacing. In this paper, however, the two FMCW RADARs were placed 6 m apart for convenience in identifying the pattern. The wavelength can be expressed by Equation (3),

λ = c / f,   (3)

where λ is the wavelength, c is the speed of light, and f is the center frequency, 24 GHz.

Figure 4 shows the object estimation algorithm. Two FMCW RADARs are used to detect objects in the corresponding indoor environment. The objects are not measured simultaneously by the two FMCW RADARs; each FMCW RADAR measures separately, and the distance information measured by each FMCW RADAR is then integrated. Thereafter, the integrated distance information is passed to the DNN to estimate the sector. Within the estimated sector, a second DNN inference is performed to confirm the result. After inference in each sector, the RMSE is measured, and the smallest value is taken as the correct answer.
Figure 5 shows the sector-finding method using the DNN. As mentioned earlier, each sector is a 2 m × 2 m space, and there are twenty-five sectors in the 10 m × 10 m indoor environment. Grid points exist at 0.5-m intervals within each sector. In this study, the inference first identifies the sector containing the object. After that, using the method shown in Figure 6, the detailed label is found within that sector. Both steps use the same DNN structure for inference. In this paper, to reduce complexity, DNN inference is therefore performed in two steps, which reduces the amount of computation compared with searching the entire area at once. Table 2 shows that searching the entire area uses 441 labels, whereas the proposed algorithm uses only 25 labels twice. As a result, there is roughly a nine-fold difference in computation.
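The two-step inference above (first classify the 2 m × 2 m sector, then the 0.5-m grid point inside it) can be sketched with any classifier. Below is a minimal, self-contained stand-in using scikit-learn MLP classifiers on synthetic (d1, d2) distance pairs rather than the paper's DNN and radar data; the radar positions, noise level, and network sizes are assumptions, and the sketch only illustrates the control flow of 25 + 25 = 50 label evaluations versus 441 for a single flat classifier.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

ROOM, SECTOR, GRID = 10.0, 2.0, 0.5                     # metres
R1, R2 = np.array([2.0, 0.0]), np.array([8.0, 0.0])     # assumed radar positions
rng = np.random.default_rng(0)

def labels(p):
    """Sector index (0..24) and grid index (0..24) for a point p = (x, y)."""
    sx, sy = int(min(p[0] // SECTOR, 4)), int(min(p[1] // SECTOR, 4))
    gx = int(np.clip(round((p[0] - sx * SECTOR) / GRID), 0, 4))
    gy = int(np.clip(round((p[1] - sy * SECTOR) / GRID), 0, 4))
    return sy * 5 + sx, gy * 5 + gx

def features(p):
    """Noisy range measurements from the two radars (stand-in for FFT features)."""
    d = np.array([np.linalg.norm(p - R1), np.linalg.norm(p - R2)])
    return d + rng.normal(0, 0.05, 2)

# Synthetic training set.
pts = rng.uniform(0, ROOM, size=(8000, 2))
X = np.array([features(p) for p in pts])
sec_y, grid_y = np.array([labels(p) for p in pts]).T

sector_net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, sec_y)
grid_nets = {s: MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
                  .fit(X[sec_y == s], grid_y[sec_y == s]) for s in range(25)}

def locate(p):
    """Two-step inference: sector first, then the grid point within that sector."""
    x = np.array([features(p)])
    s = int(sector_net.predict(x)[0])       # step 1: which 2 m x 2 m sector
    g = int(grid_nets[s].predict(x)[0])     # step 2: which 0.5 m grid point
    return s, g

print(locate(np.array([3.3, 4.1])), "true:", labels(np.array([3.3, 4.1])))
```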
Table 3 shows the difference in RMSE between the whole-area-finding method (one-step DNN) and the two-step DNN method. The RMSE was calculated over 100 repetitions. In the case of the two-step DNN, if the wrong sector is inferred in the first step, the second step is still performed to find the label, and the difference between the inferred label and the actual location of the label is then computed. As a result, the RMSE is about 0.3823 m for the whole-area search and about 0.3542 m for the two-step method.

Table 2. Computational complexity (number of labels) of the whole-area search and the proposed algorithm.
  Whole-sector finding (1-step DNN): 441
  Two-step DNN: 50

Table 3. RMSE of the whole-area search and the proposed two-step DNN (100 repetitions).
  Whole-sector finding (1-step DNN): 0.3823 m
  Two-step DNN: 0.3542 m

Separate Multiple Targets through Gaussian Filters
In this paper, when there are multiple targets, we separate them and infer each of them through the DNN. In this case, there may be situations where it is unknown whether two objects are actually one object. In Figures 7-9, the y-axis is the magnitude of the frequency-domain signal (which can be regarded as its amplitude), normalized by its maximum value, and the x-axis is the FFT point, starting from 0. As can be seen from Figure 7, if the range resolution is 1 m, it is difficult to see the difference between Figure 7a, where an object at 3.5 m is detected, and Figure 7b, where objects are detected at 3 and 4 m, from the frequency-domain signal alone. Therefore, when there are multiple targets, whether they can be separated depends on the range resolution of the FMCW RADAR that detects them. Each target can be detected separately only according to the range resolution of that FMCW RADAR, and the targets must be separated by at least twice the range resolution to be detected individually. In this algorithm, when the distance between targets is less than twice the range resolution, it is difficult to separate the targets and to infer them with the DNN from the pattern of the signal. As illustrated in Figure 7, if the range resolution is 1 m and Figure 8a shows a peak value at 5 m with some value measured at 4 and 6 m, it is difficult to identify how many targets exist, and the targets cannot be separated for DNN inference. In contrast, Figure 8b shows that, with a range resolution of 1 m, targets between 3 and 4 m and between 5 and 6 m, although not known exactly, are separable and can be separated for DNN inference. In this study, if the targets are at least twice the range resolution apart, the pattern of the signal must be exploited while the remaining values are reduced. Therefore, we propose a Gaussian filter. The filter is based on the Gaussian distribution, which is normally scaled so that its total sum is 1; in this paper, however, the peak value is adjusted to be 1 so that the pattern can be inferred. In addition, when the Gaussian filter is applied, its peak of 1 is aligned with the maximum value of the pattern, and the influence of the remaining values is reduced according to the Gaussian distribution. This allows an object to be detected more accurately by extracting the main-lobe feature of the object with the Gaussian filter.
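A minimal sketch of the peak-normalized Gaussian filter described above is given below: a Gaussian window centred on the strongest FFT bin, scaled so its peak equals 1, is multiplied onto the normalized spectrum to keep the main lobe of the nearest target while suppressing the rest. The window width is an assumed parameter; the paper designs its filter in MATLAB and does not state the exact value here.

```python
import numpy as np

def gaussian_main_lobe(spectrum, width_bins=3.0):
    """Isolate the main lobe of the strongest target in a magnitude spectrum.

    The Gaussian is normalized so its *peak* is 1 (not its sum), as described
    in the text, so the selected main lobe keeps its original amplitude.
    """
    mag = np.asarray(spectrum, dtype=float)
    mag = mag / mag.max()                      # normalize spectrum to its max
    k0 = np.argmax(mag)                        # bin of the nearest/strongest target
    bins = np.arange(mag.size)
    window = np.exp(-0.5 * ((bins - k0) / width_bins) ** 2)   # peak value = 1
    return mag * window                        # main lobe kept, side lobes suppressed

# Toy spectrum with two targets (bins 30 and 55) plus low-level clutter.
spec = 0.05 * np.random.rand(128)
spec[30], spec[55] = 1.0, 0.7
first = gaussian_main_lobe(spec)
residual = spec / spec.max() - first           # what remains for the second target
print(np.argmax(first), np.argmax(residual))   # 30 (first target), 55 (second)
```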
As mentioned above, the Gaussian filter extracts only the main lobe by suppressing the side-lobe values. The corresponding Gaussian filter was designed using the MATLAB model and set up so that the FMCW RADAR measurements are reproduced accurately. Multiple objects within one range resolution cell cannot be detected; consequently, objects that cannot be resolved by the low-cost FMCW RADAR cannot be detected. Thus, when three or more objects are present, they cannot all be separated because of the side lobes of the first and second closest objects. In addition, when the distance between objects is very large, it is difficult to separate the signals because the magnitudes of the signals measured by the RADAR differ. This can be seen in more detail in Figure 9. In this study, when two targets are generated as shown in Figure 9a, a Gaussian filter matching the corresponding pattern is generated as shown in Figure 9b. As can be seen in Figure 9c, the target closest to the RADAR is separated using the Gaussian filter and is compared with the frequency-domain signal of a single target at the same distance measured by the FMCW RADAR.

In the proposed algorithm, it is necessary to check whether the corresponding pattern is a multiple target or a single target. Therefore, in this paper, to identify multiple targets, we use the constant false alarm rate (CFAR) algorithm [28][29][30][31], which is used to check whether an object has been detected. The CFAR algorithm sets the false alarm rate, measures the various fluctuating noises and distortions in the average clutter power, and then multiplies by a coefficient corresponding to the false alarm rate to adjust the threshold variably so that the false alarm rate stays fixed. The threshold value is compared with the magnitude of the frequency-domain signal: if the signal is greater than the threshold, a target is declared; if it is smaller, it is judged not to be a target. In this way, multiple targets can be detected. Typical CFAR algorithms are the order-statistic (OS) CFAR and the cell-averaging (CA) CFAR, and in this paper OS-CFAR is used to detect multiple targets. The algorithmic flow chart for separating multiple targets is shown in Figure 10; through this procedure, multiple targets can be separated and thus detected.
When multiple targets are separated using OS-CFAR, the objects measured in each sector and grid do not fall exactly on the range resolution cells. Therefore, there is a difference between the actual distance from the object to the FMCW RADAR and the distance measured by the FMCW RADAR. Thus, targets were detected in OS-CFAR through the difference between the actual distances, in units of 1 m, and the peak values of the distances measured by OS-CFAR, and objects that could not be detected were excluded. In addition, in multi-target cases, if peak points are detected by OS-CFAR with a separation of 2 m, the objects are treated as multiple targets.

When multiple targets are separated using OS-CFAR, there is also a probability that OS-CFAR itself fails to detect them. The OS-CFAR settings follow Seongmin et al. [20]: False Alarm Rate = 10^-5, Training Cells = 32, Guard Cells = 2, and Rank = 24. When OS-CFAR is evaluated with the proposed algorithm, the detection failure rate for a single target is 8.64%, and the detection failure rate for multiple targets is 11.82%.
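A compact order-statistic CFAR detector consistent with the parameters quoted above (32 training cells, 2 guard cells, rank 24, false alarm rate 10^-5) is sketched below; the threshold factor is solved from the usual square-law/exponential-noise OS-CFAR relation, which is an assumption about the noise model rather than something stated in the paper, so treat this as illustrative.

```python
import numpy as np

N_TRAIN, N_GUARD, RANK, PFA = 32, 2, 24, 1e-5   # parameters quoted in the text

def os_cfar_alpha(n=N_TRAIN, k=RANK, pfa=PFA):
    """Scaling factor T solving Pfa = prod_{i=0..k-1} (n-i)/(n-i+T)
    (standard OS-CFAR result under the square-law / exponential-noise model)."""
    def pfa_of(T):
        i = np.arange(k)
        return np.prod((n - i) / (n - i + T))
    lo, hi = 0.0, 1e4
    for _ in range(200):                          # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pfa_of(mid) > pfa else (lo, mid)
    return 0.5 * (lo + hi)

ALPHA = os_cfar_alpha()

def os_cfar(power, n=N_TRAIN, g=N_GUARD, k=RANK, alpha=None):
    """Return indices of range bins declared as targets by OS-CFAR."""
    alpha = ALPHA if alpha is None else alpha
    half = n // 2
    hits = []
    for i in range(half + g, len(power) - half - g):
        train = np.r_[power[i - half - g:i - g], power[i + g + 1:i + g + 1 + half]]
        noise = np.sort(train)[k - 1]             # k-th order statistic
        if power[i] > alpha * noise:
            hits.append(i)
    return hits

# Toy power profile: exponential noise floor plus two targets at bins 40 and 60.
rng = np.random.default_rng(3)
p = rng.exponential(1.0, 256)
p[40] += 400.0
p[60] += 250.0
print("detections at bins:", os_cfar(p))
```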
Experimental Results
In this paper, the performance of the proposed algorithm was verified using the MATLAB tool and an FMCW RADAR. Figure 11 shows the MATLAB simulation block diagram. In this study, multipath fading, phase error at 24 GHz, and AWGN were added to implement a simulation environment similar to the actual situation. As shown in Table 4, the simulation parameters were prepared by referring to the parameters of the actual FMCW RADAR.

MATLAB Simulation
MATLAB simulations for the DNNs are performed as shown in Figure 12. In this study, the FMCW RADAR frequency-domain signal extracted through MATLAB is used for DNN learning. The performance is then measured through distance differences, i.e., through the RMSE between the inferred label of the data and the true label of the corresponding data. In this study, the signals use randomly generated data in each grid; the network is first trained with data free of multipath, AWGN, and phase error to check the results. Subsequently, training sets combining the distortion-free data with distorted data are used, where the FMCW RADAR outputs for 5 dB, 6 dB, 7 dB, 8 dB, 9 dB, and 10 dB are combined, respectively. In this study, the performance is checked by mixing 5%, 10%, 15%, and 20% of distorted data into the distortion-free data, respectively. This is confirmed in Figure 13. Assuming that there is data in the grid, test data are generated at 5 dB, 6 dB, 7 dB, 8 dB, 9 dB, and 10 dB and then used for inference. In this study, the RMSE is measured, and as a result, it can be confirmed that the data mixed at 20% give the best performance. Measurement below 4 dB is not possible because the data are too corrupted.
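The data-mixing step described above (adding AWGN at a target SNR and blending a fixed fraction of distorted examples into the clean training set) can be sketched as follows; the function names and the 20% mixing ratio in the demo call are illustrative, and the multipath and phase-error distortions are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

def add_awgn(signal, snr_db):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    x = np.asarray(signal, dtype=float)
    p_sig = np.mean(x ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(p_noise), x.shape)

def mix_training_set(clean, distort_ratio=0.2, snr_db_choices=(5, 6, 7, 8, 9, 10)):
    """Replace a fraction of clean beat signals with AWGN-distorted copies."""
    data = [np.array(sig, copy=True) for sig in clean]
    n_distort = int(round(distort_ratio * len(data)))
    for idx in rng.choice(len(data), size=n_distort, replace=False):
        data[idx] = add_awgn(data[idx], rng.choice(snr_db_choices))
    return data

# Toy clean dataset: 100 beat signals of 256 samples each.
clean = [np.cos(2 * np.pi * 0.05 * np.arange(256)) for _ in range(100)]
mixed = mix_training_set(clean, distort_ratio=0.2)   # 20% distorted, as in Figure 13
print(len(mixed), "signals,", int(0.2 * len(mixed)), "of them distorted")
```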
Performance is measured by comparing the DNN structures proposed in previous studies with the one proposed in this paper. In this study, since the existing papers do not include an algorithm to divide the sector, it is assumed that a 0.5-m lattice structure exists over the entire 10 m × 10 m grid structure, excluding the corresponding part, and the result is verified by simulation with MATLAB. The 1D DNN structure simulates the DNN structure described in paper [23] with MATLAB to confirm the results with the dataset prepared for the DNN structure proposed in this paper. Then, the results were confirmed for the DNN structure used for measuring a moving person in a room in paper [32] under the same environment. In Table 5, the results show that the RMSE values of papers [23] and [32] are 0.515734 m and 0.48262 m, respectively, so the performance of the DNN structure proposed in this paper is better in the corresponding environment. In the C3 DNN structure, the convolution operation is performed three times. Single Target Fifteen random data points per grid are extracted from a single target, and 100 learning data points are extracted to proceed with inference. In this case, an ideal indoor situation with an AWGN SNR of 10 dB is assumed and the simulation is performed. We measure the RMSE in each sector and check the results. In this study, the RMSE is measured 20 times per sector and calculated as the average of these values; this can be confirmed in Figure 14. The average of the measured RMSE values in each sector was obtained, confirming that this algorithm has an RMSE value of 0.2242 m for a single target. In addition, the RMSE value changes depending on the sector: in this simulation the error is severe for objects within 2 m and, in general, an object on the right has a higher RMSE than an object on the left. The behaviour is similar to the localization error discussed in paper [23].
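A minimal sketch of the evaluation metric used throughout this section is given below: the RMSE between the positions inferred by the DNN and the ground-truth label positions, averaged per sector over repeated runs as described above. The array shapes and the synthetic numbers are assumptions for illustration only.

```python
import numpy as np

def rmse(pred_xy, true_xy):
    """Root-mean-square position error (m) between DNN inferences and labels."""
    return float(np.sqrt(np.mean(np.sum((pred_xy - true_xy) ** 2, axis=1))))

def average_sector_rmse(per_run_rmse):
    """per_run_rmse: array of shape (n_sectors, n_runs); average the runs of each
    sector first, then average over sectors, mirroring the '20 runs per sector,
    then averaged' procedure above."""
    per_sector = per_run_rmse.mean(axis=1)
    return per_sector, float(per_sector.mean())

rng = np.random.default_rng(1)
true_xy = rng.uniform(0.0, 10.0, size=(15, 2))            # 15 test points in the 10 m grid
pred_xy = true_xy + rng.normal(0.0, 0.2, size=(15, 2))    # stand-in for DNN output
print(f"single-run RMSE ~ {rmse(pred_xy, true_xy):.3f} m")
```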
Multi Target To detect multiple targets, when measuring the distance to the object with the two FMCW RADARs, we find targets that are each more than 2 m apart. In this study, for the simulation, the corresponding positions were set for two objects, as shown in Table 6. To detect the multiple targets, OS-CFAR was applied together with a Gaussian filter for the corresponding five object cases. In addition, we extract the closer target and, after DNN inference, subtract the 100 datasets for that sector and label from the original signal in which the two objects were detected. The power value is measured to extract the residual signal with the lowest power, and this signal is then fed back to the DNN to confirm the result. In this study, each of the above five target cases was repeated 15 times, and the results are shown in Table 7. The RMSE is obtained as the average of the difference in distance between the inferred value of each target and its actual position, and the RMSE value for each case can be checked. In the simulation, the RMSE of the inference for the first target is output similar to the RMSE of a single target, because the main-lobe value remains alive when the measurement is performed by applying a Gaussian filter to the first target. For the second target, we subtract the dataset for the first target from the signal measured for the multiple targets and obtain the dataset for the second target as the one with the lowest power; again, we check the inference in the DNN. A value corresponding to the main lobe of the second target may be partially eliminated by distortion, such as the multipath fading and AWGN of the first target; thus, the inference error of the second target is output larger than that of the first target.
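One loose reading of the successive-extraction step above is sketched below: the strongest detection is isolated with a Gaussian window centred on its range bin, that contribution is subtracted, and the second target is then located in the lower-power residual. This Python sketch is only an interpretation of the procedure described in the text; the window width, function names, and spectrum values are assumptions, and it does not reproduce the dataset-subtraction and re-inference step performed with the DNN in the paper.

```python
import numpy as np

def gaussian_window(n, center, sigma):
    """Gaussian weighting centred on a detected range bin."""
    k = np.arange(n)
    return np.exp(-0.5 * ((k - center) / sigma) ** 2)

def split_two_targets(spectrum, sigma=3.0):
    """Isolate the strongest peak with a Gaussian window, subtract that
    contribution, and locate the remaining (lower-power) target in the residual."""
    first = int(np.argmax(spectrum))
    first_component = spectrum * gaussian_window(len(spectrum), first, sigma)
    residual = spectrum - first_component
    second = int(np.argmax(residual))
    return first, second, residual

spec = np.zeros(256)
spec[60], spec[95] = 20.0, 12.0                        # two targets, first one stronger
spec += np.abs(np.random.default_rng(4).normal(0, 0.5, 256))
first, second, _ = split_two_targets(spec)
print(first, second)   # expected: 60 and 95
```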
Actual Measurement The actual measurement was conducted in an empty classroom at Sejong University. Due to the space limitations of the classroom, the actual measurement was conducted in a space of 6 m × 6 m. In this study, the actual measurement was performed with a 6-m interval between the RADARs, and the measurement setup is shown in Figure 15. Figure 16 shows the actually measured area: the area where objects can be measured is the red area, and the FMCW RADARs are marked with blue triangles. This study used a commercial FMCW RADAR, and Table 8 shows the parameters for the measurement. The reason for limiting the area in which objects were measured is that, even in an empty classroom without obstacles, the measurable space was limited by walls and radiators; therefore, the actual measurement space was 6 m × 6 m. Single Target A single target was measured by selecting three points from among the areas that can be measured: the 7th grid of Sector 8, the 19th grid of Sector 12, and the 8th grid of Sector 13. Actual measurements of these grids are conducted and confirmed through the DNN; this can be seen in Table 6. The corresponding RMSE is obtained by repeating the measurement 100 times and averaging the differences in distance between the inferred value and the actual value. Case 1 is the target in the 7th grid of Sector 8, Case 2 is the target in the 19th grid of Sector 12, and Case 3 is the target in the 8th grid of Sector 13. The average RMSE of the corresponding targets was measured to be 0.41002 m. In the actual measurement, as shown in Table 9, the RMSE is higher than the simulation result. This is because the proposed algorithm was trained on the FMCW simulation results; therefore, when actual measurements are made using the proposed algorithm, the output is worse than expected. Multi Target Actual measurements were also performed on multiple targets in the measurable area. Since the distance to the target measured by each FMCW RADAR should differ by 2 m or more, the actual measurement was conducted on two objects. Afterward, the RMSE is measured to confirm the performance. The objects were placed in the 3rd and 18th grids of Sector 8 and Sector 13, respectively; 100 data points were extracted for the objects and the RMSE was obtained as the average of the distance differences between the inferred and actual grids. In this case, the RMSE of the target in Sector 8 is 0.55 m and that of the target in Sector 13 is 0.975084 m, so the average RMSE of the two targets is 0.762542 m. Performance Comparison with Existing Studies The performances of the DNN structure proposed in this paper and the DNN structures of previous studies were compared. As shown in Table 10, the RMSE performance in this paper is compared with the previous study described in Section 5.1.1; the performance verification is performed on a single target. In this case, the RMSE is the average distance difference between the inferred value and the actual value obtained by repeating the case of Label 5 of Sector 8 100 times. In Case 1 the RMSE is the same as in the proposed paper, and in Case 2 the complexity is the same as in the proposed algorithm. In Table 10, r1 and r2 represent the ratios of the RMSE and the computational complexity of the existing algorithm to those of the proposed algorithm. At this point, the complexity is determined by the number of convolutions. Finally, the cost function is determined as the square root of the product of r1 (ratio of RMSE) and r2 (ratio of computational complexity). As the complexity of the DNN increases, the object detection performance increases; therefore, this performance index is used to express the performance versus the computational complexity of the DNN, expressed as the cost function shown in Table 10.
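A minimal sketch of the comparison metric just defined is given below: r1 is the RMSE ratio and r2 the complexity ratio (existing over proposed, with complexity counted as the number of convolutions), and the cost function is their geometric mean. The numbers in the example call are hypothetical and are not the values in Table 10.

```python
import math

def cost_function(rmse_existing, rmse_proposed, convs_existing, convs_proposed):
    """Cost = sqrt(r1 * r2), the geometric mean of the RMSE ratio (r1) and the
    computational-complexity ratio (r2), complexity = number of convolutions."""
    r1 = rmse_existing / rmse_proposed
    r2 = convs_existing / convs_proposed
    return math.sqrt(r1 * r2)

# Hypothetical illustration only: an existing network with RMSE 0.5157 m and
# 5 convolutions versus a proposed network with RMSE 0.2242 m and 3 convolutions.
print(round(cost_function(0.5157, 0.2242, 5, 3), 3))
```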
Conclusions In this paper, we propose a two-step algorithm that uses two FMCW RADARs and DNN inference to improve the range resolution. The proposed algorithm estimates the sector, which is the approximate location of the object, by measuring the distance to the object with each FMCW RADAR. After estimating the sector, the exact position of the object within the estimated sector is measured through the grid and the position of each label. In this paper, the performance of the algorithm was tested in a limited space of 10 m × 10 m, assuming an indoor situation. The performance of the algorithm was validated by using MATLAB tools to generate test datasets and validation datasets for the location of objects. In addition, after building a DNN model with the dataset, performance verification was completed in terms of RMSE with an actual FMCW RADAR. Multiple targets were separated through a Gaussian filter, and it was confirmed that multiple targets can also be measured. The proposed algorithm shows almost no performance difference from those in other papers, while the cost function confirms that the amount of computation is reduced, with up to a five-fold reduction in complexity for comparable performance. This algorithm can be effectively applied to the IoT field because it can detect indoor conditions using low-cost FMCW RADAR.
2022-11-06T16:20:46.068Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "52ad14d4362585b47a0f4a74233c4b8896d82641", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/21/8461/pdf?version=1667476733", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cc6b0b0f10aa43757b57385c225ff6ba6ad96476", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
248916334
pes2o/s2orc
v3-fos-license
Mesalazine induces apoptosis via mitochondrial pathway in K562 cell line Inflammation is an initial response of the body to infection, and the relationship between inflammation and cancer has been established. Nuclear factor kappa B (NF-κB) is a central factor in inflammation, and its activity contributes to tumor progression and apoptosis prevention, consequently leading to cancer promotion. As a result, NF-κB inhibitors can cause apoptosis. In this study, the effect of mesalazine as an NF-κB inhibitor on the growth and apoptosis of K562 cells has been investigated. The K562 cells were first cultured in RPMI-1640 medium containing 10.00% fetal bovine serum. After that, they were treated for 72 hr with different concentrations of mesalazine (20.00, 40.00, 60.00 and 80.00 μM mL-1). The MTT assay was used to evaluate cell viability. Hoechst staining and RT-PCR of apoptosis-related genes (Bcl-2 and Bax) were carried out to illustrate apoptosis induction, and immunocytochemistry was performed to investigate changes in the c-Myc protein level. According to the results of the MTT assay, all of the applied mesalazine concentrations decreased K562 cell viability. Hoechst staining showed that fragmented nuclei increased, indicating apoptosis induction. Immunocytochemical results showed that mesalazine decreased c-Myc in treated cells. The RT-PCR results also showed an increase in Bax and a decrease in Bcl-2 expression in mesalazine-treated cells. As the results suggest, mesalazine reduces cell viability by inducing apoptosis in the K562 cell line; therefore, it can be considered a candidate for leukemia treatment. Introduction Cancer is the second leading cause of death in the world; thus, extensive research is being performed on cancer treatment. Recently, multiple therapeutic approaches have been developed for cancer treatment, including chemotherapy, immunotherapy, radiotherapy and surgery; however, the rate of cancer has continued to rise due to the complexity of the disease. 1 One of the cancer types is leukemia, a malignant clonal disorder of blood stem cells characterized by an increase not only in myeloid cells but also in peripheral erythroid cells and platelets in the blood, together with myeloid hyperplasia in the bone marrow. The average age at onset of the disease is 35 years, but all age groups, including children, are affected. The diagnosis of chronic myelogenous leukemia is usually based on the presence of the Philadelphia chromosome. Common symptoms of the disease include weakness and loss of appetite and weight; however, 5.00% of people are asymptomatic. 2 Due to an imbalance between oncoproteins and tumor suppressors in cancer, cell proliferation increases and apoptosis decreases. Apoptosis is a programmed cell death that regulates cell populations, and its dysfunction correlates with several physiological and pathological disorders. As a result, apoptosis induction is the best approach for eliminating cancer cells. 3 The process is induced by two pathways, the extra-cellular and intra-cellular pathways. The Bcl-2 protein family has an important role in the intra-cellular apoptosis pathway, which is initiated from mitochondria. 4 Multiple factors contribute to cancer promotion and tumor development, such as genetic mutations, carcinogen exposure, inflammation, radiation and oncogenic viruses. Inflammation is defined as the response of the body to infection and is classified as acute or chronic. In the acute state, the body responds rapidly to infection, whereas in chronic inflammation the response persists over a long time.
According to stimulatory role of inflammatory factors on cell proliferation, in the chronic inflammation, unwanted cell proliferation increases which can lead to cancer initiation and tumor progression. 5 One of the most important intra-cellular factors being activated in inflammation is a transcription factor called nuclear factor kappa B (NF-κB).This factor is an essential factor for inflammatory response causing cell proliferation and apoptosis inhibition. The NF-κB overexpression and activation have been observed not only in immune cells but also in many cancers cell lines. Based on NF-κB role in cancer initiation, its inhibition could be a strategy in cancer treatment. 6 Mesalazine, known as 2-aminosalicylate, is a member of non-steroidal anti-inflammatory drugs which can inhibit NF-κB activation and decrease cell proliferation. 7 This drug has been used to treat inflammatory bowel disease as well as for colon cancer prevention. 8 The cytotoxic effect of mesalazine on cancer cells has only been studied on colon cancer cell line. 8 Based on inhibitory effect of mesalazine on NF-κB, it may also have cytotoxic effects on other cancer cell lines. Because of this, for the first time in present work, the effect of mesalazine has been investigated on K562 cell line (erythromyeloid leukemia cell line) viability and apoptosis. Materials and Methods Cell culture, treatment and cell viability assay. K562 (chronic myeloid leukemia) cell line was purchased from Iranian Pasteur Institute, Tehran, Iran. Cell culture medium (RPMI-1640; Gibco, Carlsbad, USA), penicillin streptomycin (Sigma, St. Louis, USA) and fetal bovine serum (FBS; Gibco) were also used. Cells were cultured in RPMI-1640 medium containing 10.00% FBS and 1.00% penicillin streptomycin and incubated at 37.00 ˚C, 95.00% humidity and 5.00% CO2. Mesalazine (Sigma) was dissolved in dimethyl sulfoxide (DMSO) as a solvent to yield different concentrations. Evaluation of cell viability by MTT assay. Six thousand cells per well of 96-well plates were cultured and treated with different concentrations of mesalazine (20.00, 40.00, 60.00 and 80.00 μM mL -1 ). The cells were cultured in three 96-well plates and 3 different times (24, 48and 72 hr), respectively. After incubation, 10.00 µL of MTT solution at a concentration of 5.00 mg mL -1 was added into each well and the plate was shaken for a few sec to spread evenly. After 3 hr, 100 µL of DMSO was poured into the wells. The plate was shaken for 15 min on a low-speed shaker until the formazan crystals were completely dissolved. The absorption of each well was read using the ELISA reader (BioTek, Winooski, USA) at 540 nm. 9 Cell viability was calculated as a percent of each well absorbance compared to control cell wells. Then, based on MTT data, mesalazine IC50 was calculated by CompuSyn Software (version 1.0; ComboSyn Inc., Paramus, USA). The software has been designed by Chou and Talalay. 10 Illumination of the morphological effects of mesalazine on K562 cells. 5.00× 10 5 cells were seeded in each well of 24-well plates, then the cells were treated with different concentrations of mesalazine (20.00, 40.00, 60.00 and 80.00 μM mL -1 ) for 72 hr at 37.00 ˚C and 5.00% CO2. The cells photos have been taken by inverted microscope (Olympus, Tokyo, Japan) after 72 hr. Evaluation of apoptosis by Hoechst staining. 2.00×10 5 cells per well were cultured in 24-well plates and treated with IC50 concentration (54.00 μM) of mesalazine for 72 hr. 
After incubation, the cells were centrifuged and washed with phosphate-buffered saline (PBS) three times and then fixed with cold methanol at -20.00 ˚C for 20 min. After fixation, the cells were washed with PBS and incubated in Hoechst 33342 solution (Sigma) dissolved in PBS at a concentration of 1.00 mg mL-1 for 30 min at room temperature. After staining, the cells were washed with PBS, transferred onto slides and observed using a fluorescent microscope (Micros, Gewerbezone, Austria), and photos of the cells were taken. 11 Analysis of Bcl-2 and Bax expressions by RT-PCR. For RNA extraction, approximately 5.00 × 10^6 K562 cells were treated with the IC50 concentration (54.00 μM) of mesalazine for 72 hr. Next, cellular RNA was extracted based on the kit manufacturer's instructions (Takara, Tokyo, Japan). After determination of the extracted RNA concentration by Nanodrop, cDNA was synthesized with a cDNA synthesis kit (Thermo Fisher Scientific, Bremen, Germany). The Bax, Bcl-2 and GAPDH (glyceraldehyde-3-phosphate dehydrogenase) primers were designed with Gene Runner software, with the sequences listed in Table 1. The PCR reaction was performed with the following steps using Master Mix (Thermo Fisher Scientific, Waltham, USA): pre-denaturation for 5 min at 95.00 ˚C, followed by 35 cycles of 30 sec denaturation at 94.00 ˚C, annealing/extension for 30 sec at 57.00 ˚C and final extension for 30 sec at 72.00 ˚C. Afterwards, the DNA concentration was determined at 260 nm. For electrophoresis, 50.00 ng DNA was loaded on a 2.00% agarose gel at 85.00 V for 90 min. Finally, the gel photo was captured by gel documentation. Densitometric analysis of the bands was done using ImageJ software (National Institutes of Health, Bethesda, USA) and the data were graphed in Excel (version 16.0; Microsoft Corporation, Redmond, USA). 12 c-Myc protein level analysis by immunocytochemistry. One million cells were cultured in each well of a 6-well plate and then treated with the IC50 concentration (54.00 μM) of mesalazine for 72 hr before immunocytochemical staining for c-Myc. 13 Statistical analysis. Charts were plotted in Excel and data were analyzed with SPSS software (version 16.0; SPSS Inc., Chicago, USA) using ANOVA to investigate the relationship of cell viability with drug concentration and treatment time compared to the control sample at p ≤ 0.05. Results Mesalazine reduced the cell viability of the K562 cell line. The cell viability was assessed after 24, 48 and 72 hr. Mesalazine at all administered concentrations reduced cell viability. The cytotoxic effect of mesalazine on the K562 cell line increased with dose and treatment time (Fig. 1A), so it can be concluded that the cytotoxic effect of mesalazine is time- and dose-dependent. The strongest cytotoxic effect was observed at 72 hr and 80.00 μM, with 54.00% cytotoxicity. The percentages of cell viability reduction at 48 and 24 hr at the 80.00 μM concentration were 42.00% and 16.00%, respectively, which were significant compared to the control group. Based on the data obtained from the CompuSyn software, the IC50 of the drug was 54.00 μM (Fig. 1B).
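A minimal sketch of the viability calculation described in the MTT assay above is given below: viability is expressed as the percentage of each treated well's absorbance relative to the mean control absorbance. The absorbance values are invented for illustration, and the IC50 shown is only a crude linear interpolation of where viability crosses 50%, not the CompuSyn calculation used by the authors.

```python
import numpy as np

def viability_percent(treated_abs, control_abs):
    """Viability (%) = treated absorbance / mean control absorbance x 100."""
    return 100.0 * np.asarray(treated_abs, float) / np.mean(control_abs)

# Hypothetical 540-nm absorbances (triplicate control wells, one well per dose)
control = [0.82, 0.79, 0.85]
doses   = np.array([20.0, 40.0, 60.0, 80.0])   # mesalazine, uM
treated = np.array([0.68, 0.56, 0.37, 0.28])
via = viability_percent(treated, control)
print(dict(zip(doses, via.round(1))))

# Crude IC50 estimate: dose at which viability crosses 50 % (linear interpolation)
ic50 = np.interp(50.0, via[::-1], doses[::-1])
print(f"approximate IC50 ~ {ic50:.1f} uM")
```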
Morphological effects of mesalazine on K562 cells. The morphology of the treated cells was observed and photographed by inverted microscope after 72 hr of treatment with 20.00, 40.00, 60.00 and 80.00 μM of mesalazine, alongside control cells. Figure 2 shows that the treated cells underwent altered morphology and membrane disruption. The control cells had intact membranes, whereas most of the treated cells showed disintegrated membranes, indicating the occurrence of cell death. In the treated cells, bleb structures were found on the cell membrane, indicating apoptosis induction; these structures increased with the elevation of the mesalazine dose (Fig. 2). Mesalazine induces apoptosis in K562 cell line. In apoptosis, the cell nucleus is fragmented as a sign of apoptosis occurrence. For observation of cell nucleus fragmentation, the cells were stained with Hoechst stain. For this purpose, K562 cells were treated with the IC50 concentration of mesalazine for 72 hr, followed by Hoechst staining. As Figure 3 shows, in control cells almost all of the cell nuclei are intact, whereas in treated cells fragmented nuclei are apparent. For analysis of alterations in apoptotic gene expression levels in treated cells compared to controls, the levels of Bax and Bcl-2 expression in control and treated cells were assessed by RT-PCR. In treated cells, Bcl-2 expression is decreased and Bax expression is increased compared to control cells (Figs. 4 and 5). As Figure 5 shows, the Bax/Bcl-2 ratio in treated cells was raised by about 70.00% compared to control cells, which is representative of apoptosis occurrence in the treated cells. c-Myc proto-oncogene changes in K562 cells treated with mesalazine. Immunocytochemistry was performed to evaluate changes in the c-Myc proto-oncoprotein. In cancer cells, c-Myc protein is often expressed continuously. As seen in Figure 6, in control cells c-Myc staining is high, indicating that in untreated cells c-Myc protein is highly expressed. Figure 6 also shows that in treated cells c-Myc staining is weaker than in control cells, indicating that the level of c-Myc protein is decreased in treated cells. Therefore, mesalazine reduced the c-Myc protein level in the cells. Discussion Chronic inflammation is a risk factor in cancer promotion and tumor progression. 14 Nuclear factor kappa B is a transcription factor playing a key role in several cellular processes, including fetal growth, cell proliferation, apoptosis and the immune response to infection and inflammation. It is an important intra-cellular factor that is essential for inflammatory responses. 15 Activation of the NF-κB pathway has physiological roles in the functions of nerve cells, B lymphocytes and thymocytes; however, the pathway is inhibited in other cells, with the exception of cancer cells, including breast, colon, pancreas and ovary cancers, lymphoma and melanoma. NF-κB activity contributes to tumor progression by inducing the secretion of cytokines and growth factors and by activating some anti-apoptotic proteins such as survivin. Due to these roles of NF-κB, inhibitors of the NF-κB pathway can reduce cancer cell viability. 16 Some of the NF-κB inhibitors include emetine, fluorosalan, metformin, mesalazine and aspirin, which are approved by the United States Food and Drug Administration. 17 Metformin is used to reduce blood glucose in the treatment of type 2 diabetes. It decreases the viability of all ESCC cell lines dose-dependently by NF-κB inhibition and causes growth suppression and invasion inhibition in the cell lines. 18 Metformin also alters the expression of NRF-2 and NF-κB in HT29 cells in a dose- and time-dependent manner. It significantly decreases cell viability after 48 hr of treatment compared to 24 hr of treatment. 19 Celastrol, another NF-κB inhibitor, also reduces LP-1 myeloma cell proliferation in a dose-dependent manner. 20 It also suppresses the invasion of ovarian cancer cells and inhibits the NF-κB/MMP-9 pathway. 21
Previous studies have shown that sulindac and its metabolites slow down the proliferation of colon cancer cell lines by inhibiting the NF-κB pathway. 22 In the present study, mesalazine reduced the growth of K562 leukemia cells. Based on the results, the viability of the treated cells decreased compared to the controls in a dose- and time-dependent manner, which was confirmed by the morphological changes. Mesalazine induces apoptosis in colon cancer cells by caspase-3 activation without alteration of Bcl-2 family protein levels. 23 Morphological changes observed in the cell phenotype of HT29 cells treated with metformin suggest that metformin can not only inhibit the cell growth of HT29 cells but also induce apoptosis in cancer cells through the inner and outer apoptosis pathways. 24 It has been shown that the level of Bcl-2 significantly decreases after exposure to celastrol, while only a slight change in the Bax level occurs. 20 In this work, mesalazine-treated cells showed apoptotic nuclei, indicating that mesalazine induces apoptosis. In treated cells, the elevation of Bax expression and the reduction of Bcl-2 expression led to an increase in the Bax/Bcl-2 ratio, indicating that mesalazine induces apoptosis via the mitochondrial pathway. For further investigation of the apoptosis-inducing effect of mesalazine, the c-Myc protein level was analyzed. c-Myc is a major transcription factor for cell proliferation and has been implicated in many hematological and solid cancers. 25 In the present study, the results of the immunofluorescence assay showed that mesalazine-treated cells had less staining than control cells, indicating a reduction of the c-Myc protein level in the treated cells. Mesalazine inhibits the Wnt/β-catenin pathway in CRC cells and increases β-catenin phosphorylation, thereby reducing the expression of c-Myc, which plays an important role in tumorigenesis. 26 On the other hand, c-Myc continuously enhances B cell proliferation by activating NF-κB in vitro and in vivo, leading to tumorigenesis. 27 As observed in this study, mesalazine induces apoptosis, probably by NF-κB suppression via c-Myc protein reduction. In conclusion, the results suggest that mesalazine dose- and time-dependently decreases cell viability and induces apoptosis via the intrinsic apoptotic pathway. Mesalazine may inhibit NF-κB through c-Myc protein reduction. Therefore, based on the results, mesalazine could be a suitable agent for the treatment of erythromyeloid leukemia.
2022-05-21T05:23:54.835Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "46156a81306d54b4b316d636d3187506ae6a9679", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "46156a81306d54b4b316d636d3187506ae6a9679", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218467125
pes2o/s2orc
v3-fos-license
Postprandial blood glucose response: does the glycaemic index (GI) value matter even in the low GI range? A growing body of research over the last decades has shown that diets based on the low glycaemic index (GI) foods reduce the risk of developing diabetes and improve blood glucose control in people with diabetes. The range of inflexion on the glycaemic response of low GI (LGI) foods is an interesting observation that has not been studied by many. LGI 1 (GI 54 ± 3.3) biscuit was formulated using a basic formulation while the LGI 2 (23.8 ± 3.3) biscuits was a modification of LGI 1 recipe, formulated with the inclusion of functional ingredients. Biscuits were formulated to be iso-caloric (kcal/100 g: 521 ± 12). Each participant consumed identical standard meals for lunch and dinner. Biscuits were consumed as breakfast and mid-afternoon snack. Using a randomized, controlled, crossover study, 13 males [(means ± SD) age: 25.3 ± 1.0 years, BMI 21.6 ± 0.5 kg/m2, fasting blood glucose 4.7 ± 0.1 mmol/L] wore continuous glucose monitoring systems (CGMS™) for 3 days for each test session. The postprandial glycaemic response and insulin response were compared within participants. Total iAUC for breakfast and standard dinner were significantly lower for LGI 2 treatment (p < 0.05) than LGI 1 treatment. Second-meal glucose tolerance was observed at the dinner meal. The overall iAUC insulin response over 180 min was significantly lower for LGI 2 biscuits (p = 0.01). The postprandial glycaemic response of two types of biscuits that fall within the low GI classification (GI 24 and 54) differed with LGI 2 biscuits (GI 24) showing a more suppressed postprandial glycaemic response. Our study shows that even within the low GI range, the GI value matters in influencing postprandial glucose. Introduction The prevalence of type 2 diabetes (T2D) is on the rise globally 1 . The World Health Organisation estimated that 2.2 million deaths in 2012 were attributed to high blood glucose and related comorbidities, with another 1.5 million directly attributed to diabetes 2 . A holistic approach to manage the disease is recommended, including dietary modifications, increasing physical activity and pharmaceutical interventions to manage blood glucose levels if necessary. Among these, dietary modifications play a significant role in diabetes management. Blood glucose concentration is affected by factors, such as type and amount of dietary carbohydrate, nature of starch, quantity of protein and fat, dietary fibre content, particle size, method of food processing, and food form ref. 3 . The main aim of these dietary interventions are to reduce the glycaemic index (GI) of the food so that the blood glucose does not increase after its consumption. The GI was coined by Jenkins et al. 3 . It indicates the blood glucoseraising potential of foods. Foods have been classified as being low, medium or high GI based on this concept. There is substantial evidence to suggest that consumption of low GI foods can result in a lower glycaemic response which can reduce the risk of type 2 diabetes and cardiovascular disease 4,5 . Therefore, there is an increased consumer demand for diabetes-related functional foods, with the primary goal of improving blood glucose response. To date, there have been numerous studies that investigated the relationship between the GI of foods and the subsequent postprandial glycaemic response [5][6][7][8] . 
Postprandial blood glucose levels have been shown to be better predictors of long-term health consequences 9 . Thus, lowering the fluctuations and peaks of blood glucose after carbohydrate meals is important. However, the majority of studies investigating the impact of GI on the postprandial glycaemic response generally compare the low (GI ≤ 55) and high (GI ≥ 70) GI categories. The impact on postprandial glycaemic response of foods classified within the same range, i.e. low GI (GI ≤ 55), but with differing GI values (24 and 54) has not been reported. The second meal effect is another factor studied along with the GI of foods. It refers to the effect of the first meal on the postprandial glycaemic response to the second meal, termed the "second meal phenomenon" 10 . Various studies have investigated this phenomenon using various GI food types 11,12 . It has been widely observed that a low GI food largely reduces the subsequent postprandial glycaemic response compared with a high GI food. However, such investigations have not been done among foods that belong to the same GI range, i.e. the second meal effects of two low GI foods, which would be an interesting observation to make. This study, for the first time, compared the postprandial glycaemic response of two types of biscuits that fall within the low GI range. Though both biscuits are classified as low GI, the range of inflexion in the glycaemic response is an interesting observation that has not been studied by many. The aim of this study was to compare the glycaemic impact of a basic low GI biscuit (GI 54) against a modified version of this biscuit that had a lower GI (GI 24). The biscuits were tested in young, healthy non-diabetic volunteers. This study also explored the potential second meal effect after the consumption of the biscuits. Subjects and methods The study was conducted in accordance with the guidelines laid down in the Declaration of Helsinki, and all procedures involving human participants were approved by the Domain Specific Review Board (DSRB) of the National Healthcare Group, Singapore (Reference no. 2018/01066). Subjects The inclusion criteria for participants were: healthy, young Asian Chinese males aged between 21 and 40 years, non-smoker, body mass index (BMI) between 18.5 and 25 kg/m2 and normal blood pressure (<140/90 mm Hg). Exclusion criteria were: metabolic diseases (such as diabetes, hypertension, etc.), known glucose-6-phosphate dehydrogenase (G6PD) deficiency, medical conditions and/or medications known to affect glycaemia (glucocorticoids, thyroid hormones and thiazide diuretics), intolerances or allergies to foods, partaking in sports at the competitive and/or endurance level, intentionally restricting food intake, and fasting blood glucose of more than 6 mmol/L. A total of 14 participants were screened and recruited. One subject dropped out after one session, resulting in 13 data sets being analysed. The study was conducted at the Clinical Nutrition Research Centre (CNRC), Singapore. The protocol was explained to the subjects and they gave their informed consent before participation. The study was registered in the ClinicalTrials.gov registry as NCT04115579. Study protocol A randomized, controlled, single-blinded cross-over design was adopted for this study. Each participant attended two test sessions (consisting of 3 days each), separated by a wash-out period of at least 3 days. Figure 1 shows a schematic overview of a study session.
Participants were advised not to perform any rigorous activities three days prior to and during the study session. At each session, subject would consume either the LGI 1 biscuit or the LGI 2 biscuits, depending on the randomization for that session. Fig. 1 The 3-day study protocol, consisting of two sessions as a randomized, cross-over trial: all participants consume identical standard meals and biscuits, while wearing the continuous glucose monitoring (CGM) device. On day 0, CGM was inserted and a standard meal was given. On Day 1, breakfast at 09:00 h, lunch at 12:00 h, snack at 16:00 h and dinner at 19:00 h. On day 2, the CGM device was removed from participant. Each test session spanned three consecutive days from around 16:00 on day 0 till 9:00 on day 2 consisting of over 24 h continuous glucose monitoring (CGM). On day 0, the continuous glucose monitoring system (CGMS™) was inserted in the afternoon. On day 1, participants arrived at the centre around 8:30 am to 9:00 am following a 10-12 h overnight fast. The participants were first allowed to rest for 10 min before testing began. An indwelling intravenous cannula was inserted into a forearm vein by a phlebotomytrained state registered nurse and a baseline blood sample (0 min) was obtained. Subsequently, participants consumed the LGI 1 or LGI 2 biscuits, with 250 ml water, at a comfortable pace within 15 min. Following the breakfast meal, venous blood samples were collected at 30, 60, 90, 120, 150 and 180 min intervals following the start of the meal. Participants were then given a standardized lunch consisting of spaghetti with chicken sauce and a fruit cocktail which was to be consumed in 20 min. LGI 1 or LGI 2 biscuits were given for afternoon snack to be consumed at home at 16:00 h (within 15 min) and a standardized dinner to consume at home at 19:00 h (within 20 min). Treatment meals All standardized meals for lunch and dinner had the same macronutrient content and composition. These standard meals reflected a typical local rice-based or pasta-based meal accompanied with a drink or fruit. All meals given were identical for both sessions, with the only difference being the treatment biscuits given for breakfast and snack. Participants were not allowed to eat or drink anything other than the test meals and plain water during the study period. All participants were also asked to avoid alcohol and excessive physical activity for 2 days prior to and during the study period. LGI 1 biscuits and LGI 2 biscuits were produced in the CNRC food product development kitchen. The GI of biscuits were previously tested according to the ISO 26642:2010 method, in the CNRC laboratory 13 . LGI 1 biscuits were formulated using basic ingredients for a biscuit recipe consisting of all-purpose flour, butter, sugar, vanilla flavour, baking soda, egg and salt. In the formulation of LGI 2 biscuits, all-purpose flour was replaced with a mixture of plain flour, soluble fibre and a plant-based protein (derived from soya). Butter was replaced with coconut oil and sugar was partially replaced with a low GI sweetener. LGI 1 and LGI 2 biscuits were given in portions containing 50 g of available carbohydrates at breakfast and 25 g available carbohydrates for mid-afternoon snack. The LGI 1 biscuit had a GI of 54.4 ± 6.3 and LGI 2 biscuit had a GI of 23.8 ± 3.3. Table 1 shows the nutrient composition of both biscuits, and the study foods provided for both sessions. 
CGM and insulin measurement Continuous glucose monitoring (CGM) (iPro™2 Professional CGM-Medtronic MiniMed, Northbridge, CA, USA) was used to measure glycaemic response, defined as the primary outcome. The insertion was performed on day 0 around 16:00 and the sensor was removed on day 2 of the study at 9:00. During each test session, the CGM sensor was calibrated against finger-stick blood glucose measurements four times a day before every meal and before sleeping using the FreeStyle Optium Neo Blood Glucose meter (Abbott Laboratories). Data were collated and processed using online software (Medtronic Diabetes CareLink iPro; carelink.minimed.eu). The data reported in this paper represent 24 h interstitial glucose readings recorded every 5 min from the start 00:00 Day 0 until 24 h later around 00:00 on day 2. On day 1, participants arrived in a fasted state and a finger-prick blood glucose measurement for CGM calibration was taken and this fasting blood glucose measurement was recorded. Then an indwelling intravenous cannula was inserted into a forearm vein by a phlebotomy-trained state registered nurse and a baseline blood sample (0 min) was obtained. Subsequently, participants consumed the LGI 1 or LGI 2 biscuits, at a comfortable pace and finished it within 12 min. Venous blood samples were collected at 30, 60, 90, 120, 150 and 180 min intervals following the start of the meal. Insulin determinations were performed for both LGI 1 and LGI 2 arms. Venous blood samples collected were centrifuged at 1500 × g for 10 min at 4°C, and serum was aliquoted and stored at −80°C. Serum insulin concentrations were determined using a Cobas e411 (Roche, Hitachi, USA), where the intra-and inter-assay CVs were <5% and <6%, respectively. Data processing and statistical analysis The primary outcome was to determine how the inclusion of LGI 1 and LGI 2 biscuits would affect postprandial glycaemic response over the 24 h. The baseline glucose value for each subject was determined from the average CGM interstitial glucose readings for half-hour at a fasted state on day 1. It was used to calculate the change in glucose levels for the subsequent time points for the 24 h. The glycaemic response was expressed as the incremental area under the curve (iAUC) and calculated using the trapezoidal rule 14,15 . The secondary outcome was the insulin response during breakfast. The IAUC insulin was also calculated using the trapezoidal rule during the breakfast period 14,15 . All areas below baseline were excluded from the calculations. A cross-over design with a minimum of 8 subjects would be sufficient to detect a 15% change in area under the glucose curve (24 h) with a power of 0.85 at a significance level of 0.05 6 . Data and figures were processed in a Microsoft Excel spreadsheet (Microsoft Corporation). Values were presented as mean ± SEM (standard error of the mean) unless otherwise stated, coefficient of variation (CV) was reported as median (Inter-Quartile range). Prior to statistical analysis, the normality of the data were assured using the Shapiro-Wilks test and Quantile-Quantile (Q-Q plot) of the differenced values. The parametric paired t test was used to compare the mean iAUC values between the treatments and non-parametric t test was used for the comparison of CV between the treatments. Statistical significance was set at p-value < 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23 (SPSS Inc.). 
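A minimal sketch of the incremental AUC computation described above is shown below: the glucose excursion above the fasting baseline is integrated with the trapezoidal rule. Clipping excursions below the baseline to zero is one common reading of "all areas below baseline were excluded"; the authors' exact handling may differ, and the profile values here are invented, not study data. The sketch is in Python/NumPy rather than the Excel/SPSS workflow used in the paper.

```python
import numpy as np

def incremental_auc(times_min, glucose, baseline):
    """Positive incremental area under the curve (iAUC): the glucose excursion
    above the fasting baseline, integrated with the trapezoidal rule; excursions
    below the baseline are clipped to zero so they contribute nothing."""
    g = np.clip(np.asarray(glucose, float) - baseline, 0.0, None)
    t = np.asarray(times_min, float)
    return float(np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(t)))

# Illustrative 2-h postprandial profile sampled every 30 min (mmol/L),
# with the baseline taken as the fasting value
t = [0, 30, 60, 90, 120]
g = [4.7, 6.1, 6.8, 5.9, 5.0]
print(f"iAUC(0-120 min) = {incremental_auc(t, g, baseline=4.7):.1f} mmol/L x min")
```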
Baseline characteristics For the present study, 14 participants enrolled, but one was excluded because he was unable to complete the second session due to personal reasons. Thus, 13 young, healthy Chinese male adults completed both arms of the study, and their characteristics are shown in Table 2 (fasting blood glucose 4.7 ± 0.1 mmol/L; data are means ± SD). Assessment by continuous glucose monitoring There were no significant differences in the fasting concentrations of glucose prior to the consumption of the LGI 1 and LGI 2 biscuits at breakfast (p-value = 0.61). The CGM glycaemic profiles for the LGI 1 and LGI 2 treatments are graphically presented in Fig. 2, and the glycaemic outcome parameters are presented in Table 3 (glucose parameters in healthy subjects, n = 13). The incremental glucose peak and the iAUC 0-1 h, 0-2 h and 0-3 h were significantly lower after the LGI 2 biscuits compared to the LGI 1 biscuits (p-value < 0.05). The LGI 2 snack gave a lower postprandial glucose response in the first hour, which was also observed with the standard dinner (Table 3). The total iAUC120 for breakfast and the standard dinner were significantly lower for the LGI 2 treatment (p-value < 0.05) than for the LGI 1 treatment (Fig. 3). There was no significant difference in the median iAUC 24 h (p-value = 0.51) between the treatments from 12 midnight of day 0 till 12 midnight of day 2. Insulin response There were no significant differences in the fasting concentrations of insulin prior to the consumption of the LGI 1 and LGI 2 biscuits at breakfast (p-value = 0.25). At breakfast, the incremental insulin response to the LGI 2 biscuits was significantly lower than to the LGI 1 biscuits (Fig. 3). The overall iAUC insulin response over 180 min was significantly lower for the LGI 2 biscuits (p-value = 0.02). Discussion The purpose of this study was to investigate the glycaemic effects of consuming two biscuits in the low GI range (i.e. 24 and 54) and their impact on postprandial glucose. Modifying the biscuits with functional ingredients was essentially to create a healthy, nutrient-dense, high-fibre low GI biscuit (LGI 2) that would favourably impact glucose metabolism and yet not increase overall energy intake. Therefore, this novel low GI biscuit (LGI 2) was created to be advantageous for body weight and glycaemic control. The consumption of LGI 2 biscuits (containing 50 g available carbohydrates) resulted in a 56.4% reduction in glucose response and a concomitant 45% reduction in insulin response at breakfast. LGI 2 biscuits consumed as a mid-afternoon snack (containing 25 g available carbohydrates) showed a 24% reduction in glucose response, which, albeit not significant, may be physiologically relevant. There was no effect on the second-meal glucose tolerance at the standard lunch when LGI 2 biscuits were given. Repeating the analysis for the iAUC120 of lunch while controlling for the iAUC120 of breakfast did not change the conclusions reported in Table 3 (results not shown). This again confirms that there was no residual effect of the breakfast on lunch. Similarly, repeating the analysis for the iAUC120 of dinner with the iAUC from 9 am to 6 pm did not change the conclusions (results not shown). Previous studies have reported a delayed postprandial response in blood glucose after a low GI breakfast/morning meal to the subsequent meal 6 , and at breakfast after a low GI dinner 16 .
Our results revealed a new finding, with the most prominent effect on second-meal glucose tolerance observed at the dinner meal. The metabolic basis of this finding remains uncertain, but it has been shown that insulin resistance is higher at night than in the morning or during the day 17,18 . This results in a diurnal variation in insulin resistance and plasma FFA concentrations. Since the second meal effect is related to the suppression of plasma free fatty acid (FFA) concentrations 19 , it remains to be studied how consuming the LGI 2 biscuits affects the time course of the plasma FFA concentration into the subsequent meal and over the course of the day. The CV of the glycaemic response is a measure used to describe its variability. It is calculated by dividing the standard deviation of the raw glucose responses by their mean value for the period of observation. The percentage CV (% CV) during the LGI 1 breakfast was significantly higher than under the LGI 2 biscuit conditions (Table 3). [Fig. 3 caption: panels a-d show the incremental glucose curves over 120 min for breakfast, lunch, snack and dinner (solid black line, LGI 2 biscuits; dashed line, LGI 1 biscuits), with bar plots of the total iAUC120 (mean ± SEM, n = 13; trapezoid rule, areas below baseline ignored); panel e shows the mean change from baseline in postprandial insulin over 180 min after breakfast and the corresponding iAUC; *p < 0.05, LGI 2 vs LGI 1 biscuits, paired t test.] Borderline differences in the percentage CV values were also observed during the snack consumption (Table 3). It is to be noted that these variability values are small. This is mainly attributed to the fact that both the LGI 1 and LGI 2 biscuits were low GI biscuits, which are known to result in less glycaemic fluctuation than their high GI counterparts. Furthermore, all the subjects in this study were healthy individuals without type 2 diabetes. Hence, the difference in % CV observed was relative between the treatment biscuits used in this study. The higher variability of the LGI 1 biscuit suggests that it results in greater fluctuations of blood plasma glucose, which would stress the system, increasing the risk of insulin insensitivity and diabetes in the long term 20 . The LGI 1 (GI 54) biscuits contained almost negligible fibre. Some earlier studies have shown that certain soluble fibres consumed at a dose as low as 5.1 g in the first meal of the day exhibit postprandial effects immediately following the first meal, resulting in residual effects that blunt postprandial glycaemia after meals eaten several hours after fibre ingestion 21 . In our study, the soluble fibre added to the LGI 2 biscuits made up one-fifth of the LGI 1 biscuit formulation. This proportion of fibre may contribute to the suppression of the acute glucose elevation after ingestion at breakfast and at mid-afternoon, attributed to the ability of soluble fibres to delay carbohydrate digestion and absorption from the gut by increasing the viscosity of the stomach and intestinal contents 22 .
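Referring back to the variability metric defined above, the following is a minimal sketch of the percentage CV calculation: the standard deviation of the raw CGM glucose readings over an observation window divided by their mean, expressed in per cent. The readings below are invented and the use of the sample standard deviation is an assumption; the authors do not state which estimator they used.

```python
import numpy as np

def percent_cv(glucose_readings):
    """%CV = standard deviation / mean of the raw glucose readings x 100."""
    g = np.asarray(glucose_readings, float)
    return 100.0 * g.std(ddof=1) / g.mean()

# Invented 5-min CGM samples over a postprandial window (mmol/L)
window = [4.8, 5.1, 5.6, 6.0, 6.3, 6.1, 5.7, 5.4, 5.1, 4.9]
print(f"%CV = {percent_cv(window):.1f} %")
```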
It is generally accepted that fats lower the glycaemic response (GR), and the type of fat used also affects carbohydrate metabolism 23 . However, one criticism of some low GI foods is their high fat content, which is particularly concerning for people with diabetes due to their risk of cardiovascular disease. Emerging evidence has shown that the addition of functional lipids during the cooking of carbohydrate-rich staple foods may be an effective and practical strategy for improving glycaemic control 24 . The differential patterns in glucose and insulin responses at breakfast may possibly be explained by the variation in the fat type used for our LGI 2 biscuits. In our study, we used an equal proportion of coconut oil to replace butter in the LGI 2 formulation. Previous work by our team has shown that coconut oil incorporated in baked bread gave the greatest attenuation of GR compared to butter 24 . There was an attenuation in GR with the LGI 2 (coconut-oil-based) biscuits compared to LGI 1 (butter-based). Coconut oil contains medium-chain triglycerides (MCTs) such as lauric acid and myristic acid, which could delay gastric emptying rates due to their higher osmolarity 25 and form amylose-lipid complexes resulting in resistant starch formation 26 . The use of simple dietary interventions, such as the addition of functional lipids during the cooking of carbohydrate-rich foods, may be an effective and practical strategy for improving glycaemic control. As the biscuits were made with other functional ingredients, a combination of other factors could contribute to the reduction in glucose and insulin response. The partial replacement of sucrose/sugar with a low GI sweetener was intended to provide glucose-attenuating properties without compromising palatability and taste. Protein fortification involved the addition of a protein powder to increase the protein content of the LGI 2 biscuits. Bearing in mind that Asians consume a largely plant-based diet 27 , a plant-based protein was chosen for our modified LGI 2 biscuits. Besides increasing the total amount of dietary protein in the modified biscuits, the source of protein also determines its effectiveness in the regulation of postprandial glycaemia, with plant protein having superior glycaemic-reducing effects compared to animal protein 28,29 . The knowledge generated from this study suggests that modifying a wheat-based product, such as biscuits, with functional ingredients (plant-based fat, soluble fibre, plant-based protein and a low GI sweetener) may provide a viable option for innovative food products that can modify the post-meal glycaemic response while preserving pancreatic beta-cells, especially at breakfast, as observed in our findings. The novel low GI biscuit (LGI 2) has been shown to favourably impact glucose metabolism, and further work needs to explore its impact on other metabolic biomarkers such as triglycerides. A strength of our study was the randomized crossover design, in which each subject serves as his own control. The CGMS™ was an important tool used in this study for monitoring the glycaemic response of the volunteers in the centre and at home. This is important, as it is more likely to mimic a "real-world" situation than a laboratory-based study, which was also the aim of our present study. The uniqueness of the present study was feeding a standard diet that differed only in the type of biscuit consumed. This question is especially relevant today in light of the increasing burden of diabetes and the need for healthier food products that can be a nutritious addition to the everyday meal plan.
Among the limitations of the study is that it was conducted in a small group of healthy young Chinese males, so the generalisability of our findings to other populations, e.g. people with prediabetes or abnormal blood glucose, needs to be examined in the future. Our sample size was modest; however, the within-subject crossover design reduced the between-subject variability in our study. Also, we did not measure metabolic biomarkers such as plasma lipids or biochemical indices of satiety and appetite, as this study was designed to be an exploratory study. In conclusion, our study shows that even within the low GI range, the GI value matters in influencing postprandial glucose. The postprandial glycaemic responses of two types of biscuits that fall within the low GI classification (GI 24 and 54) differed, with the novel low GI biscuits (GI 24) showing a more suppressed postprandial glycaemic response. A simple strategy based on the approach of using alternative, functional ingredients may have an important role in dietary management for individuals at risk of T2D and cardiovascular disease.
2020-05-02T13:37:59.805Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "c4392fdee2428e850a480db9cd01cbcfc6e14208", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41387-020-0118-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4392fdee2428e850a480db9cd01cbcfc6e14208", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234680024
pes2o/s2orc
v3-fos-license
Overstable Convective Modes in Rotating Early Type Stars We calculate overstable convective (OsC) modes of $2M_\odot$, $4M_\odot$, and $20M_\odot$ main sequence stars. To compute non-adiabatic OsC modes in the core, we assume $(\nabla\cdot\vec{F}_C)^\prime=0$ as a prescription for the approximation called frozen-in convection in pulsating stars where $\vec{F}_C$ is the convective energy flux and the prime $^\prime$ indicates Eulerian perturbation. We find that the general properties of the OsC modes are roughly the same as those obtained by Lee \& Saio (2020) who assumed $\delta (\nabla\cdot\vec{F}_C)=0$, except that no OsC modes behave like inertial modes when they tend toward complete stabilization with increasing rotation frequency where $\delta$ indicates the Lagrangian perturbation. As the rotation frequency of the stars increases, the OsC modes are stabilized to resonantly excite $g$-modes in the envelope when the core rotates slightly faster than the envelope. The frequency of the OsC modes that excite envelope $g$-modes is approximately given by $\sigma\sim |m\Omega_c|$ in the inertial frame and hence $\sigma_{m=-2}\approx2\sigma_{m=-1}$ where $m$ is the azimuthal wavenumber of the modes and $\Omega_c$ is the rotation frequency of the core. We find that the modal properties of OsC modes do not strongly depend on the mass of the stars. We discuss angular momentum transport by OsC modes in resonance with envelope $g$-modes in the main sequence stars. We suggest that angular momentum transfer takes place from the core to the envelope and that the OsC modes may help the stars rotate uniformly and keep the rotation frequency of the core low during their evolution as main sequence stars. INTRODUCTION Low frequency photometric variations have been detected in many rotating early type stars such as A-type stars (e.g., Balona 2013, 2017) and B-type stars (e.g., Degroote et al. 2011; Balona 2016; Balona et al. 2019; Balona & Ozuyar 2020). Their frequencies are consistent with the rotation frequency of the stars and hence the variability is called rotational modulation. The origin of rotational modulation in early type stars is not necessarily well understood. We usually assume that rotational modulation is produced by an inhomogeneous brightness distribution on the surface of rotating stars and that the origin of the inhomogeneity is attributed to the existence of global magnetic fields at the stellar surface. However, early type stars do not possess a thick surface convection zone and hence the dynamo mechanism is not necessarily efficient for generating global surface magnetic fields. It has been suggested that subsurface convection zones in early type stars can generate surface magnetic fields in hot massive stars (Cantiello & Braithwaite 2011) and in A- and late B-type stars (Cantiello & Braithwaite 2019). It is important to note that although Cantiello & Braithwaite (2019) have also predicted regions in the H-R diagram where subsurface convection is unlikely to produce significant magnetic fields, rotational modulation is observed to be present in such regions (Balona & Ozuyar 2020). It may be desirable to find a generating mechanism for rotational modulation that does not need surface spots and magnetic fields. Rotational modulations have also been identified in pulsating variables such as γ Dor stars (e.g., Van Reeth et al. 2018) and slowly pulsating B (SPB) stars and β Cephei stars (see Table 1 of Balona & Ozuyar (2020)).
Assuming that the rotational modulations are produced by spots on the surface of rotating stars and using observed low frequency g-modes to derive the rotation rate in the near-core region for γ Dor stars, Van Reeth et al. (2018) suggested that almost rigid rotation prevails in the envelope of the stars. Convective modes, also called g−-modes, in early type stars are confined in the convective core and are unstable in the sense that their amplitudes grow exponentially with time. Assuming uniform rotation, Lee & Saio (1986) have numerically shown for a 10M⊙ main sequence star that low m convective modes in the core become overstable when the star rotates and that, as the rotation speed increases, the overstable convective (OsC) modes are stabilized to be oscillatory in time and to resonantly excite low frequency g-modes in the envelope, where m is the azimuthal wavenumber of the modes. Recently, Lee & Saio (2020) computed OsC modes of 2M⊙ main sequence stars with some improvements to the previous study (Lee & Saio 1986) and showed that low m OsC modes in the core can resonantly excite prograde sectoral g-modes in the envelope if the core rotates slightly faster than the envelope. It may be interesting to note that no effective excitation of envelope g-modes by OsC modes takes place for uniform rotation. If the excited g-modes have significant amplitudes at the stellar surface, they will be observed as low frequency oscillations. Lee & Saio (2020) thus proposed that the OsC modes are responsible for rotational modulations observed in rotating A-type main sequence stars (e.g., Balona 2013). In fact, the oscillation frequency σ of the excited g-modes in the inertial frame is, to a good approximation, given by σ ≈ |mΩc| with Ωc being the rotation frequency of the core, and the low frequency oscillations of σ ≈ |mΩc| for small |m| will be recognized as rotational modulations. They also suggested that if m = −1 and m = −2 OsC modes simultaneously excite g-modes in the envelope, photometric variations with the frequencies σ ≈ Ωc and ≈ 2Ωc will be observed as rotational modulation with a low frequency and its first harmonic (Balona 2013). Since we are to compute convective modes in the core of rotating stars, some comments may be needed on how to treat turbulent convective fluid motions in pulsating stars, which has been a difficult problem to solve, particularly when we are interested in non-adiabatic analyses to discuss the pulsational stability of oscillation modes. To describe interactions between pulsations and convective fluid motions in non-rotating stars, we may use the theory of time-dependent convection (e.g., Dupret et al. 2005), which has been successfully applied to explain the driving mechanism for g-modes in γ Dor stars. Note that the theory of time-dependent convection has also been applied to rotating stars by Bouabid et al. (2013), who used the traditional approximation of rotation (TAR) (e.g., Lee & Saio 1997). However, it is difficult to apply the theory to rotating stars without TAR, and we usually employ a simplifying approximation, called frozen-in convection, for turbulent convection in rotating and pulsating stars. There are several prescriptions for the approximation. For example, the frozen-in convection in pulsating stars may be prescribed by δ(∇ · F C) = 0 (e.g., Lee & Saio 1993) or by (∇ · F C) ′ = 0 (e.g., Lee & Baraffe 1995), where δ and the prime ′ indicate the Lagrangian and Eulerian perturbation, respectively. See also Unno et al. (1989) for other possible prescriptions.
We need to examine whether or not the different prescriptions δ(∇ · F C ) = 0 and (∇ · F C ) ′ = 0 for the frozenin convection lead to significant differences in the properties of OsC modes. Slowly pulsating B stars (Waelkens 1991) are known to show slow photometric variabilities due to g-mode pulsations excited by the iron opacity bump mechanism (e.g., Gautschy & Saio 1993;Dziembowski et al. 1993). Since many low frequency g-modes are excited in a SPB star, precise observational determination of the frequencies provide us with a good information concerning the internal structure of the stars (e.g., Degroote et al. 2010Degroote et al. , 2012Pápics et al. 2014Pápics et al. , 2015Pápics et al. , 2017. If the stars rapidly rotate, even if the low frequency modes are reasonably well described under TAR, some complexities due to rapid rotation may arise in the frequency spectra of low frequency g-modes. Note that in this paper we use the word "rapid rotation" somewhat loosely to suggest that the rotation velocity is greater than about half the breakup rotation velocity. For example, in γ Dor stars, many low frequency g-mods in the envelope are excited by the mechanism called convection blocking in the subsurface convection zone (e.g., Guzik et al. 2000, but see Kahraman et al. 2020). For rapidly rotating γ Dor stars, period spacings ∆Pn = Pn+1 − Pn of observed g-modes have been used to derive the rotation rate in the radiative regions in the envelope close to the convective core of the stars (e.g., Bouabid et al. 2013;Van Reeth et al. 2016) where Pn is the oscillation period of g-mode and n denotes its radial order. Quazzani et al. (2020) suggested for γ Dor stars that the period spacings may be disturbed to describe a deep dip when plotted as a function of Pn if the g-modes are in resonance with an inertial mode in the convective core. Such a resonance can provide useful information concerning the convective core itself (Saio et al. 2021). It is likely that similar resonance phenomena between g-modes and an inertial mode take place in rotating SPB stars, in which g-modes are excited by the opacity bump mechanism. We also expect that resonances may occur between opacity driven g-modes and OsC modes in rapidly rotating SPB stars and it is one of our interests to see whether or not this really happens. Angular momentum transport by non-radial oscillations in rotating stars has been discussed by many authors, including Ando (1983), Lee & Saio (1993), Talon et al. (2002), Rogers & Glatzmaier (2006), Townsend et al. (2018), andNeiner et al. (2020). See a recent review by Aerts et al. (2019) for angular momentum transport in rotating stars. As Ando (1983) discussed, prograde waves extract angular momentum from rotating fluid to decelerate its rotation where the waves are excited, and deposit angular momentum to accelerate the rotation where the waves are damped. For example, Talon et al. (2002) and Rogers & Glatzmaier (2006) carried out numerical simulations to follow the evolution of internal rotation for solar models, taking account of angular momentum transport by gravity waves, which are assumed to be excited by turbulent fluid motions in the convective envelope and to suffer radiative and viscous dissipations in the radiative core. They found that the radiative dissipation tends to strengthen and the viscous dissipation to smooth out differential rotation in the radiative core and that the competition between the two effects may lead to quasi-periodic changes in the internal rotation. 
Townsend et al. (2018) also numerically followed the evolution of internal rotation in SPB stars, taking account of angular momentum transport by a g-mode excited by the opacity bump mechanism. Assuming finite oscillation amplitudes consistent with observations for the g-mode, they found that the rotation speed in the surface layers is significantly decelerated as a result of angular momentum redistribution in the envelope. More recently, Neiner et al. (2020) discussed angular momentum transport by low frequency g-modes stochastically excited in the core of massive main sequence stars. They suggested that the surface layers are significantly accelerated if the stars rotate rapidly. Since OsC modes that resonantly excite envelope g-modes have amplitudes both in the core and in the envelope of rotating stars (e.g., , they are expected to play an important role in angular momentum transport between the core and the envelope of the stars. Prograde OsC modes are excited by convective instability and ǫ mechanism in the core. On the other hand, prograde g-modes driven by the OsC modes suffer from radiative dissipation in the envelope. We guess that OsC modes in resonance with envelope g-modes can be a carrier of angular momentum from the core to the envelope in rotating early type main sequence stars. In this paper, we carry out non-adiabatic calculations of low m OsC modes for 2M⊙, 4M⊙, and 20M⊙ main sequence stars, assuming that the core rotates slightly faster than the envelope. We calculate OsC modes in 2M⊙ main sequence stars to compare the results obtained by using the two different prescriptions for the approximation of frozen-in convection. 4M⊙ main sequence models correspond to SPB stars, in which we expect OsC modes to coexist with low frequency g-modes excited by the opacity bump mechanism. Massive main sequence stars have a large convective core and rather thick subsurface convection zones due to the iron opacity bump in the envelope. It is therefore interesting to see whether or not OsC modes in 20M⊙ main sequence stars behave differently from those of 2M⊙ stars. A brief account of method of calculation is given in §2 and numerical results for the low m OsC modes are presented in §3. We discuss angular momentum transport by the OsC modes in resonance with envelope g-modes in §4. We also discuss OsC modes with negative energy of oscillation in §5 and we conclude in §6. In the Appendix, we discuss low frequency modes in the convective core assuming the two different prescriptions for the approximation of frozen-in convection. METHOD OF CALCULATION We compute non-adiabatic low frequency oscillations in 2M⊙, 4M⊙, and 20M⊙ main sequence stars, taking account of the effects of differential rotation on the oscillation modes. The back ground models used for mode calculations are computed by using a stellar evolution code, originally written by Paczyński (1970), with OPAL opacity (Iglesias & Rogers 1996) for the initial composition X = 0.7 and Y = 0.28. The models have a convective core and an envelope, which consists of radiative layers and geometrically thin subsurface convection zones. The method of calculation of non-adiabatic oscillations of differentially rotating stars is the same as that used by Lee & Saio (1993), except that for the approximation called frozen-in convection we employ a prescription given by (∇ · F C) ′ = 0, instead of δ(∇ · F C) = 0. 
In general, the difference in the prescriptions for frozen-in convection should have no significant influence on the stability results of p- and g-modes. However, we found that the modal properties of low frequency modes in the convective core depend on the prescription. In fact, we found that for the prescription δ(∇ · F C) = 0 there appear low frequency modes which are destabilized by nuclear energy generation and are confined in the core. We may call them core modes. We found that the core modes exist even when the super-adiabatic temperature gradient ∇ − ∇ ad vanishes in the core and that they do not have adiabatic counterparts, where ∇ = d ln T/d ln p and ∇ ad = (∂ ln T/∂ ln p) ad with T and p being the temperature and the pressure, respectively. We therefore have decided in this paper to employ the prescription (∇ · F C) ′ = 0 so that we can discuss non-adiabatic OsC modes free from the core modes. Note that OsC modes exist only when ∇ − ∇ ad > 0 and that non-adiabatic OsC modes have adiabatic counterparts. See Appendix A for further discussion. To represent oscillation modes in a rotating star, we use series expansions for the perturbations. The displacement vector ξ(r, θ, φ, t) and the Eulerian pressure perturbation p ′ (r, θ, φ, t) are expanded in series of spherical harmonic functions Y_l^m(θ, φ), with expansion coefficients S_l, H_l, T_{l′}, and p_l which depend only on r, where l_j = |m| + 2(j − 1) and l′_j = l_j + 1 for even modes, and l_j = |m| + 2j − 1 and l′_j = l_j − 1 for odd modes with j = 1, 2, · · ·, jmax (see, e.g., Lee & Saio 1986). The parameter jmax gives the length of the expansions. Substituting these expansions into the linearized basic equations, we obtain a set of linear ordinary differential equations for the expansion coefficients (e.g., Lee & Saio 1986). The set of differential equations for non-adiabatic oscillations in differentially rotating stars for the prescription (∇ · F C) ′ = 0 is given in Appendix B. In this study we employ the Cowling approximation (Cowling 1941), neglecting the Euler perturbation of the gravitational potential. We also ignore the terms associated with the centrifugal force, which is justified because most of the kinetic energy of the low frequency modes is confined to the deep interior. For the series expansions, we use jmax = 10 to 15, with which the frequencies and eigenfunctions become insensitive to jmax. For differentially rotating stars, we assume a rotation law Ω(r) of the form adopted by Lee (1988), in which x = r/R, xc denotes the outer boundary of the convective core, Ωs is the rotation speed at the stellar surface, and a and b are parameters. Uniform rotation is given by b = 1, and the condition b > 1 implies that the core rotates faster than the envelope. In this paper we use a = 100, for which Ω(r) stays ≈ bΩs for x < xc but decreases steeply to Ωs around xc. For oscillation modes in differentially rotating stars, we use the symbol σ to represent the angular frequency (eigenfrequency) of oscillation in the inertial frame. Although the inertial frame frequency σ does not depend on r, the frequency ω = σ + mΩ(r) in a local co-rotating frame depends on r (but Im(ω) = Im(σ)). If we let ωc denote the oscillation frequency in the co-rotating frame of the core, the frequency ωs in the co-rotating frame of the envelope is given by ωs = σ + mΩs = ωc − m(Ωc − Ωs), where ωc = σ + mΩc with Ωc = Ω(0) and b ≈ Ωc/Ωs. If a prograde convective mode has a frequency ωc > 0 for m < 0 in the core, the frequency ωc should be shifted to ωs in the envelope.
Then the g-modes in resonance with ωs in the envelope should have a radial order much lower than that of a g-mode having the frequency ωc in the envelope, which is one of the important effects of differential rotation on the modal properties of low frequency modes. In this paper, we let ω and Ωs denote dimensionless frequencies defined as ω = ω/σ0 and Ωs = Ωs/σ0 where σ0 = GM/R 3 with M and R being the mass and radius of the star and G the gravitational constant. We also let ωR and ωI denote the real and imaginary part of the complex frequency ω = ωR + iωI, respectively, and we note that unstable modes have negative ωI. OSC MODES IN DIFFERENTIALLY ROTATING MAIN SEQUENCE STARS For modal analysis, we use main sequence models with Xc = 0.7 (ZAMS model) and Xc = 0.2 (evolved model), where Xc is the mass fraction of hydrogen at the stellar centre. In the convective core, we assume a finite value for the superadiabatic temperature gradient ∇ − ∇ ad = 10 −5 as in . It is difficult to correctly estimate the value of ∇−∇ ad in the core of rotating stars (e.g., Stevenson 1979). We guess that ∇ − ∇ ad for rotating stars could be much larger than that estimated for non-rotating stars. 2M⊙ models To compare with the OsC modes obtained by who assumed δ(∇ · F C ) = 0 for the convective en-ergy flux, we have computed low m OsC modes in 2M⊙ main sequence models assuming (∇ · F C ) ′ = 0. Since OsC modes in uniformly rotating stars do not effectively excite g-modes as shown by , we consider OsC modes in weakly differentially rotating stars given by b = 1.1 or b = 1.2. The complex eigenfrequency ωc and the ratio Aenv/Acore of m = −1 and m = −2 OsC modes in the ZAMS model are plotted as a function of Ωs in Fig.1 for b = 1.1 and in Fig.2 for b = 1.2, where the OsC modes are labeled Bn with n being the number of radial nodes of S l 1 in the convective core (see Lee 2019). As Ωs increases from Ωs ∼ 0, ωcR of a Bn-mode increases to reach a maximum and then decreases, describing a peaked curve ωcR(Ωs). As the radial order n of Bn-modes increases, the height and width of the peak becomes lower and broader and the peak itself shifts to higher Ωs. On the other hand, the imaginary part |ωcI|, in general, decreases as Ωs increases, indicating that the OsC modes are stabilized by rotation. Effective stabilization by rotation, however, occurs only for low radial order Bn-modes and as n increases, stabilization effects becomes weaker, that is, |ωcI| depends on Ωs only weakly and tends to stay large. When |ωcI| of OsC modes in the core becomes vanishingly small as a result of rotational stabilization, there occurs resonant excitation of envelope g-modes by OsC modes and we have the ratio Aenv/Acore 1. Even if |ωcI| of OsC modes is not vanishingly small, however, resonant excitation of gmodes can occur when the Doppler shifted frequency ωsR of the modes is large enough to be coupled with low radial order g-modes in the envelope . Note that resonances between the OsC mode and envelope g-modes manifest themselves as quasi-periodic fluctuations of ωc and Aenv/Acore as a function of Ωs unless the radial orders of g-modes are extremely high. We also find that resonant excitation of g-modes takes place even if OsC modes have a corotation point defined by ωR(r) = 0, the existence of which is suggested by ωcR < 0. For example, for the m = −2 and b = 1.1 B4-mode, we obtain Aenv/Acore 1 when Ωs 0.4, for which ωcR < 0. 
Note that we cannot properly compute OsC modes when ωc ≈ 0, which is the reason why we had to stop computing some of OsC modes beyond certain values of Ωs. We carry out similar computations of low m OsC modes for the evolved model with Xc = 0.2 and the results for b = 1.1 are shown in Fig.3. Because the radius of the evolved model is larger than that of the ZAMS model, the stabilizing effect of core rotation Ωc ≈ bΩs = bΩsσ0 on the OsC modes for the evolved model is weaker than for the ZAMS model for a given value of Ωs. Note that σ0 for the former is smaller than for the latter. For Ωs 0.6, the low radial order Bn-modes in the figure are almost all well stabilized to have vanishingly small |ωcI| and show resonant fluctuations of ωc and Aenv/Acore as a function of Ωs. Note that the m = −1 B3-mode is not strongly stabilized by rotation and does not excite envelope g-modes for Ωs 0.6. It is also interesting to note that the B0-modes can excite g-modes even for b = 1.1, which does not occur in the ZAMS model. In Fig.4, the inertial frame frequency σR/2π of the OsC modes that excite envelope g-modes so that Aenv/Acore 1 is plotted against Ωs/2π for the 2M⊙ ZAMS model (Xc = 0.7) and evolved model (Xc = 0.2). The figure shows that the frequency σR of the OsC modes is approximately proportional to |mΩs|, and this comes from the fact that σR = ωcR + mΩc ≈ mΩc ≈ mbΩs since |ωcR| ≪ |mΩc| for the OsC modes. We thus obtain σm=−2 ≈ 2σm=−1 for the OsC modes. The frequency σR of the OsC modes at a given Ωs for b = 1.2 is slightly higher than that for b = 1.1. For the ZAMS model, we find that in wide ranges of Ωs/2π both m = −1 and m = −2 OsC modes simultaneously excite g-modes although there exists a break of Ωs/2π in which no effective g-mode excitation occurs for the m = −2 OsC modes for b = 1.2. Since the B0-modes excite envelope g-modes for b = 1.2, the lower limit to Ωs/2π for the OsC modes to have Aenv/Acore 1 extends to smaller values, compared to that for b = 1.1. For the evolved model, on the other hand, simultaneous excitation of m = −1 and m = −2 g-modes occurs only limited intervals of Ωs/2π. The difficulty in g-mode excitation by OsC modes in the evolved model may be caused by the µ-gradient zone outside the convective core. The general properties of OsC modes obtained by assuming (∇ · F C ) ′ = 0 are quite similar to those of OsC modes calculated by assuming δ(∇ · F C ) = 0, except that for (∇ · F C ) ′ = 0 we find no OsC modes that follow the relation ωcR ∝ Ωc ≈ bΩs when they tend towards complete stabilization with increasing Ωs. Note that the ratio ωR/Ω is approximately constant for inertial modes. 4M⊙ models For slowly pulsating B (SPB) stars, we compute OsC modes of 4M⊙ main sequence stars for Xc = 0.7 (ZAMS model) and Xc = 0.2 (evolved model). The behavior of low m OsC modes as a function of Ωs for the 4M⊙ models is quite sim- ilar to that found for the 2M⊙ main sequence models. This is shown by Fig. 5 in which σR/2π of the OsC modes having Aenv/Acore 1 are plotted against Ωs/2π. Note that the vertically aligned open squares in the figure indicate prograde sectoral g-modes excited by the opacity bump mechanism for Ωs = 0.1, 0.3, and 0.5. Again for the OsC modes in SPB stars, we obtain σR ∝ mΩs so that σm=−2 ≈ 2σm=−1. The OsC modes with Aenv/Acore 1 for b = 1.2 extend to lower rotation frequencies Ωs/2π than those for b = 1.1, and this extension occurs both for the ZAMS model and for the evolved model. Fig. 
5 shows that the frequencies σR of the OsC modes and of the opacity driven g-modes are well separated from each other for slowly rotating SPB stars and that the frequency separation becomes smaller as Ωs increases. We find that mode crossings between OsC modes and opacity driven g-modes take place, for example, for Ωs ≳ 0.5 for the ZAMS model, and that such mode crossings in rapidly rotating SPB stars become more likely as |m| and b increase. As an example, Fig. 6 depicts mode crossings between the B2-mode and g-modes for m = −1 and b = 1.2. Because of the mode crossings, ωcR(Ωs) of the B2-mode describes a zig-zag curve as a function of Ωs. At Ωs = 0.6, for example, the B2-mode stands between the g15- and g16-modes, which are both unstable, and hence the period spacings of the low frequency modes including g-modes will be different depending on whether or not we count the B2-mode as an observable low frequency mode. Using the periods of opacity driven g-modes and OsC modes, we discuss what the relations between the period P and the period spacing ∆P look like for the low frequency modes in rapidly rotating SPB stars, particularly when an OsC mode comes in between opacity driven g-modes. In Fig. 7 we plot ∆P as a function of P for the opacity driven g-modes and the B2-mode of the 4M⊙ ZAMS star at Ωs = 0.6 for m = −1 and b = 1.2; the P − ∆P relation shows a deep dip where a g-mode is in resonance with an inertial mode in the core. As a result of the resonance, the radial component of the displacement vector, ξr, of a g-mode has comparable amplitudes both in the core and in the envelope, an example of which is shown in Fig. 8, where the real parts of the expansion coefficient xS_l are plotted versus x = r/R for the m = −1 g8-mode. Similar dips are known to exist in the P − ∆P relations observationally obtained for γ Dor stars (e.g., Li et al. 2020) and they are believed to be produced as a result of resonances between g-modes and inertial modes, as discussed by Quazzani et al. (2020) and Saio et al. (2021). Although the resonances with core inertial modes produce rather prominent features in P − ∆P relations, the OsC modes are likely to disturb only the longest period parts of the P − ∆P relations. If we define the spin parameter sc ≡ 2Ωc/ωcR evaluated in the core, we obtain sc = 1.04 × 10^1 for the g8-mode and sc = 1.15 × 10^3 for the B2-mode, and the resonances of opacity driven g-modes are more likely to occur with an inertial mode than with an OsC mode. 20M⊙ models The behavior of ωc is essentially the same as that obtained for the 2M⊙ and 4M⊙ models, although there exist some minor differences. For b = 1.1, for example, strong rotational stabilization of the OsC modes with increasing Ωs occurs for the B0- to B4-modes for m = −1, although only the B0- to B2-modes are subject to such strong stabilization for the 2M⊙ model. The difference may be due to the fact that the fractional radius xc of the convective core of the 20M⊙ model is larger by a factor ∼ 2 than that of the 2M⊙ model. For the rotation law given by equation (5), the width of the fractional radius, ∆x, over which Ω(r) rapidly changes from Ωc to Ωs may be estimated as ∆x ≈ 2/a = 0.02. When the fractional wavelengths ∼ xc/(n + 1) are comparable to or less than ∆x, the effects of the differential rotation on the OsC modes are significant and strong rotational stabilization of the OsC modes with increasing Ωs cannot occur. In other words, strong stabilization of OsC modes takes place when xc/(n + 1) ≫ ∆x, which may be consistent with the numerical results in this paper.
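A rough, order-of-magnitude reading of this criterion, using only numbers already quoted in the text (∆x ≈ 2/a = 0.02, and xc ≈ 0.13 for the 2M⊙ ZAMS model, corresponding to the core boundary rc/R = 0.132 quoted in the next section), is

$$\frac{x_c}{n+1} \gg \Delta x \;\Longleftrightarrow\; n+1 \ll \frac{x_c}{\Delta x} \approx 6.6 ,$$

so that only the lowest few radial orders are expected to be strongly stabilized in the 2M⊙ model; since xc is about twice as large in the 20M⊙ model, the limiting radial order roughly doubles there, in line with the B0-B2 versus B0-B4 behaviour described above. This is only an illustrative estimate, not an additional calculation from the stellar models.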
It is interesting to note that since ωcR of the Bn-modes is much smaller than |mΩs(b − 1)| for rapid rotation unless b ≈ 1, their Doppler shifted frequencies in the envelope take almost the same value, given by ωsR ≈ −mΩs(b − 1), and in the inertial frame by σR ≈ −mΩsb. This suggests that several OsC Bn-modes with different radial orders n can be in resonance with a low frequency envelope g-mode having the frequency ωg ≈ −mΩs(b − 1) to obtain large amplitudes at the surface. If this multiple excitation of Bn-modes happens at a given Ωs, we would observe a fine structure of frequency around −mΩsb in the frequency spectrum, and a frequency separation in the fine structure may be given by |ω^{n2}_{cR} − ω^{n1}_{cR}|, where n1 and n2 are the radial orders of the OsC Bn-modes in resonance with a g-mode. ANGULAR MOMENTUM TRANSPORT BY OSC MODES Since OsC modes that excite envelope g-modes have amplitudes both in the core and in the envelope, it is worth examining angular momentum transport by the OsC modes. In the Cowling approximation (Cowling 1941), angular momentum transport by low frequency oscillations in rotating stars may be described by (e.g., Pantillon et al. 2007; Mathis 2009; Lee 2013; see also Belkacem et al. 2015) ρ d⟨ℓ⟩/dt = −(1/4πr²) ∂Lr/∂r, where ℓ = r² sin²θ Ω denotes the specific angular momentum around the rotation axis, and ⟨f⟩ = ∫₀^π ∫₀^{2π} f sin θ dθ dφ. Note that to evaluate Lr, we use v′ = iωξ − r sin θ (ξ · ∇Ω) eφ. We regard Lr as the angular momentum luminosity transported by the waves. If Lr increases with increasing r, we consider that the waves extract angular momentum from the rotating fluid, decelerating the rotation. On the other hand, if Lr decreases, the waves deposit angular momentum in the fluid, accelerating the rotation. It is also useful to calculate the time scale τ of angular momentum changes expected in the interior. In the envelope, we may define τ locally, and for the convective core we calculate an averaged time scale τ in terms of the turbulent velocity v_turb in the convective core, which is computed using the mixing length theory of convection for non-rotating stars. Equation (11) may suggest that the kinetic energy of the turbulent fluids in the core is redistributed over the entire interior by the OsC modes when the modes have amplitudes both in the core and in the envelope. In Fig. 10, Lr/(GM²/R) and 1/τ of the m = −1 B3-modes at two different values of Ωs are plotted versus r/R for b = 1.1 for the 2M⊙ ZAMS model. As r/R increases from the centre, Lr increases from zero to its maximum within the core and then decreases towards the core boundary at rc/R = 0.132. Positive Lr(rc) at the boundary indicates that the rotation speed of the core is decelerated by exciting the prograde OsC modes, as suggested by equation (10). In the envelope, on the other hand, Lr gradually decreases as r/R increases from the core boundary to r/R ∼ 0.9, from which it decreases steeply towards the surface. This suggests that angular momentum deposition takes place in the envelope, particularly in the layers close to the stellar surface. From the plot of 1/τ against r/R we find that the time scale τ in the envelope is generally positive and can be very small in the surface layers, where the density is low and the dissipations are large. This may be seen from the inset, which is a magnification showing 1/τ in the region close to the surface.
On the other hand, we have negative τ in the convective core, and |τ| can be small when a large amount of angular momentum extraction by OsC modes occurs. We note that, for the amplitude normalization given by equation (11), the time scale |τ| in the core can be longer for the OsC modes that excite envelope g-modes than for the OsC modes that tend to be confined in the core. From Fig. 10 we consider that the OsC modes that excite g-modes can transfer angular momentum from the convective core to the envelope and to the surface of the stars. Fig. 11 plots Lr/(GM²/R) and 1/τ of the m = −1 B5-modes against r/R for the 20M⊙ ZAMS model. Their behavior as a function of r/R is similar to that found for the 2M⊙ model, and |τ| in the core is much longer than that at the stellar surface. We also note that |τ| in the core is shorter by a factor ∼ 3 than that for the 2M⊙ stars for the normalization (11). DISCUSSIONS Our numerical investigations in this paper have suggested that low radial order Bn-modes in weakly differentially rotating stars are strongly stabilized by rotation to obtain vanishingly small |ωcI| as Ωs increases, and that as the radial order n increases the rotational stabilization of the Bn-modes becomes weaker, so that the magnitude of ωcI only gradually decreases with increasing Ωs. For the low radial order Bn-modes, it is important to note that even if they are stabilized to have vanishingly small |ωcI|, they remain unstable with small but finite |ωcI| ≠ 0 and excite g-modes in the envelope. Lee & Saio (1989, 1990) suggested that this destabilization of g-modes occurs when oscillatory convective modes with negative energy of oscillation are in resonance with envelope g-modes with positive energy of oscillation. In other words, because of the resonances the OsC modes cannot be completely stabilized by rotation, that is, they cannot become an oscillatory convective mode having a purely real frequency ωc. To probe the suggestion given above, we compute the oscillation energy of the OsC modes. We define the specific energy eW of oscillation as the sum of the kinetic energy eK and the potential energy eP of oscillation, eW = eK + eP, where eK and eP are evaluated in the Cowling approximation, N² = −gA with N being the Brunt-Väisälä frequency, c = (Γ1 p/ρ)^{1/2} with Γ1 = (∂ ln p/∂ ln ρ)ad is the adiabatic sound velocity, and g = GMr/r² with Mr = ∫₀^r 4πr²ρ dr and G the gravitational constant. Note that an overline indicates the time average of the corresponding quantity. In non-rotating stars, we usually have the equipartition of energy, ∫ eK dV = ∫ eP dV, but this equipartition of energy is not always satisfied for rotating stars. In Fig. 12 we plot the ratio η ≡ Ec/(Ee + |Ec|) as a function of Ωs for OsC modes of the 2M⊙ ZAMS model for b = 1.1, where Ec and Ee denote the oscillation energy eW integrated over the convective core and over the envelope, respectively. As Ωs → 0, the modes tend towards pure convective modes confined in the core, whose ωc is purely imaginary. This takes place when ∫₀^R eP dV ≈ ∫₀^{rc} eP dV < 0, because N² < 0 and N²ξr² dominates (p′/ρc)² in the convective core. As Ωs increases from Ωs ∼ 0, the convective modes are stabilized to be overstable, and Ec of the modes changes its sign from negative to positive, although Ec of the m = −1 B0-mode is an exception to this rule. As Ωs further increases, we note that low radial order Bn-modes are likely to suffer strong rotational stabilization. As the low radial order Bn-modes get close to a state of complete stabilization with ωcI ∼ 0, their oscillation energy Ec in the core changes its sign from positive to negative, as shown by Fig. 12.
This may confirm the interpretation that the excitation of envelope g-modes by low radial order Bn-modes occurs as a result of resonant couplings between oscillatory convective modes with negative energy and envelope g-modes with positive energy of oscillation (Lee & Saio 1990). As the radial order n increases, the Bn-modes in differentially rotating stars tend to be only weakly stabilized by rotation and do not always obtain vanishingly small |ωcI|. In this case, the oscillation energy Ec is likely to stay positive even for rapid rotation. However, as suggested by Figs. 1 and 2, the modes still can excite envelope g-modes for rapid rotation speeds, e.g., for Ωs ≳ 0.5. This can be understood by using the quantity Φ_1^R introduced by Lee & Saio (2020), which involves the eigenvalue λkm of Laplace's tidal equation (e.g., Lee & Saio 1997), the ratio ωsI/ωsR, and the number ne of radial nodes of the g-modes in the envelope; for prograde sectoral g-modes in the envelope we can assume √λkm ∼ |m| (e.g., Townsend 2003). Note that ωsI = ωcI. Lee & Saio (2020) used |Φ_1^R| ≳ 1 as the condition that effective excitation of envelope g-modes by OsC modes can take place. This condition may be satisfied when |ωsI/ωsR| ≪ 1 and/or the number of nodes, ne, is not extremely large. As Ωs increases, the Doppler shifted frequency ωsR of the Bn-modes is approximately given by ωsR ≈ −mΩs(b − 1) in the envelope. For example, for b = 1.1 we have ωsR ≳ 0.05|m| for Ωs ≳ 0.5. Even if the ratio |ωsI/ωsR| is not necessarily very small, we may have |Φ_1^R| ≳ 1 for ne ∼ 10, and hence even the Bn-modes whose |ωsI| is not vanishingly small can excite envelope g-modes in rapidly rotating stars. Using the amplitude normalization given by equation (11), we compute the relative luminosity variation δLr/Lr at the stellar surface. An example of such computations is shown in Fig. 13 for the OsC modes of m = −1 (black lines) and m = −2 (red lines) for the 2M⊙ (upper panel) and 20M⊙ (lower panel) ZAMS models, where b = 1.1 is assumed. In general, |δLr/Lr| increases as Ωs increases and saturates at of order ∼ 0.1 for Ωs ≳ 0.5. Observationally, the amplitudes of the variations are of order ∼ 10^{−3} (e.g., Balona 2016; Balona et al. 2019), suggesting |δLr/Lr| ≲ 10^{−3}, and hence the normalization (11) leads to an overestimation of the oscillation amplitudes by a factor f ≳ 10^2. If we let Lr^0 and τ^0 denote the evaluations by equation (11), we have Lr = Lr^0/f² and τ = τ^0 × f². For example, the time scale τ^0 in the core of the 2M⊙ ZAMS star is of order 10^4 years (see Fig. 10), and if we use f ∼ 10^2 we obtain τ ∼ 10^8 years, which is much less than the lifespan of 2M⊙ main sequence stars (e.g., Kippenhahn et al. 2012). This may suggest that angular momentum transport by the OsC modes can be a viable mechanism for extracting excess angular momentum from the core of main sequence stars. Hot main sequence stars may suffer mass loss due to optically thin winds from the stellar surface (e.g., Krtička & Kubát 2010). The properties of stellar pulsations of mass losing stars with stellar winds are not necessarily well understood, and it is difficult to properly treat pulsations of optically thin winds moving with supersonic velocities. However, the effects of stellar winds on g-mode excitation by OsC modes are probably insignificant, since the mode excitation takes place in the deep interior of the envelope and is unlikely to be affected by the winds from the surface.
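Returning to the amplitude rescaling discussed above (before the remarks on stellar winds), the argument can be summarized in a few lines of arithmetic. The following sketch (Python) simply restates the order-of-magnitude numbers quoted in the text; it is an illustration, not output of the oscillation code.

```python
# Order-of-magnitude rescaling of the OsC-mode amplitudes and of the core
# angular-momentum time scale, following the argument in the text.
# All numbers are the illustrative values quoted there, not model output.

computed_dL_over_L = 0.1      # |delta L_r / L_r| under the normalization of eq. (11)
observed_dL_over_L = 1.0e-3   # typical observed amplitude of rotational modulation

f = computed_dL_over_L / observed_dL_over_L   # overestimation factor, ~1e2

tau0_core_years = 1.0e4       # core time scale under normalization (11), ~1e4 yr (Fig. 10)
tau_core_years = tau0_core_years * f**2       # L_r scales as 1/f^2, so tau scales as f^2

print(f"f ~ {f:.0e}")                                         # -> 1e+02
print(f"rescaled core time scale ~ {tau_core_years:.0e} yr")  # -> 1e+08 yr
# ~1e8 yr is well within the main-sequence lifetime of a 2 Msun star,
# so the OsC modes can plausibly drain excess angular momentum from the core.
```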
Microscopic diffusion processes in the envelope of stars with stellar winds and surface magnetic fields have been assumed to explain the existence of chemically peculiar stars (e.g., Michaud et al. 1983) and of stars with helium inhomogeneities (Vauclair et al. 1991;Leone & Lanzafame 1997). So long as these chemical peculiarities occur in a thin surface layer occupying a tiny fraction of the stellar mass, however, their effects on the g-mode excitation by OsC modes will also be insignificant, although angular momentum transport near the stellar surface could be somewhat affected since thermal properties as represented by opacity in the surface layer would be modified by chemical peculiarities, particularly, by helium stratification. CONCLUSIONS We have computed low |m| OsC (Bn-) modes of 2M⊙, 4M⊙ and 20M⊙ main sequence stars assuming that the core rotates slightly faster than the envelope. We find that the OsC modes in rapidly rotating stars can resonantly excite prograde sectoral g-modes in the envelope, which will be observed as rotational modulations in early type stars. We find that low radial order Bn-modes in differentially rotating stars are likely subject to strong stabilization by rotation, but as the radial order n increases, the stabilizing effect of rotation on the Bn-modes becomes weak. We find that the general properties of the OsC modes do not strongly depend on the mass of the stars. To compute non-adiabatic OsC modes in this paper, we have employed the prescription (∇ · F C ) ′ = 0 for the approximation of frozen-in convection in pulsating stars. We have compared the results for the 2M⊙ models to those obtained by , who used δ(∇ · F C ) = 0 to compute the OsC modes. We find that the general properties of the OsC modes obtained by applying the two different prescriptions for the convective energy flux are roughly the same, except that for (∇ · F C ) ′ = 0, no OsC modes behave like inertial modes that satisfy the relation ωc ∝ Ωc, when they tend toward complete stabilization with increasing Ωc ≈ bΩs. 4M⊙ main sequence stars correspond to SPB variables, in which many low frequency g-modes are excited by the iron opacity bump mechanism and OsC modes are expected to coexist with such opacity driven g-modes. We have compared the frequency of the OsC modes to that of prograde sectoral g-modes driven by the opacity mechanism. The frequency of the OsC modes in the inertial frame is in general smaller than that of the opacity driven g-modes and the OsC modes and the g-modes at a given Ωs are well separated when Ωs is not large. In this case, the OsC modes will be observed as rotational modulations. As Ωs increases, however, the OsC modes will come close to or stand among the g-modes. If the OsC modes and g-modes are not well separated, some complexities due to the OsC modes will arise in the analyses of low frequency modes using P − ∆P relations. For Ωc/Ωs ≈ 1.2, for example, we find that the period spacings of opacity driven g-modes of the 4M⊙ ZAMS model are disturbed by low |m| OsC modes for Ωs 0.5. For rapidly rotating SPB stars, we also find that opacity driven g-modes can resonate with an inertial mode in the core and the periods and period spacing relations of the g-modes are disturbed to describe deep dips in the P − ∆P plots. See Quazzani et al. (2020) and Saio et al. (2021) for similar phenomena in γ Dor stars. Fittings of theoretical P − ∆P relations to observational ones (e.g., Pápics et al. 
2017) will provide us with important information concerning the interior structure of SPB stars. To see the mass dependence of resonant g-mode excitation by OsC modes, we compute OsC modes in 20M⊙ main sequence stars and find that the OsC modes excite envelope gmodes when the core rotates slightly faster than the envelope, as found for 2M⊙ and 4M⊙ main sequence stars. We confirm no strong mass dependence of resonant g-mode excitation by OsC modes. Of course, there exist some minor differences that depend on the mass of stars. For example, the density in the envelope of 20M⊙ main sequence stars is lower than that of 2M⊙ stars if plotted as a function of the fractional radius and the magnitudes of the growth rate η ≡ −ωsI/ωsR for envelope g-modes are much larger for the former than for the latter. This may explain why resonant fluctuations of ωc and Aenv/Acore as a function of Ωs for rapid rotation are much smoother for the 20M⊙ stars than for the 2M⊙ stars. Calculating Lr and 1/τ for OsC modes in rotating main sequence stars, we have shown that the OsC modes in resonance with envelope g-modes can transport angular momentum from the core to the envelope of the stars. We have also suggested that if the angular momentum transport from the core to the envelope occurs efficiently in weakly differentially rotating main sequence stars, the rotation rate of the core is kept low, which helps the stars rotate uniformly during their main sequence evolution. APPENDIX A: TWO PRESCRIPTIONS FOR FROZEN-IN CONVECTION Energy in the stellar interior is transported by radiation and/or convection. Although energy transport by radiation occurs without fluid motions, convective energy transport is accompanied by fluid motions which are turbulent in the stellar interior. In pulsating stars we have to calculate perturbations of both radiative energy flux and convective energy flux. Since we usually employ the diffusion approximation for the radiative energy transfer in the interior, the perturbed radiative energy flux may be governed by the temperature perturbation. However, for the convective energy transfer, we have to consider the perturbations of turbulent fluid motion, to which statistical description should be applied. Although this is not necessarily an easy problem to solve, there have been several attempts to describe the perturbations of turbulent fluids in pulsating stars (e.g., Unno 1967;Gabriel et al. 1975;Gough 1977;Xiong 1977;Grigahène et al. 2005). Since it is difficult to properly treat the coupling between convective fluid motions and pulsations in rotating stars (see Belkacem et al. 2015), we usually employ a simplifying assumption called frozen-in convection to compute nonadiabatic pulsations of the stars. We may perturb the entropy equation to obtain where F = F R + F C, and F R and F C are the radiative energy flux and the convective energy flux, respectively, and other symbols have their usual meanings. Note that in equilibrium we have ∇ · F = ρǫ. Approximation called frozenin convection is given in various ways. For example, we may assume δLC = δFC,H = 0 or L ′ C = F ′ C,H = 0 (e.g., Glatzel & Mehren 1996) for non-radial oscillations of nonrotating stars where LC = 4πr 2 FC, and FC and FC,H are the convective energy flux in radial and horizontal directions, respectively. As discussed in Unno et al. (1989), we may also assume δ(∇ · F C) = 0 or δ(ρ −1 ∇ · F C) = 0, and there can be other prescriptions to give the approximation of frozen-in convection. 
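For clarity, it may help to recall the standard relation between the Lagrangian and Eulerian perturbations of a scalar quantity (a textbook identity quoted here as background, not taken from this paper):

$$\delta(\nabla\cdot\vec{F}_C) = (\nabla\cdot\vec{F}_C)' + \xi_r\,\frac{d}{dr}(\nabla\cdot\vec{F}_C),$$

so the two prescriptions δ(∇ · F C) = 0 and (∇ · F C) ′ = 0 differ by the advective term ξr d(∇ · F C)/dr, which does not vanish in the convective core where, in equilibrium, ∇ · F ≈ ∇ · F C = ρǫ varies with radius.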
For rotating stars, for example, Lee & Saio (1993 assumed δ(∇ · F C) = 0, while Lee & Baraffe (1995) used (∇ · F C) ′ = 0. We can usually expect that the different prescriptions for frozen-in convection do not lead to significantly contradicting results for the stability of g-modes and p-modes in general. But, this is not always the case for low frequency modes in the convective core of the stars. Considering most of the energy generated in the convective core is transported by convection, we may ignore the term δ(∇ · F R ) in the convective core. Substituting δs/cp given by equation (A5) with δ(∇ · F R) = 0 into equations (A2) and (A3) and using δp/p = V (y2 − y1), we obtain a set of differential equations with complex coefficients for the dependent variables y1 and y2: where γ = αT c3 c2ω ǫ ad + 1 Γ1 V. On the other hand, if we assume (∇ · F C) ′ = 0, we obtain from equation (A1) iωρT cp δs cp where we have neglected the term δ(∇ · F R). If we approximate ρǫ ≈ ∇ · F C in the convective core, we obtain iωc2 δs cp = c3 ǫ ad + 1 Γ1 V y2, with which we obtain Although the differences between the set of equations (A6) and (A7) and that of equations (A11) and (A12) seem insignificant, we find the differences have important consequences for the modal property of low frequency modes in the convective core. To see this, we may employ a local analysis. For the case of δ(∇ · F C) = 0, substituting y1 ∝ exp (ikrr + iωt) , y2 ∝ exp (ikrr + iωt) , into equations (A6) and (A7), we obtain for the radial wavenumber kr (e.g., Unno et al. 1989) −r 2 k 2 r = c1ω 2 + rA + iγ which reduces to −r 2 k 2 r = l(l + 1)iαT for |ω| ≪ 1 and rA = 0 in the convective core. The factor 1/ω 3 is the large parameter in our local analysis and is rewritten using the growth rate η = −ωI/ωR as Table A1. Complex eigenfrequency ω = ω R + iω I of low l core modes in the 4M ⊙ ZAMS model with X = 0.7 and Z = 0.02 for Ωs = 0, where ω = ω/ GM/R 3 and nc is the number radial nodes of S l 1 in the core. The notation a(b) implies a number given by a × 10 b . To make k 2 r be real and positive, we assume η = 1/ √ 3 ≈ 0.577 to obtain which suggests the existence of low frequency modes propagative in the core. The low frequency modes are called core modes in this paper. The dispersion relation (A17) indicates that the existence of the core modes is closely related to nuclear energy generation c3ǫ ad . Using the method of calculation by Lee & Saio (1993) in which δ(∇ · F C) = 0 is assumed, we compute low frequency core modes in the 4M⊙ ZAMS model assuming ∇ − ∇ ad = 0, and the results are summarized in the table A1. As found from the table, the growth rates are nearly equal to 1/ √ 3 ≈ 0.577. Since the core modes exist even for rA = 0, they cannot be convective modes. Note also that the core modes have no adiabatic counterparts. If we assume (∇ · F C) ′ = 0, on the other hand, we obtain for the wavenumber kr −r 2 k 2 r = c1ω 2 + rA l(l + 1) which reduces to r 2 k 2 r = −l(l + 1) (A19) for |ω| ≪ 1 and for rA = 0 in the core. This dispersion relation suggests that for (∇ · F C) ′ = 0, there appears no low frequency modes propagative in the core if rA = 0. APPENDIX B: OSCILLATION EQUATIONS FOR DIFFERENTIALLY ROTATING STARS In this appendix, we present the set of linear ordinary differential equations for non-adiabatic oscillations of differentially rotating stars in the Cowling approximation, obtained by assuming (∇·F C) ′ = 0 for the convective energy flux F C. 
Using the dependent variables y 1 to y 4 defined as where g = GMr/r 2 with Mr = r 0 4πr 2 ρdr, cp is the specific heat at constant pressure, LR is the radiative luminosity, and L ′ R is its Euler perturbation, the set of linear differential equations may be given by r dy 2 dr = c1ω 2 + rA 1 − 4c1Ω 2 G − c1 ∂Ω 2 ∂ ln r R2 y 1 1 ∇V r dy 4 dr = (D1 + 1 −ĉ3) y 1 − D1y 2
Autosomal Dominant Polycystic Kidney Disease does not significantly alter major COVID-19 outcomes among veterans Chronic kidney disease (CKD), as well as its common causes (e.g., diabetes and obesity), are recognized risk factors for severe COVID-19 illness. To explore whether the most common inherited cause of CKD, autosomal dominant polycystic kidney disease (ADPKD), is also an independent risk factor, we studied data from the VA health system and the VA COVID-19-shared resources (e.g., ICD codes, demographics, pre-existing conditions, pre-testing symptoms, and post-testing outcomes). Among 61 COVID-19-positive ADPKD patients, 21 (34.4%) were hospitalized, 10 (16.4%) were admitted to ICU, 4 (6.6%) required ventilator, and 4 (6.6%) died by August 18, 2020. These rates were comparable to patients with other cystic kidney diseases and cystic liver-only diseases. ADPKD was not a significant risk factor for any of the four outcomes in multivariable logistic regression analyses when compared with other cystic kidney diseases and cystic liver-only diseases. In contrast, diabetes was a significant risk factor for hospitalization [OR 2.30 (1.61, 3.30), p<0.001], ICU admission [OR 2.23 (1.47, 3.42), p<0.001], and ventilator requirement [OR 2.20 (1.27, 3.88), p=0.005]. Black race significantly increased the risk for ventilator requirement [OR 2.00 (1.18, 3.44), p=0.011] and mortality [OR 1.60 (1.02, 2.51), p=0.040]. We also examined the outcome of starting dialysis after COVID-19 confirmation. The main risk factor for starting dialysis was CKD [OR 6.37 (2.43, 16.7)] and Black race [OR 3.47 (1.48, 8.1)]. After controlling for CKD, ADPKD did not significantly increase the risk for newly starting dialysis comparing with other cystic kidney diseases and cystic liver-only diseases. In summary, ADPKD did not significantly alter major COVID-19 outcomes among veterans when compared to other cystic kidney and liver patients.
Supplementary data dictionary (symptom variables evaluated within 30 days of the index date):
Headache30d: Headache, ever/never, <= 30 days of IndexDate
LossSmell30d: Loss of smell, ever/never, <= 30 days of IndexDate
LossTaste30d: Loss of taste, ever/never, <= 30 days of IndexDate
Myalgia30d: Muscle aches (myalgia), ever/never, <= 30 days of IndexDate
Nausea30d: Nausea/vomiting, ever/never, <= 30 days of IndexDate
NoRecordOfSymptoms30d (Coming Soon!): No record of symptoms; if no record of the symptoms listed above <= 30 days of IndexDate then 1, else 0. Note: this is different from the explicit mention of being asymptomatic.
Rhinorrhea30d: Runny nose (rhinorrhea), ever/never, <= 30 days of IndexDate
SoreThroat30d: Sore throat, ever/never, <= 30 days of IndexDate
RefreshDate: Date all ORDCOVID patient-level tables were refreshed. Please note: patients are added to the COVID cohort (i.e., positive, negative, and suspect cases) approximately four days prior to the Refresh Date; this brief time lag is needed for NLP processing and to complete quality assurance checks.
Supplementary Note: The sample size of the COVID-19-negative patients is more than 10 times larger than that of the positive patients.
Supplementary Note: The p values are for testing the positive rates of each outcome, where missing data were treated as negative. Dialysis status change is from the comparison of the pre-testing period to 60 days post testing. NA, missing.
Supplementary Note: COVID-19-positive and -negative patients were analyzed separately. OR, odds ratio. The other cystic kidney group is designated as CysticKidney; the cystic liver-only group as CysticLiver; the ADPKD-enriched group as ADPKD.
Supplementary Figure 1. Percentage of patients with major COVID-19 outcomes in positive cases and negative cases among hospitalized patients. The number of patients is provided at the top of each bar. (Panel title: COVID-19 Outcomes of Hospitalized Cases.)
Supplementary Figure 3. Number of cases for each outcome among patients tested for COVID-19 over time.
Reactive Power Support in Radial Distribution Network Using Mine Blast Algorithm 1 Abstract —The passive power distribution networks are prone to imperfect voltage profile and higher power losses, especially at the far end of long feeders. The capacitor placement is studied in this article using a novel Mine Blast Algorithm (MBA). The voltage profile improvement and reduction in the net annual cost are also considered along with minimizing the power loss. The optimization problem is formulated and solved in two steps. Firstly, the Voltage Stability Index (VSI) is used to rank the nodes for placement of the capacitors. Secondly, from the priority list of nodes in the previous step, the MBA is utilized to provide the optimal location and sizes of the capacitors ensuring loss minimization, voltage profile improvement, and reduced net annual cost. Finally, the results are tested on 33 and 69 radial node systems in MATLAB. The results for the considered variables are presented which show a significant improvement in active and reactive power loss reduction and voltage profile with lesser reactive power injection. I. INTRODUCTION The conventional power system enables the energy flow at the longer electrical distance leading to excessive power losses. About 13 % of the losses in electrical power system occur at distribution systems [1]. A considerable contributing factor in these losses is the excessive reactive power demand at the distribution level, which negatively impacts the voltage profile of the system [2]. To protect the significant amount of power from wastage in the form of electrical losses, various methods, including Synchronous Condenser (SC) [3], Distribution Generation (DG) [4], FACTS devices [5], and capacitor placement [2], have been widely studied. The SCs are considered to be the very mature and well understood technology for reactive power compensation. Nevertheless, the synchronous condensers are hardly utilized nowadays since they need a significant amount of starting and protection gears. In addition, they cannot be modified rapidly enough to accommodate the rapid load changes [6]. In recent years, DGs have been extensively penetrated into the distribution network, which can provide additional benefits of loss minimization, reactive power Manuscript received 9 March, 2021; accepted 4 June, 2021. support, and voltage profile improvement. However, as the conventional distribution systems are designed for unidirectional power flow, DGs can pose a challenge of reverse power flow [7]. The FACTs are attributed as additional damping schemes used to enhance the controllability and power transfer capability of power system. However, they may not be able to achieve the fast and sufficient damping over the oscillations [8]. The capacitor banks in the electric distribution system are widely utilized due to the fact that they are more reliable and economical than previously discussed technologies [9]. The advantages of installing capacitors are the improvement of the systems voltage profile, power factor, the reduction of power loss, reduced reactive power demand from main grid, and the boost in available feeder capacity. Additionally, the capacitor banks can help in reducing the voltage fluctuations resulting due to the operation of some DG types. Moreover, to maximize the benefits and utilization, the optimally sized capacitors should be placed at optimal locations. [2]. Improper placement of the capacitor results in even higher system losses and voltage drops [10]. 
The problem of optimal capacitor allocation attempts to determine the location, size, and the number of capacitor banks to be installed such that the maximum economics can be obtained without breaching the operational constraints [11]. II. RELATED WORKS Several recent studies, based upon various computational techniques, have been concentrated on the capacitor banks placement in the power distribution systems. The available strategies are categorized as numerical programming, analytical, artificial intelligence, and heuristic algorithm [12]. A few authors proposed the classical methods based capacitors allocation with the loss reduction in a power system as the main objective [13], [14]. The classical approaches have the drawbacks, such as difficulty in escaping local minima and trouble in handling discrete control variables [15]. The optimal capacitor placement problem is a combinatorial problem; hence utilization of the modern heuristic methods has got more attention. In [16], the simulated annealing (SA) based optimal capacitor allocation is proposed. This approach is prone to trapping at a local optima leading to the possibility of inaccurate prediction of the optimum cost. In [17], the capacitor allocation problem is solved through Tabu Search (TS). Despite the fact that TS appears to be successful for the design problem, the use of complicated objective functions, as well as a large number of optimization parameters, decreases the performance of the algorithm. In [18], the Genetic Algorithm (GA) is proposed for optimum allocation and rating of the capacitors, but it takes a long time based on the size of the network. In [19], the Particle Swarm Optimization (PSO) is implemented; however, it tends to suffer from slow convergence in the search stage, poor local search capability, and it may trap in a local minima. In [20], the capacitive compensation is done through the Direct Search Algorithm (DSA), however, this study does not consider the maintenance and installation cost for the capacitor banks. The Plant Growth Simulation Algorithm (PGSA) [21] and the Cuckoo Search Algorithm (CSA) [22], [23] are proposed for the problem of the capacitor placement in the power network. Although these methods have produced good results, instead of discrete indices, continuous values of the capacitors were used, which lead to giving the unavailable sizes of the capacitors. In [23], to control the size of the search space, the problem was split into two subparts and used the Loss Sensitivity Factors (LSF) to locate optimal locations. In this way, the speed increased at the cost of accuracy. In [24], the Artificial Bee Colony (ABC) is presented for solving the same issue, however, the convergence is very slow. In [25], another method, namely, cuckoo search is proposed for the allocation of the capacitor; however, it takes a long time and a large number of iterations to obtain the optimal solution. The Ant Colony Optimization (ACO) [25] is presented for the same issue, however, the theoretical calculations are difficult, and with the iteration, its probability distribution changes. In [26], the Firefly Algorithm (FA) and LSF based optimal capacitor placement is done. This method did not consider the power loss cost and the reduced accuracy because of the split algorithm. The Harmony Search (HS) [27] is implemented for the problem of the capacitor allocation. 
Although it produced good results, only the cost of the power losses is considered in the objective function, and the installation and operating costs are ignored. A population-based metaheuristic algorithm based on the concept of the explosion of a mine bomb, called the "Mine Blast Algorithm" (MBA), is proposed in [28]. The effectiveness and superiority of the MBA have been studied on several engineering problems in terms of function evaluations and found to be better than contemporary methods [28]. As discussed earlier, the recent approaches are unable to cater for the optimal installation and operation costs of the capacitor banks. The major contributions of this paper are to reduce both the active and the reactive power loss, thereby improving the voltage profile of the network, along with reducing the total annual cost of installation and operation of the capacitor banks. At first, the locations for the placement of the capacitors are examined and arranged on the basis of the Voltage Stability Index (VSI) [29]. Afterwards, the MBA is implemented to determine the optimum sizes and locations for capacitor placement from the VSI-ranked nodes. It is pertinent to highlight that the proposed method arranges nodes only on the basis of the VSI, unlike the selection based on the LSF as done in [25], [26]. Sorting the nodes on the basis of the VSI is preferred over the LSF because reactive power injection is more closely related to voltage stability than to the losses. The proposed method is implemented in MATLAB and the results are validated on the standard IEEE 33 and 69 node networks. To observe the effectiveness of the proposed work, the results are compared with other algorithms, namely the Strawberry Plant Propagation Algorithm (SPPA) [4], [30], the Genetic Moth Swarm Algorithm (GMSA) [31], and the Flower Pollination Algorithm (FPA) [32].

The rest of the article is organized as follows. Section III explains the problem formulation. In Section IV, the implementation strategy is explained. This section contains details of the methods and methodology, including the VSI, the MBA, and the complete set of steps for implementation. The results and the respective discussion are provided in Section V. This section also details the voltage quality index used in this work to quantify the improvement in the voltage profile. Finally, the conclusions are given in Section VI.

III. PROBLEM FORMULATION

A. Total Power Loss

The objective function of the optimal capacitor allocation problem is subject to various constraints. In this work, the improvement in the voltage profile and the reduction in the net annual cost are considered by reducing the active and reactive power loss in the network. The major objective for the optimal allocation of the capacitors in a radial distribution network is to mitigate the total power loss [33], i.e., the sum of the I²R losses over all branches (1), while satisfying all the operational constraints.

B. Total Annual Active Power Loss Cost

The total annual active power loss cost (N), measured in $/year and given by (7) [34], is also studied after the optimal capacitor placement, where the cost coefficients are listed in Table I [34]. The net saving ($/year) is calculated by taking the difference of the annual cost with and without the reactive power compensation. The main objective of this work is to locate the optimal location(s) and size(s) of the capacitor(s) so as to ensure minimum-loss operation, while maintaining the system within suitable operating limits.
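To make the loss and cost bookkeeping concrete, the sketch below (written in Python purely for illustration; the authors' implementation is in MATLAB) computes the total branch I²R losses and the annual loss cost. The energy-cost coefficient of 525.6 $/kW/year (0.06 $/kWh over 8760 h) is inferred from the reported base-case figures (208.18 kW against 109419 $ and 225 kW against 118260 $); the capacitor purchase/installation term depends on the Table I coefficients, which are not reproduced here, so it appears only as a placeholder argument.

```python
# Illustrative sketch of the loss/cost bookkeeping; not the paper's MATLAB code.
KP = 525.6  # $/kW/year, i.e. 0.06 $/kWh * 8760 h; consistent with the quoted
            # base cases (208.18 kW -> ~109419 $ and 225 kW -> 118260 $)

def total_losses(branch_currents_pu, branch_R_pu, branch_X_pu, s_base_mva=100.0):
    """Total active/reactive losses (kW, kVAr) from branch currents in p.u."""
    p_loss = sum(abs(i) ** 2 * r for i, r in zip(branch_currents_pu, branch_R_pu))
    q_loss = sum(abs(i) ** 2 * x for i, x in zip(branch_currents_pu, branch_X_pu))
    return p_loss * s_base_mva * 1e3, q_loss * s_base_mva * 1e3

def annual_cost(p_loss_kw, injected_kvar=0.0, kc_per_kvar=0.0):
    """Annual cost = energy cost of losses + (assumed) cost of injected Vars."""
    return KP * p_loss_kw + kc_per_kvar * injected_kvar

# Base-case check for the 69-node system (225 kW of loss):
print(annual_cost(225.0))  # -> 118260.0 $/year, matching the reported value
```

The net saving is then simply the base-case annual cost minus the annual cost after compensation, exactly as described above.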
The problem has two parts; the location selection, which is from a limited number of nodes in the network, and the size of capacitor(s) for the selected node(s) in the former part which will be a continuous variable. Keeping in view the nature of the problem, it is split into two parts. Firstly, the nodes are ranked according to the VSI [35]. In a large distributed network, the algorithm have to search for optimal location by inspecting every node for the candidate location. This increases the evaluation time in finding the optimal solution. Hence, due to VSI, the search decreases due to this initial estimation of candidate nodes. Secondly, the optimization algorithm is utilized to locate the optimal node from the selected candidate nodes in the former part and find the appropriate size(s) of the capacitor(s) to be placed. A. Voltage Stability Index (VSI) The degree of voltage stability of the networks can be calculated using the VSI which identifies the most vulnerable node as the candidate location for the allocation of the capacitor in the power system [35]. The search space in the implemented optimization algorithm is significantly reduced by this initial estimation of candidate nodes. Consider a two-node equivalent system having line impedance of ( ) ( ) R j jX j  and a load of connected to node 2 m as shown in Fig. 1. The VSI is given as B. Mine Blast Algorithm (MBA) Sadollah, Bahreininejad, Eskandar, and Hamdi developed a population-based mine bomb explosion-inspired optimizer called "Mine Blast Algorithm" (MBA) [28]. The idea behind the implemented algorithm is based on the observance of a mine bomb explosion, in which the scattered fragments from the mine bomb collide with many other mine bombs in the vicinity of the explosion, resulting in their explosions [36]. To better understand the condition, consider a minefield for which the primary goal is to clear up the minefield from mines. The objective, therefore, is to locate the explosive mines, whilst it is necessary to locate the most explosive one placed at the optimum location X*, which may create the maximum casualties. The algorithm begins via an initial value termed as a first shot point 0 . For exploring a new location, an exploration factor ()  is initiated. The  factor is used during the initial iterations of the algorithm which is then compared to the iteration value . k If the value is greater than iteration value , k the exploration process starts. The equations used to explore the solution search space are as: 2 The direction and the distance of shrapnel pieces in the exploitation process are given as: where F at location 1 t  represents the fitness function. The ability of global search is increased when the initial distance of the shrapnel pieces is gradually reduced to enable mine bombs to search for a likely global minimal location. The  being a user parameter is introduced to achieve a convergent optimized solution. The distance is formulated as At the last iteration, the shrapnel pieces distance value will be approximately zero. The schematic figure of the implemented algorithm portrays two aspects (the colour lines indicate the exploring process and the black colour lines are the exploitation process, see Fig. 2). C. Steps of Implemented Methodology 1. Read the system input data and run the base case load flow. 2. Calculate VSI of every node and rank the nodes in the ascending order according to the VSI. 3. 
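Before the implementation steps, a short sketch of the VSI ranking of Section IV.A may help. The paper refers to [35] for the exact expression; the sketch below uses the widely cited radial-feeder stability index attributed to Chakravorty and Das, which is assumed, not confirmed by the text, to be the variant intended. The ranking logic (lowest index first) follows the paper, and Python is used only for illustration.

```python
# Hedged sketch of VSI-based node ranking; the paper's own code is MATLAB and
# the exact index expression is the one cited from [35].

def vsi(v_send, r, x, p_recv, q_recv):
    """Stability index of the receiving node of one branch (all in p.u.).

    v_send : sending-end voltage magnitude
    r, x   : branch resistance and reactance
    p_recv, q_recv : total active/reactive power fed into the receiving node
                     (its own load plus everything downstream)
    A lower value indicates a node closer to voltage collapse.
    """
    return (v_send ** 4
            - 4.0 * (p_recv * x - q_recv * r) ** 2
            - 4.0 * (p_recv * r + q_recv * x) * v_send ** 2)

def rank_candidate_nodes(branches, n_candidates=10):
    """branches: list of dicts with keys 'to', 'v_send', 'r', 'x', 'p', 'q'.
    Returns the n_candidates receiving nodes with the lowest VSI, i.e. the
    priority list handed to the optimizer (steps 2-3 of the methodology)."""
    scored = [(vsi(b["v_send"], b["r"], b["x"], b["p"], b["q"]), b["to"])
              for b in branches]
    scored.sort()  # ascending: most vulnerable nodes first
    return [node for _, node in scored[:n_candidates]]
```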
C. Steps of Implemented Methodology

1. Read the system input data and run the base-case load flow.
2. Calculate the VSI of every node and rank the nodes in ascending order of the VSI.
3. Impose a limit on the number of nodes to be considered for capacitor placement, i.e., set the value of "nvar". In this study, the top ten nodes in the VSI list developed in step 2 are considered.
4. Set the upper and lower limits of the problem, i.e., the capacitor sizes (according to reactive power injection) to be considered for placement by the algorithm. The range of 0 MVar to 1.5 MVar is taken in this work.
5. Initialize the algorithm by setting the iteration count, population size, exploration factor, and reduction factor.
6. Check the exploration factor (μ) condition.
7. If the condition of the exploration factor is met, i.e., μ > iter, then:
 - calculate the location of a piece, i.e., a node in the network, using (12);
 - calculate the distance of a shrapnel piece, i.e., the capacitor size, according to (11) at the respective location;
 - calculate the direction of the pieces using (14);
 - calculate the fitness function in terms of loss.
8. Evaluate the function value for any improvement.
9. If improved, swap the shrapnel piece's position with the best temporary solution. If not, go to step 12.
10. Save the finest piece of shrapnel, i.e., the capacitor size, as the best temporary solution.
11. If μ ≤ iter, then generate the shrapnel pieces for the exploitation process.
12. Calculate the distance and location, i.e., the capacitor size and node, respectively:
 - using (10), calculate the location;
 - calculate the distance of the pieces using (13);
 - inspect the constraint parameters for the generated pieces of shrapnel.
13. Adaptively reduce the distance of the shrapnel pieces, i.e., the searched area for capacitor sizes, using (15).
14. Analyse the convergence criteria. If satisfied, end; else, go to step 6.
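A compact sketch of how steps 5-14 could look in code is given below. It is written in Python purely for illustration (the paper's implementation is in MATLAB), and the update rules are simplified stand-ins for equations (10)-(15), which are not reproduced in this text. The shrapnel count, iteration limit, exploration factor μ, reduction constant α, and the toy fitness at the end are assumptions for demonstration, not values taken from the paper; in the actual method the fitness evaluation is a radial load-flow solution returning the total loss and the cost terms of Section III.

```python
# Simplified MBA-style loop for capacitor sizing on the VSI-ranked candidate
# nodes. The true update equations (10)-(15) from [28] are not reproduced;
# `power_loss` is a stub standing in for the radial load-flow fitness.
import math
import random

def power_loss(sizes_mvar, nodes):
    """Stub fitness: total active power loss (kW) after placing capacitors of
    `sizes_mvar` at `nodes`. In the paper this is a load-flow calculation."""
    raise NotImplementedError

def mba_capacitor_sizing(candidate_nodes, fitness=power_loss,
                         n_shrapnel=20, max_iter=100,
                         mu=10, alpha=50, q_max=1.5):
    """Search capacitor sizes in [0, q_max] MVAr for each candidate node."""
    dim = len(candidate_nodes)
    best_x = [random.uniform(0.0, q_max) for _ in range(dim)]  # first shot point
    best_f = fitness(best_x, candidate_nodes)
    dist = [q_max] * dim                                       # initial shrapnel distance

    for it in range(1, max_iter + 1):
        for _ in range(n_shrapnel):
            if mu > it:   # exploration: wide random moves while mu > iter
                trial = [min(q_max, max(0.0,
                         best_x[d] + dist[d] * random.uniform(-1.0, 1.0)))
                         for d in range(dim)]
            else:         # exploitation: tighter moves around the best piece
                trial = [min(q_max, max(0.0,
                         best_x[d] + dist[d] * random.gauss(0.0, 0.5)))
                         for d in range(dim)]
            f = fitness(trial, candidate_nodes)
            if f < best_f:  # keep the "finest" shrapnel piece found so far
                best_f, best_x = f, trial
        # adaptively shrink the shrapnel distance (exponential reduction via alpha)
        dist = [q_max / math.exp(it / alpha)] * dim

    return candidate_nodes, best_x, best_f

# Toy usage with a made-up fitness whose minimum is 0.4 MVAr at every node:
toy = lambda sizes, nodes: sum((s - 0.4) ** 2 for s in sizes)
nodes, sizes, value = mba_capacitor_sizing([18, 17, 33, 32, 16], fitness=toy)
```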
V. RESULTS AND DISCUSSION

To assess the efficiency of the proposed method for the optimal allocation and sizing of capacitors, two standard radial distribution networks, i.e., the 33-node and the 69-node systems, are used as the test bench. The MBA is implemented in MATLAB. The results are compared in terms of the voltage levels at the nodes of the network, the total active and reactive power losses, the annual cost, and the net savings. To better quantify the improvement in the voltage profile, the voltage quality index (VQI) [37] of the network is also calculated, in percent, from the spread of the node voltages about the nominal voltage, where Vnom denotes the nominal voltage magnitude in p.u., which is considered to be unity in this case, and Vmax and Vmin denote the maximum and the minimum value of the voltage at a particular node in the power network.

A. 33-Node Network

In the first case, the standard IEEE 33-node system is used. The line and node details are taken from [38]. It comprises 32 branches and 5 tie lines. The total substation active and reactive power loads are 3.7 MW and 2.3 MVar, respectively. All the measurements are done in the per-unit system. The base values of 100 MVA and 12.66 kV are considered in this implementation. The test system is displayed as a single line diagram in Fig. 3.

Fig. 3. IEEE-33 node system single line diagram.

The radial distribution network has heavy inductive loads that cause low voltages at the nodes. These low voltages can be raised by connecting capacitors to the nodes; the capacitors deliver a portion of the reactive power consumed by the load, thus reducing the current flow and the losses. The active and reactive power losses for the base case (before compensation) are 208.18 kW and 138.9 kVAR, respectively. After implementing the MBA, the losses are considerably decreased due to placing the appropriately sized capacitor(s) at the optimal location(s). Figure 4 highlights the reduction in active power loss over the individual branches, which accumulates to about 34.13 % with the MBA (see Table II). The figure also shows that the MBA produced better results than the SPPA. The loss reduction with the SPPA is approximately 29.9 % of the base-case losses. The reductions of the reactive power losses in the network with the MBA and the SPPA are compared with the base case and are shown in Fig. 5. It is observed that the reduction of the reactive power losses from the base case is 33.69 % with the MBA and 27.4 % with the SPPA, as detailed in Table II. The comparison of the voltage levels for the base case, the MBA, and the SPPA is shown in Fig. 6. In this study, the external grid is connected to node 1, which is considered as the reference, and the voltage at it is taken to be 1 p.u. The lowest value of the voltage in the base case occurs at node 18, i.e., 0.9125 p.u., as shown in Table II. After the capacitor placement with the MBA, the voltage profile improved significantly and the minimum voltage is observed at node 18, i.e., 0.9405 p.u., which is slightly less than the 0.9461 p.u. at node 14 obtained from the SPPA. The detailed information on the optimal location, optimal size, power losses, and minimum voltage for the different algorithms is summarized in Table II. It can be seen that the yearly net cost after implementing the MBA is reduced from 109419 $ to 82277.3 $, which amounts to a percentage saving of 24.81 %. Table II also highlights the effectiveness of the MBA in comparison to the GMSA, FPA, and SPPA in terms of the different variables under consideration, despite injecting a smaller amount of reactive power (Vars). The total Vars injected by the MBA are 1683.2 kVAR, which is significantly less than the 3000 kVAR of the GMSA and the 1800 kVAR of the FPA, and close to the 1658 kVAR of the SPPA. Although the percentage active power loss is very slightly better in the case of the GMSA and the FPA than with the MBA, their injected Vars are higher and the minimum recorded voltage is lower. The comparison of the net saving and the annual cost also indicates the better performance of the MBA over its counterparts. It is also worth mentioning that the cost of installing the capacitors would be lower, because the improved performance of the network in terms of losses and voltage profile was observed with less Var injection by the proposed method, as depicted in Table II as the cost of injected Vars.

Fig. 6. IEEE-33 node system voltage profile.
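As a quick sanity check on the quoted 33-node figures, the snippet below recomputes the post-compensation loss, the percentage annual saving, and a voltage quality index of the assumed form VQI = [1 - (Vmax - Vmin)/Vnom] x 100 %; that form is an assumption, since the exact definition from [37] is not reproduced in this text, but it lands close to the reported 94.01 %. Running the same arithmetic on the 69-node figures reported below reproduces the 24.5 % saving and the 93.2 % VQI in the same way.

```python
# Recomputing the headline 33-node results from the values quoted in the text.
base_loss_kw, mba_loss_reduction = 208.18, 0.3413
base_cost, mba_cost = 109419.0, 82277.3
v_max, v_min, v_nom = 1.0, 0.9405, 1.0        # slack node and node 18 after MBA

loss_after = base_loss_kw * (1 - mba_loss_reduction)      # ~137.1 kW
saving_pct = 100 * (base_cost - mba_cost) / base_cost      # ~24.81 %
vqi_pct = 100 * (1 - (v_max - v_min) / v_nom)              # ~94.05 % (assumed form)

print(round(loss_after, 1), round(saving_pct, 2), round(vqi_pct, 2))
```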
B. 69-Node Network

In this case, the IEEE 69-node radial system is used. The system data are taken from [38]. The per-unit base ratings for the system are 12.7 kV and 10 MVA. The test system is given as a single line diagram in Fig. 7. It has 68 load nodes and one slack node. The total active and reactive loads are 3.8 MW and 2.7 MVar, respectively. Before compensation, the system's total active and reactive losses are 225 kW and 102.2 kVAR, respectively, whereas the annual power loss cost is calculated to be 118260 $. The reduction in active power loss before and after the optimal capacitor placement with the MBA and the SPPA can be seen in Fig. 8. In total, a loss reduction of 34.1 % with the MBA and 33.4 % with the SPPA is observed in comparison to the base case, as shown in Table III. In Fig. 9, the reduction of the reactive power losses after implementing the MBA and the SPPA is given in comparison with the base case. There is a 32.46 % and a 31.74 % reduction in reactive power loss with the MBA and the SPPA, respectively, as given in Table III. As before, node 1 is taken as the slack node, having a maximum voltage of 1 p.u. Before compensation, the lowest value of the voltage, i.e., 0.91309 p.u., occurs at node 65, as shown in Table III. After compensation with the MBA, the lowest voltage value of 0.9320 p.u. occurs at node 65, whereas with the SPPA the minimum voltage of 0.9337 p.u. occurs at node 64. It is noted that the SPPA has a slightly better minimum voltage than the MBA; however, this is because the kVAR injected by the SPPA is higher than that of the latter. The implementation of the MBA and the SPPA enhances the overall voltage profile of the network, which is compared with the base case in Fig. 10.

Fig. 8. IEEE-69 node system active power loss.

In Table III, the detailed information on the optimal location, optimal size, power losses, and minimum voltage obtained from the different algorithms is summarized. The effectiveness of the MBA is compared with the GMSA, FPA, and SPPA in terms of the considered variables. The yearly net cost after the placement of the capacitors is reduced from 118260 $ to 89294 $, which gives a percentage saving of 24.5 %. The Vars injected by the MBA are 1780 kVAR, i.e., significantly less than those of the GMSA (2750 kVAR), the FPA (1950 kVAR), and the SPPA (1828 kVAR). Although the active power loss reduction given by the GMSA and the FPA is slightly better than that of the MBA, their injected Vars are higher and the recorded minimum voltage is lower. The higher injected Vars also account for a lower percentage saving. The better performance of the MBA in terms of net saving and annual cost compared with the other algorithms is shown in Table III. In addition, the cost of the installation of the capacitors with the proposed method is lower, as fewer Vars are injected, yet it gives improved performance in terms of losses and voltage profile compared with the other counterparts.

VI. CONCLUSIONS

The problem of active and reactive power loss minimization in the radial distribution network is solved using the novel MBA, subsequently improving the voltage profile and reducing the net cost. The initial sorting and selection of the nodes is done through the voltage stability index (VSI). The results with the MBA are compared to the results with the SPPA, GMSA, and FPA in both of the considered networks. For the 33-node network, the active and reactive power loss reductions are 34.13 % and 33.69 %, respectively. Similarly, in the 69-node network, the active and reactive power loss reductions are 34.1 % and 32.46 %, respectively. The voltage quality index is improved after capacitor placement with the MBA to 94.01 % and 93.2 % in the 33- and 69-node networks, respectively. Despite improving the results significantly, the total Vars injected with the MBA are reduced, leading to an increased annual saving of 24.8 % and 24.5 % in the 33- and 69-node networks.

CONFLICTS OF INTEREST

The authors declare that they have no conflicts of interest.
2021-09-27T18:41:07.237Z
2021-08-17T00:00:00.000
{ "year": 2021, "sha1": "f9da1e50d2bee7b2b226a0d5012fbe1efa757091", "oa_license": "CCBY", "oa_url": "https://eejournal.ktu.lt/index.php/elt/article/download/28917/14972", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a1f5bf60b15263d8fea6c1983a0bc642a3e45bc9", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
245454809
pes2o/s2orc
v3-fos-license
Exploring brand symbolism amongst 10 year-old urban Pakistani children

This research uses consumer culture theory and thematic analysis to study the phenomenon of brand symbolism and self-image in 10 year old Pakistani boys from the high socioeconomic class. Results reveal that 10-year-old Pakistani boys want to be seen as intelligent and mature. They start benchmarking themselves against an ideal self-image and also develop an understanding of symbolic consumption. Their sense of how different brands correlate with different age groups is well developed, and their own consumption is moving in favor of brands and product categories that are patronized by adults.

Introduction

Marketing scholars across the world now recognize the growing potential of the child, both as a consumer and as an influencer of the consumption decisions of his/her caregivers (McNeal 1992; Calvert 2008; Chaudhary and Gupta 2012). As a result, brands are directly targeting children, as well as their parents through them, for products ranging from confectionary items, to laundry detergents, to bank accounts. Therefore, understanding how young children perceive and register the distinctions between brands is critical to aid further development in young consumers research (Belk et al 1982). Children's recognition of consumption choices and their demonstration of brand symbolism has been well documented as comprising a progression up the steps of a defined ladder of cognitive and social development (Piaget 1964; Selman 1980; John 1999), engendering a generation of discourse around children as consumers (Carlson and Grossbart 1988; Valkenburg and Cantor 2001; Arnould and Thompson 2005). The spotlight on child-centered consumption, with its determination of sense of self (Young 2005), purchasing power, and agency, stands notably distinct from earlier notions on the subject, which assigned higher importance to the role of gatekeepers in making consumption decisions for the child (Berey and Pollay 1968), with/without the child being aware of the choices themselves. A positive and growing relationship between agency and purchasing power (the social and financial freedom to make consumption choices) and consumer socialization (Ward 1974; Moschis and Smith 1985) is often seen as an underlying assumption in this particular field of study. However, this development-centric approach does not account for the fundamental differences in culture which challenge these assumed boundaries of agency (Belk et al 1984), and by extension the availability of consumption choices, particularly in developing countries such as Pakistan, where the familial structure is rather dismissive of children's opinions, and the parents' role is more of a gatekeeper than a facilitator for their children's experiences of consumer products (Anitha and Mohan 2016). The alternate framework that supports and encourages the investigation of such cultural differences is collectively known as Consumer Culture Theory (henceforth CCT) (Arnould and Thompson 2005). Through this study, we explicate the nuances of a child consumer in Pakistan, with a specific focus on their understanding of brands and related associations. We also explore the role of the social environment in shaping perceptions related to brands among 10 year-olds.
The paper is organized as follows: it begins with a literature review, followed by a description of the research aim, including the specific research questions and the research methodology, continuing with the thematic data analysis and findings, and ending with a conclusion that contrasts the contextual differences between our findings and prior research. We will use young consumer and child consumer as interchangeable terms to refer to our research participants.

Literature review

This literature review is structured to bring to the fore existing theories on young consumers and highlight the gap that this research is designed to fill. The section opens with a discussion about the evolution and importance of young consumers. Next, it reviews cognitive and social theories of the development of children as consumers. It then highlights the role of socializing agents in informing children's consumption practices. Then the review moves to a discussion of John's (1999) model of consumer development. This model is specifically used in this research to identify the 'appropriate age' of participants for the study. Furthermore, a discussion about children's sense of brand symbolism is presented. As the review progresses, the discussion unfolds criticism of the developmentalists' approach to studying consumption and therefore the need for a culturally informed study of consumption. This section closes by explaining the usefulness of using Consumer Culture Theory in studying brand symbolism among young consumers.

There is consensus in the literature that definitions of 'consumer' generally indicate a person who is able to "(1) feel wants and preferences, (2) search to fulfill them, (3) make a choice and a purchase, and (4) evaluate the product and its alternatives" (Mowen and Minor 1998, in Valkenburg and Cantor 2001). However, the demarcation of children as a consumer segment has been considerably more contested, evolving over decades. From initially being considered passive subjects onto which parental consumption decisions were superimposed, which Berey and Pollay (1968) term the 'gatekeeper effect,' children have moved on to being independent decision makers with agency and purchasing power that has only increased over time (McNeal 1992; Calvert 2008), particularly in developed markets such as the United States. According to Lopez and Rodriguez (2018), children 'understand and conceptualize' brands. Children are now understood to command a three-way market potential (McNeal 1992) as current and future markets as well as influencers; they fall into any of these segments or multiple segments at any given time. They become decision makers at a very young age when they are allowed to select their own treats and toys (John 1999), when they are entrusted with shopping for groceries (McNeal 1992), or when they exercise their influence on their family's consumption patterns, a practice widely regarded as 'pester power' (Furnham and Gunter 2008; Anitha and Mohan 2016). We have further evidence of such consumption having long-term implications, as childhood experiences greatly influence brand preferences in later years (Guest 1964; Ward 1974), lending credence to the notion that childhood consumption behavior is a significant predictor of adult preferences in consumption (Lesáková 2011).
Moreover, understanding the role of the child as a consumer has been at the forefront of research on young consumers' behavior, primarily because it has been associated with the origin of conscious consumption that has generally been regarded as the outcome of a sequential learning process defined in stages across a child's cognitive development (Piaget 1964;Selman 1980;John 1999). According to the Piagetian model of cognitive development, children in their preoperational stage are able to recognize the perceptual differences between brands, and children aged 7-11 are able to make complex consumption choices, undergoing consumer socialization. Ward (1974), who authored the term, defines consumer socialization as "the process by which young people acquire skills, knowledge and attitudes relevant to their functioning as consumers." Children go through stages of development both cognitively and socially; these developments form the context in which consumer socialization takes place (John 1999). He recognized that cognitive abilities help consumers compare and evaluate products before making their final choice and age-related social development helps consumers to understand the importance of interpersonal relationships. Subsequent researchers have highlighted the distinction between these paradigms with three types of theories being associated with consumer socialization ranging from the developmental lens to schools of thought associated with social learning to theories of social systems (Moschis and Smith 1985). Developmental theorists in consumer behavior use the Piagetian model of https://ir.iba.edu.pk/businessreview/vol13/iss2/7 DOI: https://doi.org/10. 54784/1990-6587.1033 Published by iRepository, December 2020 S. Husain, A. Rashid development as the baseline, that introduces specific tiers or stages of cognitive development that children go through, over corresponding chronological age ranges (Piaget 1964). At each stage, the child develops sequentially advanced cognitive capacities, evolving from 'perceptually bound' children in the earlier stages to adolescents who demonstrate analytical capabilities (John 1999). The commonalities in literature based on the developmental perspective suggest that 'consumption' develops as an abstract idea with the capacity of a child to understand information that includes the development of perspective taking (Selman 1980). According to Valkenburg and Cantor (2001), children acquire the four defining characteristics of consumption stated earlier at each stage of development. Simply put, children take more information into account as they mature (Selman 1980). Integrating the cognitive and social development theories, John (1999) identified three stages that children go through: -Perceptual stage (ages 3-7) is focused towards those features of the marketplace which can be observed easily; -Analytical stage (ages 7-11) is marked by significant improvement in information processing and developing a sophisticated understanding of the marketplace; and -Reflective stage (ages 11-16) is when children develop a sound understanding of consumption embedded in their societal setup. The social learning and social systems perspectives differ from developmental models in their emphasis on environmental forces, which are termed 'socialization agents' (Moschis and Smith 1985). 
Socialization agents can be persons or organizations that are the primary institutions of influence in a child's life because of their structural/familial authority or relationship (Moschis et al 1984). Ward (1974) believes that these agents guide children into being part of the consumer culture by teaching them the 'social significance' of commodities and their contribution to individual and societal relationships. Reimer and Rosengren (1990) counted at least eight types of socialization agents that are present in every person's life including family, religious and educational institutions, legal systems, and peer groups among others. Knowledge about consumption choices in early years comes from peers (Hawkins andConey 1974 in John 1999), parents, and mass media (Robertson et al 1979in John 1999. There is evidence in literature to indicate that the number of purchase requests made by children is linked directly to the amount of time spent on commercial TV viewing (Galst and White 1976) governed in varying degrees by parental control over their access to information (Ward 1974;Bindah and Othman 2011). Parental involvement influences how young children perceive brand placement (Hudders and Cauberghe 2018). Early ideas of 'price' and value for money, in particular, have been considered strongly linked with parent-child communication ( (Moschis and Moore 1979). John (1999) further differentiates between structural knowledge (about product categorization and specific brand names) and symbolic knowledge ("the symbolic meaning and status accorded to certain types of products and brand names"), the acquisition of the latter leading to the recognition of higher social functions of consumer culture which adults use to make inferences about 120 Business Review: (2018) 13(2):117-131 https://ir.iba.edu.pk/businessreview/vol13/iss2/7 DOI: https://doi.org/10. 54784/1990-6587.1033 Published by iRepository, December 2020 Exploring brand symbolism ... individuals/households. Similarly, in the "analytical stage" (John 1999) children start to form judgements about people after analyzing the products they are using (Achenreiner 1997;Belk et al 1984). According to John (1999), in the "perceptual stage" children have a very different sense of material value as compared to the analytical stage during which they gain the capability to make social comparisons based on possessions. By sixth grade, children develop a good sense of the symbolic meaning behind brand names (John 1999). Young (2005) categorized children aged 6 and above in the middle childhood stage (corresponding with John's analytical stage), in which they become more strategic in their thought and behavior; their sense of humor, and vocabulary. Their ability to plan their attention and draw comparison with their peers also starts developing at this stage (Young 2005). By the time children enter their tweens (ages 8-13 years), they develop an ability to manage their impression, so as to remain socially acceptable. This reflection aligns with Belk et al (1984) study which shows that by middle childhood, children develop a sense of brands and people who possess them generating what Belk refers to as 'stereotypes' based on observable consumption/possession differences. Souiden and M'saad (2011) define brand symbolism as an understanding of a brand's meaning in the mind of a consumer through the association of images, concepts, and emotions attached with its possession or consumption. 
It involves perceptions of both brand use and the brand user, (McAlister and Cornwell 2010). The emergence of the consciousness of brand symbolism has traditionally been understood as an outcome of the Piagetian developmental cycle (John 1999); however, recent research has observed signs of symbolic understanding in children occupying the pre-operational stage as well (McAlister and Cornwell 2010). Furthermore, Souiden and M'saad (2011) have studied the use of brand symbolism as a differentiating tool between social groups based on what using the brand communicates about its user with the most symbolic brands being synonymous with the attributes attached to their consumption (Escalas and Bettman 2005). Research by Nairn et al (2008) further sheds light onto children's preference of brands based on the meanings they associate with their use/non-use, expanding Belk et al (1984) earlier contribution by introducing an innovative qualitative methodology that seeks to make sense of the nuances of consumer perception that define young individual consumers, rather than the developmentalist approach of 'one size fits all' towards recognition of symbolism. It is important to note that while exercising developmental inclinations in his research, Belk calls for a broadening of the research framework of developmentalism and has been one of the first to present a critique for the Piagetian Model (Belk et al 1984). In his critique, he points out the model's exclusive focus on chronological age (other demographic factors are not accounted for), the predominance of the cognitive perspective, and the seeming cultural isolation of children from broader social differences across the developing vs the developed world (Belk et al 1984). While developmental theory has progressed over time, the cognitive devel- (2008) argue against such an approach, asserting that "it cannot shed much light on the meanings and uses of specific brands for children in relation to the social and cultural contexts of their everyday lives." Hence, scholarship needs to look beyond linear cognitive and social development in order to factor in sociological and cultural differences in different communities of children, engendered in the CCT which is "a family of theoretical perspectives that address the dynamic relationships between consumer actions, the marketplace, and cultural meanings" (Arnould and Thompson 2005). Nairn et al (2008) have been instrumental in introducing CCT as an alternate to the developmental approach of studying children and their relationship with brands, negotiating gendered identity of the child consumer by investigating their associations of what they consider 'cool,' an association that is embedded very deeply in the sociocultural sphere surrounding them. Basing our research in a developing market, we employ the CCT approach. It allows room to explore sociological factors instrumental in shaping an understanding and perception of brands within young consumers. Research aim The objective of this research is to study young consumers' knowledge of brands and their mental representations of brands in a developing country context like Pakistan. By doing so, we aim to add to the theory on young consumers' sense of brand consumption by providing a culturally informed view to extant scholarship. This study incorporates views from 10-year-old boys about their exposure to different brands, perceptions developed for various brands when they compare choices and ability to explain choices about brands. 
The study was guided by the following specific research questions: -Which brands do 10-year-old boys in Karachi register from their environment? -What themes do they associate with the brands they name? -What is the role of the social environment in shaping brand perceptions? Methodology As mentioned before, we used the CCT approach, finding it more 'liberating' when compared with developmentalism as it encourages assessing contextual differences that vary across different consumer cultures, not being bound under the assumptions of homogeneity which stem from the Piagetian model. In this study, we recruited 10-year old boys belonging to affluent socio-economic households and attending top private primary school. According to John (1999), this age corresponds with the analytical stage and hence we began with an assumption that our participants would have a sophisticated awareness of their 122 Business Review: (2018) 13(2):117-131 https://ir.iba.edu.pk/businessreview/vol13/iss2/7 DOI: https://doi.org/10. 54784/1990-6587.1033 Published by iRepository, December 2020 Exploring brand symbolism ... marketplace. Participants were recruited from a school in a high socioeconomic area of Karachi, Pakistan. Being the largest metro area of Pakistan, housing a considerable portion of the urban Pakistani population, it was useful to gather insights from participants from Karachi. Since this is a qualitative exploratory study, the research pursued a purposeful sampling style. The boys belonged to the same study group in their school and were known to each other. Furthermore, they resided in one of the posh localities of the city. All these considerations enabled inquiry with children of affluent households. Economic affluence translates into peculiar media habits, consumption styles and preferences, which makes the sampling viable in terms of both the subjects' ability to patronize brands, as well as the frequency of purchases. We believe that working with a group familiar with each other enabled us to get more 'open' responses from the subjects. We used focus group in our research design. This data collection approach helped us in observing peer dynamics that influence consumption styles and preferences. Moreover, extant literature informs that research conducted with children is challenging as they may not understand the questions or researchers may struggle to decipher the context and motivation of the answer given by the respondents (Damay 2008). Keeping these insights in mind and following Nairn et al (2008) methodology, data was collected from participants through an engaging, in-depth discussion in a small focus group. Taking Damay (2008) advice, participants were not asked direct questions; they were instead engaged in a birthday planning process for a boy aged 10 years. This was done in order to engage young participants whereby their true opinions could be recorded using an ambiguous stimulus i.e. birthday party planning. The research followed a key criterion used by Ji (2002). Therefore, in order to discover whether relationships had been established by children with brands, researchers checked if they recalled brands by name in a product category and whether they could elaborate on their experience under the right circumstances (Ji 2002). One of the participating children was the son of the lead researcher on this paper. Due to this, the lead researcher did not conduct the focus group. 
Instead the co-author, who was an unknown person to most of the informant group, conducted it while the lead researcher observed. Informed consent of both the mothers and the children was taken prior to recruitment (Nairn and Clarke 2012). Since young respondents are not able to completely understand the implications of their involvement, advance permission from their parents/guardians was sought. The children were informed of the research in terms of a story/scenario. In order to ensure anonymity of the participants, we have assigned pseudonyms. Data analysis Thematic Analysis was used in the essentialist or realist manner (Braun and Clarke 2006) to report the experiences and reality of participants and to identify the most popular themes explaining mental representations of brands in the Business Review: (2018) 13(2):117-131 123 https://ir.iba.edu.pk/businessreview/vol13/iss2/7 DOI: https://doi.org/10. 54784/1990-6587.1033 Published by iRepository, December 2020 S. Husain, A. Rashid minds of 10-year old boys in a developing market context. This approach enabled the identification of major categories or themes (Saldaña 2009). Data analysis was conducted in a sequential manner as follows: -The focus group discussion spanning over an hour was audio recorded and later transcribed. Transcription was useful in recording the verbal responses of the participants and developing a rich understanding of data (Braun and Clarke 2006). Since both English and Urdu (the national language of Pakistan) were used for conducting the focus group, the transcription involved translation of responses from Urdu to English. During this process, we were careful to retain the essence of the participants' responses. The transcripts were proof read by the researchers individually to ensure accuracy. -Transcripts from the focus group were studied individually to identify recurring patterns and generate initial codes (Boyatzis 1998). Conscious effort was made to go beyond the superficial meanings of data and interpret it in a manner that could create an accurate representation of what the participants meant. This was an inductive process where data codes were developed throughout the research process and refined to arrive at meaningful start codes (Guest et al 2011). -Next, main themes emerged from the coding process that reflected participants' meanings. This was done in order to increase data dependability (Guest et al 2011). These themes also made meaningful contributions to answer research questions set out at the beginning of the research process. Subsequently, these main themes were explained (Braun and Clarke 2006) using examples from the data. -Finally, the emerging themes from the focus group discussion were consolidated to provide an overall holistic picture of the participants' feedback. Findings and discussion As mentioned previously, we used birthday party planning as an ambiguous stimulus to draw out responses from participants regarding their awareness of brands and the associations held with respect to marketplace offerings. Engaged in an animated discussion, 10-year old informants spoke about brands relating to food, shoes, toys and sports. With birthday as a focused consumption occasion in our discussion, participants' advice was based on their likes/dislikes and recommendations. This was an opportunity for us to explore the popularity of different brands and the mental representations linked with them. 
Our informants, 10-year-old Pakistani boys from high socioeconomic class households, were enthusiastic brand consumers with clear reasons behind their preferences and consumption decisions. They were not only conscious of their purchase reasons but were also able to articulate them. Hence, they were able to answer the question: "Why do you like a certain brand of pizza?" Recognition of symbolism (Escalas and Bettman 2005;McAlister and Cornwell 2010;Souiden and M'saad 2011) behind brands explains purchase and consumption choices. In the following sections, we identify popular themes and substantiate them using direct quotes (italicized) of participants from data. Functionality -value for money The first theme that emerged from our findings is value for money. Our respondents enthusiastically indicated a few food brands as their preferred choice because they were "not expensive". Damay (2008) studied the understanding of the word 'expensive' amongst children and explained that as time passes and their age increases, children's understanding of 'functionality' and abstract concepts like 'utility' develops. The research participants in the study assessed available alternatives in terms of 'value for money' and declared the best deal as their preferred choice. Children compared different pizza brands and recommended particular brands as "their (14th street pizza) slices are very big and one slice is good for one child " as compared to Pizza Hut's whose pizzas were thought to be "smaller and more expensive". They not only had an opinion about the brands that delivered the best value for money, they also realized that "if you find better taste for cheaper price", it is something to feel proud of! Interestingly, value for money as a perception was not limited to branded offerings. It was also linked to counterfeit market offerings. In a developing market context, with an abundant supply of knockoffs available, it was no surprise to know that our participants were aware of them: "I go to the cricket academy and there is a boy there who brings a bat that says Adidis instead of Adidas". However, their openness to give these counterfeits a share of their purchase allowance was noteworthy. When asked whether they would buy copy or original figurines, the word "depends" was used multiple times by children. This can be understood as demonstration of an active evaluation of choices (in this case, original or copy) that children make; the differences between their choices may be fluid and complex. Informants' ability to compare popular brands to me-too market offerings was noteworthy: "I have a lot of Banbao legos. They are the same as Legos, the only difference is in maps. BanBao is easier to follow through the map. Once you complete a step, the whole picture is replaced by the next one where only the block that you have to work with is visible. The next step only highlights the block you are working on with a different colour. In Lego, the whole picture remains the same and it is more difficult to follow ". So as long as the experience is not significantly different, children were indifferent between popular, original, expensive brands and their look-alike, less expensive versions. Moreover, the use of legos as a generic term to refer to any construction blocks is also noteworthy. The sense of value for money prevailed for food consumption as well: "Zinger (popular burger from KFC) is juicy but the Burger Shack is juicier and cheaper ". 
Children made practical and informed decisions based on past experiences. It is important to note that the sense of value for money was not essentially linked to the cheapest option available. If a brand satisfied the child consumer, he found a higher price tag justifiable: "Pizza Hut's pizzas are smaller and more expensive than 14th Street's, but I love it! " John (1999) asserted that cognitive abilities help in making a product choice after careful comparison with other alternatives present in the environment. He further iterated that children in their 'analytical age' (7-11 years) analyzed Business Review: (2018) 13(2):117-131 125 https://ir.iba.edu.pk/businessreview/vol13/iss2/7 DOI: https://doi.org/10. 54784/1990-6587.1033 Published by iRepository, December 2020 S. Husain, A. Rashid product categories on multiple attributes and made generalizations on the basis of personal experience. Evidence from the research concurs with John (1999) hypothesis and indicates an ability to think about stimuli in a more careful and informed manner when a child is in the analytical stage. Research participants pronounced concrete reasons behind their preference and consumption of brands. Unique offer Several international food brands are popular in Pakistan. In addition to these, some local brands also offer fast food items while some others offer localized variety of street or fast food. From the broad alternatives available to informants, their preference was towards the international brands. As we explored the perceptions that shaped these choices, we identified our second theme: uniqueness of the offer. Respondents spoke like savvy consumers who were aware of the assessmentworthy attributes of different food brands. Some features mentioned were taste, secret recipe, size of the serving, and variety. Their perception was shaped by repeated consumption experience, media promotions and feedback from peers as well as immediate family members. A particular brand specializing in chicken was deemed best as "Nandos knows how to make their chicken...they have a secret recipe and they roast it". Similarly, informants knew exactly where to buy their donuts from: "order donuts from Dunkin Donuts...they have different sizes and so much variety in flavors, you can't find it anywhere else". Burgers were an especially emotive topic with foreign brands not just competing with each other but also with homemade burgers and the local variety, the bun kebabs. 1 Comparisons between these alternatives were articulated. Regarding homemade burgers, children were clear that "these burgers are good but not as good as the ones from restaurants because those have their own secret recipe". The idea of a secret recipe fascinated children. Our informants were confident about their choices and loyal consumers. Every time the moderator suggested a brand or alternative different from their own preferred choice, participants showed a conspicuous lack of enthusiasm and made it obvious that they did not want to know more about it. Although the respondents preferred modern fast food options over local alternatives, they were well aware of the places where the best local variety was available:"For tikkas, 2 go to Bundoo Khan, 3 as they have really good tikkas instead of wasting your energy trying to find the same taste elsewhere". Such knowledge, however, did not convince them to include local street food in their consideration set. 
The bun kebabs of street vendors were held in general disfavor as they were perceived to be "unhygienic because they are cooked outside". On prompting them to compare local street food to fast food, we were told "KFC and McDonald's are clean and they have proper kitchen and I Exploring brand symbolism ... have seen people wearing gloves". Our participants were knowledgeable about hygiene concerns and showed the ability to observe and compare different food alternatives. Symbolism A third theme that emerged from our data analysis was symbolism. Well-defined perceptions of a certain brand of shoes helped us to know how symbolism plays a role in shaping informants' shoe consumption choices: "Bata is where you would go for school shoes and casual shoes. If I need sports shoes then I go to Nike. Everyone (referring to friends) wears it and people know when you are wearing Nike". Participants had a clear sense of what brands stood for and how brands lent a sense of image to their consumers. There was also a sense of associating a brand with human-like features, for instance, 'upper-class' (MacInnis and Folkes 2017). Our birthday party planning agenda allowed us to steer the focus of the group towards toys that can become good birthday presents for a 10-year old boy. This pasture enabled us to understand that young consumers categorize this consumption along age. Our participants believed that: "7-year-olds like Spiderman action figures while we grew out of playing with figurines when we were 8-9 years old ". The consumption of toys was understood as specific to a certain age and once that age was crossed, those toys became redundant: "I had figurines to play with but now I have put them away. We now like to play with block set and video games". The group agreement on this statement is noteworthy. A participant went on to explain how these consumption choices are linked to age. Such symbolic understanding of consumption categories was not limited to toys. Participants held clear perceptions about food products and brands they wanted to be associated with: "Parties at KFC restaurant are for 7-year-olds! " Their need to be seen as mature young men was apparent in the discussion as they debated the beverage most appropriate for their age: "I like cappuccino with whipped cream on top from Espresso" and "green tea is my favorite". Furthermore, findings show that children use brands to assert that they are growing up or have grown up. Hence, 10-year-old Pakistani boys are inclined towards brands that are either challenging or used by the adults in their environment. Role of the social environment In addition to identifying the mental representations, we also wanted to explicate the role of social environment in developing such perceptions. Participants were aware of the socialization process that shaped them into their role as consumers. More specifically, we wanted to compare the developmentalist theories (John 1999) to the social environment theories (Reimer and Rosengren 1990;John 1999) to understand how social environment renders universalistic claims of the developmentalists work. S. Husain, A. Rashid Informants articulated how they perceived the consumer socialization process as helpful and evolutionary: "You first buy kids our age a small cell phone so that they learn to take care of it, not lose it. Then you buy them the expensive ones. That's what my parents did with my elder siblings". 
Participants' sense of progressively improving/developing consumption and their acceptance of the stages of becoming savvy consumers was informed by their environment: "First you have to start children with cell phones that are not smartphones those you can use only for calling and messaging. My mother gave me a basic phone when she was leaving for Hajj ". 4 Even those whose parents had refused to get them a phone until later had a clear opinion about brand positioning: "iPod is the best as a starter for children...I would like to get a cell phone but my parents say I will get it when I am 18 ". Social environment (Ji 2002) is crucial in developing children to grow up as enlightened consumers. Participants identified their parents as the source of learning and indicated their parents' method to be ideal in graduating younger children to more advanced products or brands. Generally, it can be deduced that 10-year old boys in a developing country context are led by their parents. For example, some children were passionate about smartphones while others declared it to be a product category not yet meant for them as their parents had told them so. Parents play a strong role as socialization agents (Moore and Moschis 1983) and are a strong source of new brand knowledge as well as trial induction. Children indicated consumption of hot drinks like coffee and tea and were aware of brands that were not officially marketed in Pakistan: "My father gets Davidoff from Dubai ". However, influences from electronic sources and peers also played a strong role. TV content, being a significant socializing force in the environment, develops interest in related merchandise. Children stated clearly that when Pokemon ran "on TV, everyone was talking about them and buying them too". Once the cartoon was over, demand for its merchandise fell as well. Such insights validate the influence of mass media on children. Children not only recalled the source of their knowledge but were also able to acknowledge how exposure led to peer pressure (Baker and Gentry 1996;Moschis and Smith 1985). This finding coincides with the consumer socialization literature and acknowledges the role of TV content on children's consumption patterns/behavior. Conclusion and way forward The researchers were privileged to be able to spend time with the young participants to gather data for this paper. This research identifies similarities and differences of previous researches with the local context. Overall, the findings of our research concur with the existing literature, wherein consumer socialization takes place via credible socialization agents, for instance, media, parents and peers. Symbolism plays a huge role for children this age; 10-year-old boys are capable of conscious analysis of brand use and communication. They recognize logos, local/foreign products, and so on. Exploring brand symbolism ... However, while viewing the research within the context of the debate between developmentalism versus social environmentalism, it emerges that the emphasis on functionality and value for money as a factor is more important to children rather than the mere recognition of symbolic differences. Children do develop a sense of what is age/gender appropriate for them (adding to Nairn's research and validating the CCT approach). However, parental influence and control over access to product categories also plays a significant role in certain categories, especially ones that require access to technology such as cell phones. 
Parental attitudes and behavior play a pivotal role in this instance. If only linear development was taking place, one would expect all children belonging to a similar demographic (same age and socio-economic background) to have homogeneous opinions and preferences. Instead, societal influence in the form of parents, peers, and media came up again and again during the research, clearly indicating how dominant they are in a child's consumer socialization. This study has the potential to act as a gateway for Pakistani managers to delve deeper into the power children hold as consumers and potential consumers. Managers need to be aware that children are influenced in various ways by their social context and target them accordingly. The uniqueness of this paper is based on the tween boys' participation which provides useful insights to managers on what influences the choices made by young consumers in their consumption choices. Exploring their mental brand representations from their viewpoint allowed us to incorporate their perspective. With this study, we were also able to incorporate one element from the social environment i.e. peers. Since the role of social environment was found to be significant in shaping the brand perceptions, future work can explore the role of parents more particularly. Specifically, a study with parents of the same children can be helpful in investigating the view of parents about their children's mental brand representations. This way forward can help in triangulation of data. Furthermore, a study with a bigger sample size and a positivist philosophy can be designed based on the preliminary insights from the current study to assess the spread of the phenomenon which is the perception of brands among 10-year-old urban Pakistani boys.
2021-12-25T16:18:09.880Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "f155dbb55aa9cb45ff848d0b731122e353e9f1d3", "oa_license": "CCBY", "oa_url": "https://ir.iba.edu.pk/cgi/viewcontent.cgi?article=1033&context=businessreview", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ab0b98ea0326ddc5cef7ef4afd30e3d2d6c02b9c", "s2fieldsofstudy": [ "Business", "Sociology" ], "extfieldsofstudy": [] }
225746565
pes2o/s2orc
v3-fos-license
Spontaneous Occlusion of Several Cerebral Venous Sinuses Mimicking Parkinson Disease - Idiopathic occlusion of nearly all cerebral venous sinuses in association with the widespread formation of dural arteriovenous fistulas (AVF) is an extremely rare condition. The cause-and-effect relationship between thrombosis and AVF is not known, but a disturbance in venous flow and distant stagnation have been mentioned as probable pathomechanisms. We present a patient who was misdiagnosed with Parkinson's disease and treated accordingly for weeks. Then rapidly progressive dementia and, shortly after that, an intracerebral hemorrhage occurred, and the diagnosis was established after Magnetic Resonance Imaging and Angiography. There was thrombosis of the whole venous sinus system and formation of numerous dural arteriovenous fistulas. The mechanism and diagnostic nuances are described in this paper, and the treatment options and prognosis are discussed. Introduction Occlusion of several cerebral venous sinuses as a complication of hypercoagulable states such as antiphospholipid syndrome has been reported previously (1). These patients usually present with signs and symptoms of increased intracranial pressure (ICP) or hemorrhagic infarcts. Multiple dural arteriovenous fistula (AVF) is another rare condition that has been reported a few times and may have an association with multiple dural sinus occlusion (2). The exact pathomechanisms and the cause-and-effect relationship between extensive thrombosis and multiple dural AVF are not known (2). The association with movement disorder is very rare (3). The timing of disease progression has not been reported; however, we found no radiologic sign of thrombosis or dural AVF in an MRI performed three years earlier for other reasons. We introduce a rare case of idiopathic thrombosis of the whole venous sinus system and widespread dural AVFs. After a misdiagnosis of Parkinson's disease and dementia, the disease was finally discovered after an intracerebral hemorrhage occurred. Clinical examination The patient is a 57-year-old man who presented with balance and gait disturbance, cognitive problems, and slowness of movement. The symptoms, slowness of movements and gait disturbance, had started insidiously one month before the first presentation. A clinical diagnosis of Parkinson's disease was made, and oral treatment was started. Later, rapid deterioration of gait disturbance, automatism, speech difficulty, and finally a disorder of orientation with a fluctuating nature developed. At presentation, GCS was 11-12. The patient was hospitalized in the neurology service with the first impression of Parkinsonism and rapidly progressive dementia. There was no history of seizure, and other neurological examinations revealed no positive finding. Imaging and paraclinical findings Laboratory blood tests for infectious and rheumatologic factors, including antiphospholipid antibodies, coagulation factors, and even prion disease (because of rapidly progressive dementia), were checked and were normal. Three years earlier, he had presented with benign positional vertigo, and an MRI performed at that time revealed no abnormality. However, the MR scan performed at the current presentation revealed multiple tortuous vessels dispersed throughout the whole brain (Figure 1). However, these findings were not followed by an angiography until a rapid deterioration of consciousness occurred, which was caused by a spontaneous intracerebral hemorrhage.
At this time, neurosurgery consultation was requested, and an emergent craniotomy was performed (Figure 2). Intraoperatively, large bleeding veins were abundant, and hemostasis was achieved with difficulty using vascular clipping and cottonoid buttressing. A few days later, the cottonoids were removed, and the patient underwent an angiography. Obliteration of nearly all cerebral venous sinuses was seen, and multiple extracranial bypass veins were evident. In addition, early filling of some extracranial veins proved the presence of arteriovenous (AV) fistulas (Figure 3). After consultation with an endovascular interventionist, obliteration of these AV fistulae was attempted. The most important feeders were the bilateral occipital and superficial temporal arteries, which were obliterated in two sessions (Figure 4). A few days later, the patient developed hydrocephalus and underwent a ventriculoperitoneal shunt procedure; however, after shunt placement, he developed a subdural hematoma that was evacuated emergently. The patient was stable thereafter, and GCS fluctuated around 8 to 10. Unfortunately, two months after admission, severe sepsis occurred that led to his death. Discussion The mechanisms that underlie the thrombosis of several venous sinuses and the formation of several AV fistulas are not completely known. Either of these two conditions may be the cause or the effect of the other (2). Congenital, infectious, traumatic, autoimmune, and hypercoagulopathic causes have been described elsewhere (4,5). Extensive and simultaneous occlusion of the venous sinuses has been reported in 71% of cases with multiple AV fistulas (6). Inflammatory pathomechanisms may be involved that cause the release of angiogenic factors (4). Therefore, venous sinus thrombosis triggers inflammatory responses leading to the formation of AV fistulas. On the other hand, an AV fistula causes stagnation and turbulence of venous flow distant to the venous sinuses and paves the way for thrombosis of several venous sinuses. The chronology and order are not well understood; however, in the current case, there was no clue to a venous abnormality in the MRI performed 3 years earlier, showing that this interval is the maximum time needed for the process to be completed. In patients with multiple dural sinus fistulas, the occurrence of cortical venous reflux is common, and this may lead to cerebral ischemia or intracranial hemorrhage (7). Moreover, in multiple AV fistulas, venous hypertension in the deep venous system occurs more frequently (8), and this may underlie the movement disorders seen in the current patient. The clinical presentation of multiple AV fistulas is usually more aggressive than that of a single AV fistula. Rapidly progressive venous hypertensive encephalopathy may cause hemorrhage, seizure, or neurological deficits. The rapid progression and extensive brain involvement differentiate it from simple AVF and cause a general decline in brain function and may cause memory loss, behavioral changes, and dementia (9,10). That is why the current patient was misdiagnosed with Parkinson's disease and had taken anti-Parkinson medication for several weeks. The diagnosis of multiple AV fistulas is suggested by initial imaging. Non-contrast brain CT scans may not help in uncomplicated cases but sometimes may show blood stasis. However, MRI may give several clues. Signal abnormalities and hyperintense areas (leukoaraiosis) in white matter and multiple serpentine veins with flow voids may be seen (11) (Figure 1).
MRI can be used for monitoring the progression or improvement of venous hypertension (12). Digital subtraction angiography (DSA) is considered the gold standard of diagnosis. However, because of the complex dynamics of venous circulation in multiple AV fistulas, visualization of all AVFs may be challenging (2). Due to the extensive involvement of the brain and marked abnormality in the venous circulation, treatment is essential. Obliteration of fistulas can be achieved by endovascular techniques, microsurgery, or stereotactic radiosurgery. Obliteration is usually performed in a staged manner, and fistulas with cortical reflux and a higher Borden/Cognard classification have priority (2). In the current case, a combination of techniques was used. Because of the intracerebral hemorrhage, we obliterated some venous channels with multiple clips. Then, endovascular obliteration was performed in two stages. If hydrocephalus occurs, it should be noted that shunting in such patients may be complicated by hemorrhage, either by injury to the numerous veins along the path of the ventricular catheter or by a reduction of ICP that may cause changes in the venous circulation. The prognosis is usually poor, and treatments are usually unable to stop the progression of the disease (13,14). However, improvement of symptoms, including dementia, with treatment has also been reported (11). This report introduces a case of multiple dural sinus thrombosis and multiple dural arteriovenous fistulae that presented with a movement disorder and was misdiagnosed as Parkinson's disease; later, while the patient was being evaluated for rapidly progressive dementia, an intracerebral hemorrhage occurred. This highlights the importance of a high index of suspicion for venous disease in patients with unusual presentations. Although treatment may not stop the progression of the disease in extensive cases, it can postpone some complications such as ICH.
2020-06-25T09:07:34.524Z
2020-06-20T00:00:00.000
{ "year": 2020, "sha1": "979c141484c06bea3e100e27704fef8efd0f4ef6", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/acta.v57i11.3270", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d3431fe59789bb8d7ab97ab209ee3b9913e681a2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
23988184
pes2o/s2orc
v3-fos-license
Manipulation of cytokine secretion in human dendritic cells using glycopolymers with picomolar affinity for DC-SIGN Dendritic cells bridge the innate and adaptive immune systems and they can be manipulated by star shaped glycopolymers. Introduction The recognition of repeating molecular patterns is a major function of the human immune system, enabling it to interact with and interpret various biological structures. These include the surfaces of pathogens such as viruses, fungi and bacteria, in addition to host structures such as glycoproteins and apoptotic cells. 1,2 C-type lectins are a major class of pattern recognition molecules in humans that interact with complex carbohydrates such as microbial polysaccharides and oligosaccharides present on human and viral glycoproteins and glycolipids. 3 In particular, the C-type lectin DC-SIGN (dendritic cell-specific intercellular adhesion molecule-3 grabbing nonintegrin; CD209; CLEC4L) is significantly implicated in human disease through its interactions with viral carbohydrates on HIV and mycobacterial lipoarabinomannans (e.g. tuberculosis), in addition to its ability to transduce intracellular signaling events and influence dendritic cell responses. [4][5][6][7] DC-SIGN is also important in the proper responses to specific apoptotic cell uptake by dendritic cells (DCs). 2 DCs are especially important components of the human immune system owing to their roles as highly efficient surveillance entities and professional antigen presenting cells. DCs are the only cell type with the ability to activate naïve T lymphocytes, making them very powerful bridges between innate and adaptive immunity. 8 NMR analyses of DC-SIGN-oligosaccharide interactions in solution suggest discrete binding modes of the carbohydrate-recognition domain for distinct glycan types that in turn are thought to drive different cell signaling pathways. 9 Furthermore, the distribution of DCs (and DC-SIGN-positive cells such as selected macrophage subpopulations) within discrete regions of tissues of the human body makes them attractive targets for site-specific therapeutic intervention in a broad range of diseases with immune system involvement. 10,11 Designed glycopolymers have been successfully generated for high affinity interactions with human DC-SIGN. [12][13][14][15] This has included the development of size- and sequence-controlled polymers, in addition to conformation-controlled polymers and glycoconjugates, collectively offering a diverse range of strategies for DC-SIGN targeting and exploitation. However, the concept of polymer architecture as a rational design strategy for DC-SIGN targeting with these classes of glycopolymer has yet to be employed and furthermore, the impact of advanced glycopolymers on dendritic cell function has yet to be investigated (Fig. 1). Knowledge of the tetrameric structure of DC-SIGN, wherein four mannoside-selective carbohydrate recognition domains (CRDs) are clustered at the top of a coiled-coil stalk projecting from the cell surface, 16,17 allows for the rational construction of novel glycopolymers for further affinity enhancement and potential biological impact. Understanding of the role of DC-SIGN stimulation in dendritic cell responses holds potential for efficient, high affinity ligands to serve as immunomodulators as well as antiviral agents. Materials and methods Instrumentation NMR spectra were recorded on a Bruker DPX-300 and DPX-400 using deuterated solvents.
FTIR spectra were recorded on a Bruker Vector 22 FTIR spectrometer. For SEC a Varian PL 390-LC system was used, equipped with refractive index and viscosimetry detectors, using either CHCl3 or DMF (LiBr 1 g l−1) as the eluent (1 ml min−1). Data analysis was performed using Cirrus software and molecular weights were determined relative to Polymer Labs pMMA standards. The LCST of the glycopolymers was determined by measuring the cloud point of a 5 mg ml−1 solution of the polymer in water with a Stanford Research Systems OptiMelt MPA100. Typical atom transfer radical polymerization procedure A Schlenk tube was charged with initiator (1.0 mmol), monomer (50 mmol), N-(ethyl)-2-pyridylmethanimine ligand (2.0 mmol) and toluene (same volume as monomer). Two different methods have been used to remove the oxygen from the solution, which have been found to be equally effective. The first method was a freeze-pump-thaw cycle, which was executed three times. The second method was bubbling the solution with nitrogen for at least 20 minutes. A second Schlenk tube was charged with Cu(I)Br (1.0 mmol) and a stirring bar and the oxygen was removed by evacuating and subsequently filling the tube with nitrogen three times. The solution was transferred to the second tube using a cannula and a vacuum to start the transfer. The solution was heated to 90 °C while being kept under nitrogen atmosphere for the entire duration of the reaction. Samples were taken periodically using a degassed syringe in case of the kinetic experiments. The polymerization was stopped by cooling the mixture to room temperature and exposing it to oxygen. The mixture was diluted with CH2Cl2 (approximately same volume as mixture) and passed through a basic Al2O3 column. The polymer was precipitated into a 5 : 1 vol/vol methanol/water mixture (pTMSMA). The solid was isolated by filtration and dried in a vacuum oven overnight. In most experiments with the star-shaped initiators the amount of toluene used was higher to prevent star-star coupling. This was either twice (for the 5-arm star) or three times (8-arm star) the volume of the monomer. Deprotection of pTMSMA The pTMSMA (0.5 g) was dissolved in THF (35 ml) and glacial acetic acid (0.22 ml, 1.5 equiv. with respect to the TMS groups) was added. The solution was bubbled with nitrogen and cooled with dry ice for at least 15 minutes. A 0.2 M solution of TBAF in THF (50 ml, 4.0 equiv.) was added dropwise with a syringe while stirring the solution and cooling with dry ice. The dry ice was removed after 30 minutes and the mixture was stirred overnight at room temperature. The mixture was concentrated under reduced pressure, purified on a silica column and concentrated again. The polymer was precipitated in petroleum ether cooled with dry ice and acetone. The white powder was isolated by filtration and dried in a vacuum oven overnight. The deprotection was confirmed by the appearance of a peak at 2.5 ppm in the 1H NMR and the disappearance of the 0.2 ppm peak. Synthesis of α-mannose azide Sodium azide (7.26 g, 111 mmol), D-(+)-mannose (2.00 g, 11.1 mmol) and triethylamine (15.5 ml, 111 mmol) were dissolved in water (40 ml) and cooled to 0 °C. 2-Chloro-1,3-dimethylimidazolinium chloride (5.61 g, 33.3 mmol) was added and the mixture was stirred for 1 hour at 0 °C. The solvent was removed under reduced pressure and EtOH (40 ml) was added. The solids were removed by filtration and the solution was purified on a long Amberlite IR-120 column, using EtOH as the eluent.
The mixture was checked with FTIR to confirm the removal of all sodium azide (ν = 2030 cm−1). The solvent was removed under reduced pressure, water (30 ml) was added and the mixture was washed with dichloromethane (5 × 15 ml). The solvent was removed under reduced pressure, water (10 ml) was added and the solution was freeze-dried overnight to give α-mannose azide (1.59 g, 7.8 mmol, 70%) as an off-white solid. Mannose azide clicking to propargyl polymers A solution containing propargyl polymer (100 mg, 0.81 mmol propargyl groups), mannose azide (213 mg, 1.05 mmol) and CuBr (116 mg, 0.81 mmol) in DMSO (10 ml) was bubbled with nitrogen for 20 minutes. Ethyl ligand (216 mg, 1.61 mmol) was added and the solution was bubbled with nitrogen for several more minutes. The solution was subsequently stirred at ambient temperature for two days. The mixture was purified by dialysis (MWCO: 1000) against distilled water for two days, while changing the water at least four times. It was then concentrated under reduced pressure and freeze-dried overnight. Glycopolymer synthesis and characterization Before the click reactions could take place, the propargyl groups of the pTMSMA polymers needed to be deprotected by removal of the TMS group. This was performed by reacting the polymer with CH3COOH and TBAF in THF. The deprotection can be followed by 1H NMR (Fig. S1†). The peak of the TMS group at 0.2 ppm disappears and a peak appears at 2.5 ppm for the propargylic proton. After removing the water from the reaction mixture, EtOH was added to filter the unreacted sodium azide from the mixture. However, some of the sodium azide was still soluble in EtOH, as is shown in the FTIR spectrum in Fig. S2.† As sodium azide can form explosive compounds when it is brought into contact with halogenated solvents, it was necessary to remove this from the mixture before continuing with subsequent purification steps. It was found that this last trace of sodium azide could be removed on an Amberlite IR-120 column using EtOH as the eluent. TEA was also removed from the mixture with this column. The remaining DMC was removed by washing the mixture in water with CH2Cl2. Mannose azide was attached to each of the different synthesized propargyl polymers using click chemistry. The reaction can be observed using 1H NMR through the appearance of a triazole peak at 8.30 ppm (Fig. S3†). The peaks at 6.0 ppm and 5.4-4.3 ppm are from the mannose. The kinetic plot for the polymerization of TMSMA shows that it takes roughly 15 minutes for the polymerization to initiate (Fig. S5†). After that, the conversion is linear up to at least 95%. The increase of Mn with conversion is almost linear within the margin of error. PDI values are all very close together, around 1.3. The homopolymerization of TMSMA using the 8-arm initiator was performed in 75% solvent to avoid coupling of the star polymers. The first-order kinetic plot is linear up to 50% conversion. A short initiation period was observed that was also seen in the homopolymerization of TMSMA with the disulfide initiator. The increase in molecular weight is mostly linear with relation to the conversion (PDI ≈ 1.25). The theoretical molecular weights of the synthesized linear, 5-arm and 8-arm glycopolymers have been calculated as 35 700 g mol−1, 37 100 g mol−1, and 39 600 g mol−1, respectively. The measured number average molecular weights of the linear, 5-arm, and 8-arm glycopolymers were found to be 41 000 g mol−1, 42 100 g mol−1, and 45 600 g mol−1, respectively.
Protein expression and surface plasmon resonance Soluble recombinant DC-SIGN was generated in E. coli and purified via affinity chromatography using mannose-sepharose, as described previously, 16 and immobilized on GLM sensor chips via amine coupling for use in the Bio-Rad ProteOn XPR36 instrument (Bio-Rad Laboratories, Hercules, USA). 12 Glycopolymer samples were dissolved in running buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 5 mM CaCl2, 0.01% (v/v) Tween-20, 0.01% (w/v) NaN3) and flowed over sensor chips at a flow rate of 25 μl min−1 at 25 °C. Sensorgram data were collected and reference subtraction performed against a blank channel treated with amine coupling activators and subsequently blocked with 1 M ethanolamine. Kinetic parameters were determined using the Bia-evaluation software (GE Healthcare) utilizing heterogeneous ligand fitting models. Blood samples Peripheral blood mononuclear cells (PBMCs) were isolated from blood leukocyte cones (National Blood Transfusion Service, UK) by density gradient centrifugation with Lymphoprep™ 1077 (PAA Laboratories, GmbH, UK) followed by a 50% Percoll (Sigma, UK) gradient to obtain a high-density fraction enriched in lymphocytes and a low-density fraction enriched in monocytes. Fractions were aliquoted and stored in liquid N2 until required. Generation of dendritic cells Monocyte derived dendritic cells (moDCs) were isolated from the monocyte rich fraction by magnetic bead separation using anti-CD14 coated magnetic beads (Miltenyi Biotec). Purified monocytes were cultured in complete medium (RPMI containing 10% FBS, 100 IU ml−1 penicillin, 0.1 mg ml−1 streptomycin, and 2 mM L-glutamine; Sigma-Aldrich) supplemented with GM-CSF (100 ng ml−1) and IL-4 (50 ng ml−1; R&D Systems), this being replenished every other day. On day 7, DCs were harvested, washed twice and resuspended in serum-free RPMI-1640 prior to staining for flow cytometry. Flow cytometry Dendritic cells were harvested, washed and stained with either 5-arm polymer-FITC, 8-arm polymer-FITC, a linear polymer-FITC or gp120-FITC (generated from soluble recombinant gp120 produced in HEK cells as previously described) at the concentrations indicated in the results. To assess the ability of the polymers to bind to the gp120 site, DCs were incubated with unlabeled polymer (at the concentrations indicated in the results) for 45 minutes at room temperature and then treated with gp120-FITC (1 mg ml−1) for a further 30 minutes at room temperature. To determine binding of gp120 to CD4+ cells, DCs and lymphocytes were first incubated with unlabeled gp120 at 1 mg ml−1 for 45 min at room temperature and then stained with anti-CD4-APC, anti-CD3-Percp, and gp120-FITC for a further 30 minutes. Following all the incubations, cells were washed and fixed with BD stabilizing fixative (BD Biosciences). Unstained DCs were used as a control. One hundred thousand cells were acquired within a live gate using a 3-laser configuration LSRII flow cytometer (BD Biosciences, USA), and analysed by FlowJo (Tree Star Inc, USA). Cytokine assays Sets of dendritic cell cultures, derived from four separate donors, were incubated with 1 mg ml−1 gp120 and also defined quantities of the three species of glycopolymer (8-arm, 5-arm and linear; at 1, 3 and 10 mg ml−1). After 24 hours, one group of DC cultures was activated with 1 mg ml−1 lipopolysaccharide (LPS; Sigma, Poole, UK) and 20 ng ml−1 interferon gamma (IFN-γ), the other left unstimulated.
Supernatants (100 ml) from the assorted cultures were recovered after a further 24 hours via centrifugation at 2000 rpm. Samples were diluted 1 : 1 in DMEM cell culture medium without supplements and duplicate measurements were made using Luminex bead immunoassay. A panel of nine cytokines was analyzed simultaneously using the commercial Bio-Rad Th1 and Th2 kit within the Bio-plex 200 Multiplex System in accordance with manufacturer instructions (Bio-Rad Laboratories, Hercules, USA). Quantitative readouts were obtained for the following cytokines: tumor necrosis factor alpha (TNF-α), IFN-γ, interleukins (IL)-2, IL-4, IL-5, IL-10, IL-12p70, IL-13, and granulocyte/monocyte-colony stimulating factor (GM-CSF). Research using human material was carried out in accordance with institutional guidelines issued by the University of Warwick and Imperial College London, and with the principles expressed in the Declaration of Helsinki. Ethical approval was obtained from the Riverside Research Ethics Committee and informed written consent was obtained prior to patient blood collection. Data from human-derived material were processed anonymously. Results and discussion Carbohydrates in the form of oligosaccharides and polysaccharides represent a large, crucial and exciting family of molecules in the development of novel therapeutics. Uncovering the roles of complex carbohydrates in human physiology and disease coincides with the advancement of our understanding of carbohydrate-binding receptors such as DC-SIGN. Initially identified as a molecule exploited by HIV-1 in order to promote viral trafficking and within-host survival, it has since emerged that DC-SIGN is central to important physiological processes such as healthy responses during apoptotic cell clearance and foeto-maternal tolerance. 2,18 Studies into the impact of glycan ligands for DC-SIGN on immune cells indicate that this receptor is capable of driving anti-inflammatory and immunosuppressive cell signaling with major effects on gene expression and cellular responses. 6,7 Whilst undesirable in the context of certain infections, these responses are beneficial in the active processes of wound healing and the resolution of inflammatory episodes. It is becoming increasingly clear that these tissue repair and protection mechanisms are highly active and require advanced immune system involvement. Of significance is the association of molecules such as DC-SIGN with tissue environments such as placenta, uterus and gut, where tight control of tissue growth & stability and blood supply in the presence of abundant non-self material is essential. 11 Understanding the structure and function of DC-SIGN allows us to design and prepare advanced synthetic ligands for this molecule and investigate opportunities for exploiting its anti-inflammatory properties. The strategy of generating defined star-shaped glycopolymers (GPs) to achieve enhanced binding to the tetrameric configuration of DC-SIGN has been very successful, showing increases in affinity exceeding an order of magnitude, in addition to demonstrating strong interactions with DCs. We speculate that the star-shaped architecture spans the four CRDs within the DC-SIGN tetramer, increasing the strength of the interaction and potentially influencing the consequences of DC-SIGN engagement.
It is assumed that DC-SIGN can only transduce signals when clustered in multimeric complexes with multiple glycan ligands expressed on a large surface area, although small ligands such as Lewis-x oligosaccharides have been shown to drive signaling and gene expression responses in DCs via DC-SIGN engagement. Furthermore, NMR analyses of DC-SIGN-oligosaccharide interactions in solution suggest discrete binding modes of the carbohydrate-recognition domain for distinct glycan types that in turn are thought to drive different cell signaling pathways. Therefore, the consequences of DC-SIGN engagement by glycopolymers could be influenced and even tuned by variations in sugar composition, architecture and size. Star-shaped GPs bind to recombinant DC-SIGN with subnanomolar affinity. Interaction analysis in real time via surface plasmon resonance (SPR) demonstrated distinct and strong binding between immobilized DC-SIGN and all of the GPs examined (Fig. 2), consistent with previous studies. 12 As expected, the star-shaped GPs showed greater affinity compared with the linear GP of equal molecular weight and dissociation rates (koff) were markedly slow, in keeping with strong multivalent interactions with the oligomeric DC-SIGN protein. Apparent KD values of 65.1 pM and 72.5 pM were determined for the star-shaped 5-arm and 8-arm glycopolymers, respectively (Table 1). Star-shaped glycopolymers show improved and dose-dependent binding to human dendritic cells To determine whether the polymers bound directly to dendritic cells (DCs), DCs were stained with FITC-labeled polymers at a range of polymer concentrations. Fig. 3A-C shows that all GPs bound to DCs to some degree with the 5-arm and 8-arm glycopolymers binding better than the linear polymer. We also assessed the binding of gp120 to DCs and found that binding followed a similar profile to the star-shaped glycopolymers (Fig. 3D). In a direct comparison between the polymers and gp120, we found that the 5-arm and 8-arm GPs bound with a similar intensity as compared to the gp120 at the highest dose of 10 mg ml−1 (see ESI†). In addition to assessing the intrinsic binding of GPs to DCs we also determined whether they exerted any measurable influence on the activation status of the treated DCs by analyzing their cell-surface markers. We found that following 24 hours of exposure to the polymers generated here, there was no significant change in DC expression of the canonical activation markers CD80, CD83, CD86 or HLA-DR when examining cells from three separate donors. There is a likelihood that these mannose glycopolymers also interact with dendritic cell and macrophage subpopulations that express the mannose receptor (CD206), in addition to specialised endothelial cells that express receptors such as DC-SIGNR (CD299). Where the very high affinity interactions between the star-shaped glycopolymers and DC-SIGN could possess significant selectivity would be in discrete tissue spaces such as inflamed synovium associated with rheumatoid arthritis, where salient subpopulations of DC-SIGN-positive macrophages are upregulated. 19 Competition of glycopolymers with gp120 for binding sites on dendritic cells due to cellular CD4 expression To determine whether the polymers competed with the gp120 for the same cellular binding site, we incubated DCs with unlabeled polymers and then stained the cells with FITC-labeled gp120 (Fig. 4).
Our results show that although there was a slight shift in binding when polymer was present, none of the polymers significantly abrogated gp120 binding. To assess whether this was due to the gp120 using more than one binding partner on cell surfaces, we looked at determining the expression of the major gp120 ligand, CD4, on both DCs and T cells. Our results show that gp120 is able to bind to CD4 molecules on both DCs and T cells, a site to which gp120 binding is not carbohydrate-dependent (Fig. 5). Consequently our results demonstrate that the polymers bind to separate sites on cells from those utilized and possibly preferred by gp120. The flow cytometry studies involving HIV gp120 interactions with DCs raise a very important issue in the use of glycopolymers as agents for viral blockade. The presence of CD4 on moDCs (and also native peripheral DC populations) serves as a very effective adhesion route for HIV, such that polymer-based barrier preparations could be compromised in vivo. This would be consistent with in vivo findings that DC-SIGN is associated with parenteral but not mucosal trafficking of HIV. 20 However, it is yet to be known whether the presence of glycopolymers could affect HIV infection of CD4+ T cells in trans and whether glycopolymers would be effective in blocking DC interactions with other dangerous glycosylated viruses that are known to interact with DC-SIGN such as hepatitis C virus (HCV), Ebola and Dengue. Glycopolymers modulate cytokine production by activated DCs Cytokine analysis was carried out on supernatants recovered from DCs exposed to glycopolymers and consensus activation stimuli (LPS plus IFN-γ). Prior to supernatant harvesting, the DCs appeared healthy and normal in the presence of glycopolymers, suggesting that these materials were tolerated and not markedly toxic at the quantities used (1-10 mg ml−1). Measured cytokine secretion levels were negligible in all of the DC cultures that were not stimulated with LPS and IFN-γ, suggesting that the GPs themselves do not invoke major secreted DC activation responses. In DC cultures activated with LPS and IFN-γ, key cytokines such as TNF-α were up-regulated in all cultures, as expected, and the exposure of DCs to star-shaped GPs led to significant changes in the secretion of two cytokines, IL-10 and IL-12p70. Levels of secreted IL-10 were significantly increased only in supernatants from activated DCs exposed to the 8-arm and 5-arm GPs, and only at the highest concentration used (10 mg ml−1; Fig. 6A). Equivalent quantities of the linear GP did not invoke significant changes in IL-10 secretion. Levels of IL-12p70 were significantly reduced by all three species of GP in a dose-dependent manner (Fig. 6B-D). Interestingly, the linear GP showed an effect in reducing IL-12p70 secretion (Fig. 6B). This suggests that whilst showing lower levels of binding to DC-SIGN and to DCs, linear GPs may still influence certain DC responses. The changes in cytokine secretion by DCs invoked by the star-shaped polymers are consistent with those observed in mammalian wound healing. IL-10 is a potent immunosuppressive cytokine with broad effects on inflammation and its mediators. 21 IL-12p70 is a pro-inflammatory cytokine and a potent inhibitor of angiogenesis (blood vessel growth). 22 The combination of IL-10 increase and IL-12 decrease represents a rapid and effective means of reducing inflammation and restoring organized blood flow to a site of tissue injury.
In general, qualitative changes in secretion of these two cytokines are in keeping with the consequences of DC-SIGN engagement by biologically-sourced mannosylated ligands. However, a key difference is that in the case of the glycopolymers, the chemical definition is very precise. In contrast, biological high mannose ligands such as gp120, yeast mannan and mycobacterial lipoarabinomannan are notoriously heterogeneous. [23][24][25] [Fig. 6 Effects of glycopolymers on cytokine secretion by activated DCs. Cells were incubated with GPs before activation with LPS and IFN-γ and multiplex cytokine assay. IL-10 secretion was significantly increased in DC cultures from 4 donors incubated with 8-arm star P3 and 5-arm star P2 GPs at 10 mg ml−1 compared with controls, but not with linear GP P1 (Panel A). IL-12p70 secretion was significantly decreased compared to controls in cells exposed to increasing concentrations of linear GP P1 (Panel B), 5-arm star GP P2 (Panel C) and 8-arm star GP P3 (Panel D) (all P values calculated from paired t-tests).] Furthermore, the low cost of polymer precursors and the simple synthesis workflow make the glycopolymers described here an attractive and affordable means of generating abundant, water-soluble material with advanced immunological activity. Conclusions To summarize, we report the synthesis and characterization of highly water-soluble advanced glycopolymer sets of defined size and shape designed to interact profoundly with the key human lectin DC-SIGN. Biophysical studies demonstrate remarkable glycopolymer-DC-SIGN interactions with affinities in the picomolar range and positive binding to human monocyte-derived dendritic cells. Star-shaped glycopolymers invoke significant changes in the production of two key cytokines, IL-10 and IL-12p70, when incubated with dendritic cells at pharmacologically realistic concentrations. Importantly, IL-10 levels rise and IL-12 levels fall in response to the glycopolymers and this cytokine release pattern is reminiscent of tissue environments associated with active wound healing, anti-inflammation, and also healthy pregnancy at the feto-maternal interface. In our experiments, however, the glycopolymers do not affect the ability of HIV gp120 to interact with monocyte-derived dendritic cells, previously a major direction for both DC-SIGN and glycopolymer research. We propose novel roles for glycopolymers of this kind, shifting the focus away from HIV prophylaxis towards the treatment of conditions that require immune modulation that promotes wound healing and inflammatory resolution. Examples of such conditions could include burns & chronic skin lesions, in addition to chronic inflammatory diseases with immunological involvement such as rheumatoid arthritis, multiple sclerosis and Crohn's disease. Conflicts of interest There are no conflicts to declare.
2017-11-27T21:38:33.125Z
2017-08-16T00:00:00.000
{ "year": 2017, "sha1": "cf90a094a0a48fde08e029b985358e5a49aeaf8f", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/sc/c7sc01515a", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "35dadd1a90c1a9ad7508401dde6f0e0fa2462a5c", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
266287656
pes2o/s2orc
v3-fos-license
Burst and Memory-aware Transformer: capturing temporal heterogeneity Burst patterns, characterized by their temporal heterogeneity, have been observed across a wide range of domains, encompassing event sequences from neuronal firing to various facets of human activities. Recent research on predicting event sequences leveraged a Transformer based on the Hawkes process, incorporating a self-attention mechanism to capture long-term temporal dependencies. To effectively handle bursty temporal patterns, we propose a Burst and Memory-aware Transformer (BMT) model, designed to explicitly address temporal heterogeneity. The BMT model embeds the burstiness and memory coefficient into the self-attention module, enhancing the learning process with insights derived from the bursty patterns. Furthermore, we employed a novel loss function designed to optimize the burstiness and memory coefficient values, as well as their corresponding discretized one-hot vectors, both individually and jointly. Numerical experiments conducted on diverse synthetic and real-world datasets demonstrated the outstanding performance of the BMT model in terms of accurately predicting event times and intensity functions compared to existing models and control groups. In particular, the BMT model exhibits remarkable performance for temporally heterogeneous data, such as those with power-law inter-event time distributions. Our findings suggest that the incorporation of burst-related parameters assists the Transformer in comprehending heterogeneous event sequences, leading to an enhanced predictive performance. Introduction Temporal heterogeneity is frequently referred to as burst within the context of complex systems. Numerous natural and social phenomena exhibit bursty temporal patterns such as single-neuron firing (Kemuriyama et al., 2010; Chan et al., 2016; Metzen et al., 2016; Zeldenrust et al., 2018), earthquakes (Corral, 2004; de Arcangelis et al., 2006), solar flares (Wheatland et al., 1998), and human activity (Barabasi, 2005; Karsai et al., 2018). The term temporal heterogeneity rigorously implies that the distribution of inter-event times, that is, the time intervals between two consecutive events, exhibits a heavy-tailed distribution such as a power-law distribution. Moreover, when the system is generally temporally heterogeneous, it implies the presence of temporal correlations among inter-event times. For example, the inter-spike interval distribution displays temporally heterogeneous patterns, which cannot be simply interpreted as a random or regular process. Numerous studies have addressed temporal correlations between bursty spikes using approaches such as the non-renewal process (Shahi et al., 2016) and intensity functions with voltage-dependent terms (Lee et al.).
Figure 1 illustrates the distinction between temporally heterogeneous inter-event times and those that tend toward homogeneity. The event sequences in Figures 1A, D, F serve as examples of temporal heterogeneity with a power-law inter-event time distribution. The event sequences in Figures 1B, C, E, G present instances that exhibit a more homogeneous random characteristic with an exponential inter-event time distribution. Evidently, the bursty event sequence exhibits clustered events within burst trains, in contrast to the non-burst sequence. Such uneven event occurrences can affect the prediction of event sequences. Without properly accounting for the complicated correlation structure and heterogeneity therein, naive models may struggle to effectively discern hidden patterns. Event sequence data encompass the temporal occurrences of events spanning various domains, ranging from natural phenomena to social activities. Unlike time series data, event sequence data are defined by sequentially ordered timestamps that signify the timing of individual event occurrences. Numerous studies focused on predicting the timing of subsequent events have been conducted using temporal point processes (TPPs) (Daley and Vere-Jones, 2008). One of the most widely employed TPPs is the Hawkes process (Hawkes, 1971). This process embodies a self-exciting mechanism, wherein preceding events stimulate the occurrence of subsequent events. In contrast to the Hawkes process, the self-correcting process provides a feasible method for establishing regular point patterns (Isham and Westcott, 1979). The Poisson point process can be employed to generate entirely random and memory-less events (Kingman, 1992). [Figure 1. (A, D, F) Heterogeneous event sequences with a power-law inter-event time distribution. These event sequences exhibit a high burstiness parameter with significant temporal heterogeneity. (B, C, E, G) Event sequences with an exponential inter-event time distribution. These event sequences have burstiness parameters close to 0 and memory coefficients clustered around 0.] In the Poisson process, the inter-event time (IET) follows an exponential distribution. The Cox process is a generalized Poisson process in which the intensity function varies with the stochastic process (Cox, 1955); thus, it is also referred to as a doubly stochastic Poisson process. Cox processes are frequently employed to model and predict the arrival of insurance claims, enabling insurers to assess risk and manage resources effectively (Rolski et al., 2009). If the intensity function is not entirely random, as in the Cox process, but given as a deterministic time-varying function, it is referred to as an inhomogeneous Poisson process. Our research was primarily motivated by the idea that incorporating temporal heterogeneous characteristics into event sequence predictions yields a superior performance in forecasting events. We propose a Burst and Memory-aware Transformer (BMT) model, signifying its capability to train the Transformer by leveraging insights derived from burstiness and memory coefficient, both of which are associated with temporal heterogeneity. Notably, these two metrics were incorporated as embedding inputs for the Transformer architecture. Moreover, a loss function related to these metrics was formulated and employed, thereby enabling the model to naturally capture temporal heterogeneity. The overall schematic diagram of the BMT model is depicted in Figure 2.
The main contributions of this paper are summarized as follows: • The BMT model was developed to integrate insights from complex systems theory into the Transformer-based temporal point process model, enhancing the capability to incorporate temporal heterogeneity. This study offers a preliminary approach to connect these two distinct disciplines. • The BMT model surpasses state-of-the-art models by effectively integrating burstiness and memory coefficient into both the embedding procedure and associated loss functions. This is confirmed through extensive numerical experiments across a range of scenarios, including those with and without burstiness and memory coefficient embedding and related loss functions, using real-world datasets and synthetic datasets generated via a copula-based algorithm. • Our investigation revealed that the BMT model offers particular advantages when dealing with temporally heterogeneous data, such as datasets characterized by a power-law inter-event time distribution, commonly observed in bursty event sequences. • Our research indicates that excluding either burstiness and memory coefficient embedding or their corresponding loss functions leads to a noticeable reduction in performance. This emphasizes the imperative nature of integrating both elements to achieve optimal performance. In cases where the inter-event time distribution of the target event sequence exhibits a heavy-tailed distribution, such as a power-law distribution, or where the values of burstiness and memory coefficient significantly deviate from zero, the BMT model ensures superior performance compared to basic Transformer-based models. The structure of the paper is outlined as follows: Section 2 introduces the background pertaining to the temporal point process, temporal heterogeneity, and the generating method for synthetic datasets; Section 3 introduces our Burst and Memory-aware Transformer model; Section 4 presents numerical experiments on synthetic and real-world datasets; Section 5 presents the performance evaluation results; and Section 6 presents the conclusion. Background Temporal point process A temporal point process (TPP) is a stochastic process involving the occurrence of multiple events as time progresses. The foundational data employed to construct the TPP model consist of event sequence data, encompassing event times {t_i}_{i=1}^n along with optional marks {κ_i}_{i=1}^n. For example, spike train sequences of neurons are composed of timings of occurrences, along with action potentials as associated marks. In this study, we examine the unmarked case to specifically investigate the effects of burst and memory phenomena, while excluding the influence of correlations with marks that do not align with the research direction. For the prediction of the marked TPP model, one approach involves the independent modeling of the target's marks by thresholding. Alternatively, based on contextual analysis (Jo et al., 2013), interactions with multiple neighbors within an egocentric network can be considered as marks and subsequently modeled.
TPP encompasses the modeling of the conditional intensity function λ(t|H_t) given the history of event times H_t ≡ (t_1, ..., t_n). The notation for the history of event times, H_t, will be omitted for convenience. The intensity function characterizes the instantaneous event rate at any given time by considering past event occurrences. The probability density function P(t) and cumulative distribution function F(t) can be derived based on the intensity function, as follows (Rasmussen, 2018): P(t) = λ(t) exp(−∫_{t_n}^{t} λ(s) ds) (1) and F(t) = 1 − exp(−∫_{t_n}^{t} λ(s) ds) (2). Hawkes process The Hawkes process, also known as the self-exciting point process, is for a situation where a preceding event excites the occurrence of a subsequent event (Hawkes, 1971). The intensity function λ(t) of the Hawkes process is defined as λ(t) = ζ + η Σ_{t_j < t} exp(−(t − t_j)) (3), where the base intensity ζ and η are positive parameters. When a new event occurs during this process, the intensity increases with η and immediately decays exponentially. The probability of the next event occurring is highest immediately following the incidence of the previous event, and it gradually decreases as time elapses. As a result, this process causes events to cluster together. This includes events that happen quickly in a short time and then long times when nothing happens. The generalized Hawkes process is defined as follows: λ(t) = ζ + η Σ_{t_j < t} γ(t − t_j) (4), where ζ ≥ 0, η > 0, and γ(t) is a density on (0, ∞). Self-correcting process In contrast to the Hawkes process, the self-correcting process generates regular inter-event time sequences with randomness (Isham and Westcott, 1979). The intensity function λ(t) for the self-correcting process is defined as follows: λ(t) = exp(ζt − ηN(t)) (5), where ζ and η are positive parameters and N(t) is the number of events up to time t. Neural Hawkes process A limitation of the Hawkes process is that the preceding event cannot inhibit the occurrence of a subsequent event. To overcome these limitations, the neural Hawkes process, which considers the nonlinear relationship with past events using recurrent neural networks, was introduced (Mei and Eisner, 2017). The intensity function λ(t) for the neural Hawkes process is defined as follows: λ(t) = f(w^T h(t)) (6), where f(x) = β log(1 + exp(x/β)) is the softplus function with parameter β which guarantees a positive intensity, and the h(t)s are hidden representations of the event sequence from a continuous-time LSTM model. Here, the intensity we refer to is not the marked intensity λ_k; instead, our focus is on the inherent temporal heterogeneity structure, excluding any interference from correlations between event types and times.
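To make the contrast between the self-exciting and self-correcting intensities concrete, the sketch below evaluates both forms described above. It is a minimal illustration rather than the implementation used in the paper; the parameter values and the unit-rate exponential kernel are assumptions chosen only for demonstration.

```python
import numpy as np

def hawkes_intensity(t, event_times, zeta=0.2, eta=0.8):
    """Self-exciting intensity: lambda(t) = zeta + eta * sum_{t_j < t} exp(-(t - t_j))."""
    past = np.asarray([tj for tj in event_times if tj < t])
    return zeta + eta * np.exp(-(t - past)).sum()

def self_correcting_intensity(t, event_times, zeta=0.5, eta=0.2):
    """Self-correcting intensity: lambda(t) = exp(zeta * t - eta * N(t)),
    where N(t) counts the events that occurred before time t."""
    n_t = sum(1 for tj in event_times if tj < t)
    return np.exp(zeta * t - eta * n_t)

events = [1.0, 1.1, 1.2, 5.0]
print(hawkes_intensity(1.3, events))           # elevated right after a burst of events
print(hawkes_intensity(4.9, events))           # decayed back toward the base rate zeta
print(self_correcting_intensity(1.3, events))  # suppressed after several recent events
```

The Hawkes form produces clustered (bursty) events because each event transiently raises the rate, whereas the self-correcting form lowers the rate after every event, yielding more regular sequences.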
Transformer-based Hawkes process The Transformer is a deep learning architecture for sequence processing such as natural language processing, with a multi-head self-attention module that captures long-range dependencies within sequences (Vaswani et al., 2017). The Transformer is used not only in language models but also in computer vision, audio processing, and time series forecasting (Lim et al., 2021; Wen et al., 2022; Ma et al., 2023). Recently, the Transformer architecture has also been applied to modeling temporal point processes. The Transformer Hawkes Process (THP) (Zuo et al., 2020) and the Self-Attentive Hawkes Process (SAHP) (Zhang et al., 2020) were introduced to model the Hawkes process with a self-attention mechanism to capture the long-range correlations underlying both event times and types. THP and SAHP differ in two aspects: their use of positional encoding and the form of the intensity function. SAHP employs time-shifted positional encoding to address the limitations of conventional methods, which solely account for the sequence order and neglect inter-event times. The intensity function of the THP model is the softplus function of the weighted sum of three terms: the ratio of elapsed time from the previous event, the hidden representation vector from the encoder, and a base term. Conversely, the intensity function of the SAHP model is formulated as a softplus of the Hawkes process terms, each of which is derived from the scalar transformation and nonlinear activation function applied to the hidden representation vector from the encoder. For both the THP and SAHP models, across synthetic and real-world datasets, their performances in event type prediction and event time prediction surpassed those of the baseline models: the Hawkes Process as described in Equation (3), the Fully Neural Network model (Omi et al., 2019), the Log-normal Mixture model (Shchur et al., 2019), Time Series Event Sequence (TSES) (Xiao et al., 2017), Recurrent Marked Temporal Point Processes (Du et al., 2016), and the Continuous Time LSTM (Mei and Eisner, 2017). Given the superior performance of THP over the remaining baseline models, this study refrains from direct performance comparison with the SAHP and baseline models (Zuo et al., 2020), opting to concentrate exclusively on performance comparison with the THP model. Temporal heterogeneity Temporal heterogeneity or burst is characterized by various metrics. The most fundamental quantity is the probability density function of the inter-event times. The inter-event time is defined as the time interval between two consecutive events, that is, τ_i ≡ t_{i+1} − t_i, where t_i is the i-th event time of the event sequence. When the inter-event time distribution is heavy-tailed, the corresponding event sequence exhibits temporal heterogeneity. Specifically, the power-law inter-event time distribution found in diverse natural and social phenomena is as follows: P(τ) = a τ^(−α) (7), where a is a constant and α is a power-law exponent. Burstiness parameter Several metrics characterize the properties of temporal heterogeneity. Burstiness B measures the burst phenomenon (Goh and Barabási, 2008), and is defined as follows: B = (σ − ⟨τ⟩)/(σ + ⟨τ⟩) = (r − 1)/(r + 1) (8), where r ≡ σ/⟨τ⟩ is the coefficient of variation (CV) of the inter-event times and σ and ⟨τ⟩ are the standard deviation and average of the τ's, respectively. Here, B = −1 for regular event sequences, B = 0 for Poissonian random cases, and B = 1 for extremely bursty cases. When the number of events is sufficiently small, the burstiness parameter causes errors. The fixed burstiness parameter considering the finite-size effect is as follows (Kim and Jo, 2016): B_n = (√(n+1) r − √(n−1)) / ((√(n+1) − 2) r + √(n−1)) (9). We employed the fixed burstiness parameter (9) to handle short-length event sequences throughout this study.
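The burstiness parameter and its finite-size correction can be computed directly from the inter-event times. The snippet below is a small sketch following the definitions above; the random test sequences and parameter choices are illustrative only.

```python
import numpy as np

def burstiness(iets):
    """B = (r - 1) / (r + 1), with r = sigma / <tau> the coefficient of variation (Eq. (8))."""
    iets = np.asarray(iets, dtype=float)
    r = iets.std() / iets.mean()
    return (r - 1.0) / (r + 1.0)

def burstiness_finite(iets):
    """Finite-size-corrected burstiness of Kim and Jo (2016), as reconstructed in Eq. (9)."""
    iets = np.asarray(iets, dtype=float)
    n, r = len(iets), iets.std() / iets.mean()
    return (np.sqrt(n + 1) * r - np.sqrt(n - 1)) / ((np.sqrt(n + 1) - 2) * r + np.sqrt(n - 1))

rng = np.random.default_rng(0)
print(burstiness_finite(np.ones(200)))                # -1: perfectly regular sequence
print(burstiness_finite(rng.exponential(1.0, 200)))   # ~0: Poissonian inter-event times
print(burstiness_finite(rng.pareto(1.5, 200) + 1.0))  # >0: heavy-tailed, bursty sequence
```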
Memory coefficient The memory coefficient M quantifies the correlations between consecutive inter-event times within a sequence consisting of n inter-event times, that is, {τ_i}_{i=1,...,n}, as follows (Goh and Barabási, 2008): M = (1/(n−1)) Σ_{i=1}^{n−1} (τ_i − ⟨τ⟩_1)(τ_{i+1} − ⟨τ⟩_2) / (σ_1 σ_2) (10), where ⟨τ⟩_1 (⟨τ⟩_2) and σ_1 (σ_2) are the average and standard deviation of the inter-event times τ_1, τ_2, ..., τ_{n−1} (τ_2, τ_3, ..., τ_n), respectively. This is the Pearson correlation coefficient between consecutive inter-event times. Here, M = 0 indicates no correlation, and M > 0 indicates a positive correlation, which means that a large inter-event time tends to follow a large inter-event time, and likewise for small inter-event times. M < 0 indicates a negative correlation, which means that a small inter-event time tends to follow a large inter-event time, and vice versa. Applications of B and M to BMT model When plotting M on the x-axis and B on the y-axis for datasets with various inter-event time distributions, it can be observed that event sequences with similar inter-event time distributions tend to cluster at similar positions (Goh and Barabási, 2008). Essentially, if the ranges of B and M values are known, a rough estimate of the inter-event time distribution can be anticipated. Building on this insight, we devised the BMT model to facilitate learning by designing a method in which the values of B and M were combined and fed into the encoder as inputs. Specifically, when the values of B and M exhibit temporal heterogeneity in their ranges, the encoder of the Transformer can produce inter-event time prediction values with a heavy-tailed inter-event time distribution. Moreover, B and M are not independent: they are intertwined and move in conjunction. For instance, even when attempting to alter only M by shuffling the inter-event times, B can also change. This serves as evidence that embedding both B and M concurrently yields superior performance compared with embedding either one of them individually. Copula-based algorithm for generating sequences of inter-event times To comprehend the impact of burstiness and memory coefficient on the model, we generated synthetic datasets using a copula-based algorithm (Jo et al., 2019). The content of the copula-based algorithm in this study was obtained from Jo et al. (2019). For convenience, we provide a brief overview of the relevant content. The copula-based algorithm models the joint probability distribution of two consecutive inter-event times, that is, P(τ_i, τ_{i+1}), by adopting the Farlie-Gumbel-Morgenstern (FGM) copula (Nelsen, 2006). The joint probability distribution according to the FGM copula is formulated as follows: P(τ_i, τ_{i+1}) = P(τ_i) P(τ_{i+1}) [1 + r f(τ_i) f(τ_{i+1})] (11), where f(τ) ≡ 1 − 2F(τ) (12). The parameter r is used to control the correlation between τ_i and τ_{i+1} and is in the range of −1 ≤ r ≤ 1. F(τ) is the cumulative distribution function (CDF) of P(τ). After applying the transformation method (Clauset and Shalizi, 2009), the next inter-event time τ_{i+1} can be obtained as (Jo et al., 2019): τ_{i+1} = F^{−1}( [(1 + c_i) − √((1 + c_i)^2 − 4 c_i x)] / (2 c_i) ) (13), where F^{−1} is the inverse of F(τ), c_i ≡ r f(τ_i), and x is a random number sampled from a uniform distribution within the interval 0 ≤ x < 1. The copula-based algorithm has the advantage of generating event sequences with independent control of the inter-event time distribution and memory coefficient.
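The memory coefficient is the Pearson correlation between consecutive inter-event times and can be computed as below. This is a small sketch of Eq. (10); the correlated toy sequence and the convention of returning 0 for a degenerate (zero-variance) sequence are assumptions for illustration.

```python
import numpy as np

def memory_coefficient(iets):
    """M: Pearson correlation between consecutive inter-event times (Eq. (10))."""
    iets = np.asarray(iets, dtype=float)
    x, y = iets[:-1], iets[1:]
    s1, s2 = x.std(), y.std()
    if s1 == 0.0 or s2 == 0.0:
        return 0.0  # correlation undefined for perfectly regular sequences; return 0 here
    return np.mean((x - x.mean()) * (y - y.mean())) / (s1 * s2)

rng = np.random.default_rng(1)
iets = np.repeat(rng.exponential(1.0, 50), 4)      # blocks of equal IETs -> correlated sequence
print(memory_coefficient(iets))                    # clearly positive
print(memory_coefficient(rng.permutation(iets)))   # near 0 once the temporal ordering is shuffled
```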
Burst and Memory-aware Transformer Discretization of B and M Given that the burstiness parameter and memory coefficient are real numbers within the range of [−1, 1], it is necessary to discretize them for embedding within the Transformer. We adopt the uniform discretization transform; the range [−1, 1] is divided into segments of fixed length by the number of bins b, respectively, and subsequently mapped to a single natural number. The continuous values of the burstiness parameter B and memory coefficient M are discretized into natural numbers d_B and d_M, respectively. For example, when the number of bins is b = 4, the range [−1, 1] is divided into the four intervals [−1, −0.5), [−0.5, 0), [0, 0.5), and [0.5, 1], and B and M are each mapped to the index of the interval containing them. When the number of discretization bins is b, the number of possible combined values d_{B,M} is b^2, corresponding to the vocabulary size of the Transformer. Then, we can obtain the one-hot vector of the discretized d_{B,M}. Embedding event times, and B and M The event sequence S = {t_i}_{i=1}^n of n events and the discretized and one-hot B & M, d_{B,M}, are fed into the self-attention module after proper encoding. First, the event times are transformed using the positional encoding method (Vaswani et al., 2017) to embed the temporal order information into an event sequence. The j-th element of sinusoidal positional encoding for the i-th event time t_i is calculated as: [z_t(t_i)]_j = sin(ω_k t_i) for even j and cos(ω_k t_i) for odd j, where ω_k = 1/10000^(2k/d), the embedding index k is the quotient when dividing j by 2, and z_t(t_i) ∈ R^d, where d is the encoding dimension. By multiplying ω_k with the event time t_i, it is converted into an angle, which is then mapped to sine and cosine functions, providing different positional information for each event time. For the given event times {t_i}_{i=1}^n, the inter-event times are τ_i ≡ t_{i+1} − t_i for i = 1, ..., n − 1. The burstiness parameter (9) and memory coefficient (10) were calculated for all partial sequences. This essentially implies that the input to the encoder is fed sequentially from t_1, ..., t_i, ..., t_n, and for each of these instances, the B & M embedding incorporates the calculated B and M values up to t_1 (i.e., B_1 and M_1), ..., up to t_i (i.e., B_i and M_i), ..., and up to t_n (B_n = B and M_n = M for the entire sequence). Note that, during the actual operation of the Transformer, computations are performed in parallel; thus, the sliding B & M embedding vectors form a lower triangular matrix. The B & M embedding vector z_e(B_i, M_i) for the one-hot vector of the discretized B_i and M_i, d_{B_i,M_i}, is calculated using a linear embedding layer as follows: z_e(B_i, M_i) = W_E d_{B_i,M_i}, where W_E ∈ R^{d×b^2} denotes an embedding matrix. Then for the i-th event, the event time embedding vector z_t(t_i) ∈ R^d and the B & M embedding vector z_e(B_i, M_i) ∈ R^d are summed together to acquire the hidden representation of the i-th event z_i ∈ R^d as: z_i = z_t(t_i) + z_e(B_i, M_i). Then the embedding matrix for a whole single event sequence is given by: Z = [z_1, z_2, ..., z_n]^T (18), where Z ∈ R^{n×d} and n is the length of the event sequence, that is, the number of events in a single sequence.
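A minimal sketch of the B & M discretization and the combined embedding described above is given below. The bin-edge convention, the 0-based token indexing, the way the two bin indices are merged into a single token, and the helper names are assumptions made for illustration; only the overall recipe (uniform bins over [−1, 1], a b²-sized vocabulary, and summing the positional and B & M embeddings) follows the text.

```python
import numpy as np

def discretize(value, b):
    """Uniformly map a value in [-1, 1] to a bin index in {0, ..., b-1}."""
    idx = int((np.clip(value, -1.0, 1.0) + 1.0) / 2.0 * b)
    return min(idx, b - 1)              # the right edge (value = 1) falls into the last bin

def bm_token(B, M, b):
    """Combine the two bin indices into one token d_{B,M} in {0, ..., b**2 - 1}."""
    return discretize(B, b) * b + discretize(M, b)

def positional_encoding(t, d):
    """Sinusoidal encoding of an event time t into a d-dimensional vector."""
    k = np.arange(d) // 2
    omega = 1.0 / (10000.0 ** (2.0 * k / d))
    angles = omega * t
    return np.where(np.arange(d) % 2 == 0, np.sin(angles), np.cos(angles))

def embed_event(t, B, M, W_E, b, d):
    """z_i = z_t(t_i) + z_e(B_i, M_i); z_e is the row of W_E (b^2 x d) selected by the token."""
    return positional_encoding(t, d) + W_E[bm_token(B, M, b)]

# Toy usage: b = 4 bins gives a vocabulary of 16 B & M tokens.
b, d = 4, 8
rng = np.random.default_rng(0)
W_E = rng.normal(size=(b * b, d))
print(bm_token(0.6, -0.1, b))                        # e.g. token 13 of 16
print(embed_event(2.7, 0.6, -0.1, W_E, b, d).shape)  # (8,)
```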
The transformation parameters are and W V ∈ R d×d V , respectively.In contrast to conventional RNN models, the self-attention mechanism enables an equitable comparison of not only recent values but also the significance of distant past values of the sequence.Consequently, this facilitates the learning of long-term dependencies. The BMT model employs multi-head attention, similar to other Transformers.Multi-head attention enables the model to manage diverse patterns and contexts of the input sequence.The multihead attention output S is given by S = [S 1 , ..., S i , ..., S m ]W O , where S i ∈ R n×d V /m is the attention output for the i-th multi-head and After the multi-head attention, the resulting attention output S is subsequently passed into a position-wise feed-forward network, yielding hidden representations h(t) for the event sequence as: where , and b 2 ∈ R d are the parameters of each neural network.The i-th event of the event sequence corresponds to the i-th row of the hidden representation matrix H, that is, h(t i ) = H(i, :).Furthermore, masks are employed to prevent the model from learning about the future in advance. The hidden representation H ∈ R n×d encapsulates insights regarding burstiness and memory coefficient for each event within the sequence, acquired through the self-attention mechanism.We further enhanced the incorporation of sequential information by applying LSTM to the hidden representation. . Training and loss function The BMT model employs five types of loss functions: (1) squared error of the event time, (2) event log-likelihood loss as described in Equation ( 22), (3) cross entropy of discretized B & M, (4) squared error of B, and (5) squared error of M. . . Event time loss The most crucial loss function within the model is how accurately it predicts the next event times.The next event time prediction is ti+1 = W t h(t i ), where W t ∈ R 1×d is the parameter of the event time predictor.To address this, the squared error loss function of the event times for the event sequence is defined as: where ti is the predicted event time. . . Event log-likelihood The typical approach for optimizing the parameters of the Hawkes process involves utilizing Maximum Likelihood Estimation (MLE).There are two constraints: (1) no events before time 0, and (2) unobserved event time must appear after the observed time interval.When the observed event sequences are t 1 , ..., t n ∈ [0, T), then likelihood of an event sequence is given by L ′ = P(t 1 ) • • • P(t n−1 )(1−F(T)), where F(•) is the cumulative distribution function, and the last term is for the second constraint.Using (1) and ( 2), and applying the logarithm function, we obtain the following log-likelihood: The first term denotes the sum of the log-intensity functions for the past n events, and the second term represents the non-event log-likelihood. Here, the intensity function λ(t) is defined in the interval t ∈ [t i , t i+1 ] according to the following expression: where β is the softness parameter, w λ ∈ R d×1 is a parameter that converts the term inside the exponential function into a scalar, and h is the hidden representation derived from the encoder.The essence of this intensity function aligns with that of the Neural Hawkes Process, as shown in Equation ( 6).The softplus function formulation was employed to guarantee non-negativity of the intensity. . . 
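Because the closed forms of the log-likelihood and the intensity are not fully reproduced above, the following is a generic sketch of how the event log-likelihood described in this section can be evaluated for any fitted intensity function: the first term sums the log-intensity at the observed event times, and the second (non-event) term approximates the integral of the intensity over the observation window. The trapezoidal integration and the toy softplus intensity are assumptions made for illustration, not the paper's exact parameterization.

```python
import torch

def event_log_likelihood(intensity, event_times: torch.Tensor, T: float, n_grid: int = 1000):
    """log L = sum_i log(lambda(t_i)) - integral_0^T lambda(t) dt, with the integral done numerically."""
    log_term = torch.log(intensity(event_times)).sum()     # log-intensity at observed events
    grid = torch.linspace(0.0, T, n_grid)
    non_event_term = torch.trapz(intensity(grid), grid)    # trapezoidal approximation of the integral
    return log_term - non_event_term

# Toy usage with a softplus-parameterized (hence non-negative) intensity, as in the text
w, b = torch.tensor(0.05), torch.tensor(0.1)
intensity = lambda t: torch.nn.functional.softplus(w * t + b)
events = torch.tensor([1.0, 2.5, 4.0, 4.2, 9.0])
print(event_log_likelihood(intensity, events, T=10.0))
```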
Discretized B and M loss The model predicts the discretized Bi & Mi , dB i ,M i , based on the hidden representations h(t i−1 ) as: where W B,M ∈ R b 2 ×d is the parameter of the discretized B i & M i predictor, and pi (d ′ ) is the d ′ -th element of pi .To measure the accuracy of B i & M i embedding, the following cross-entropy between the ground truth discretized B i & M i , d B i ,M i , and the predicted discretized Bi & Mi , dB i ,M i , is calculated: where is the ground truth one-hot encoding vector. . . B loss and M loss Additionally, the model utilizes the squared errors of the burstiness parameter directly as: where B i and Bi are the ground truth and predicted burstiness parameters, respectively.The squared errors of the memory coefficient value can be defined in a similar manner. where M i and Mi is ground truth and predicted memory coefficient, respectively. . . Overall loss By aggregating the aforementioned loss functions ( 21), ( 22), and ( 26)-( 28), the overall loss function of the model is defined as follows: where α 1 to α 4 are the hyperparameters that balance each loss function determined using the validation datasets.The overall framework of the BMT model is illustrated in Figure 3. Experiments . Synthetic datasets We generated synthetic data using the copula-based algorithm for two different inter-event time distributions.The model was tested for the exponential and power-law inter-event time distribution, which also have a different range of memory coefficient and burstiness, to directly understand the impact of temporal heterogeneity on the BMT model and other models.Along with the two synthetic datasets below, we tested the regular event sequences generated by the self-correcting process, as in Equation ( 5).The statistics of the datasets are displayed in Table 1. . . Power-law inter-event time distribution The power-law inter-event time distribution with a power-law exponent α is defined as , where θ (•) represents the Heaviside step function with a lower bound of 1.After substituting the inter-event time distribution into Equation ( 13), we obtain the next inter-event time τ i+1 from a given previous inter-event time τ i and random number x in 0 ≤ x < 1 as where ) (Jo et al., 2019). A total of 1,000 sequences with a power-law inter-event time distribution were generated with different parameters according to Equation ( 30).The power-law exponent α, memory coefficient M, and the number of events for each event sequence are randomly and independently drawn from 2.1 ≤ α ≤ 2.9, −1/3 ≤ M ≤ 1/3, and 50 ≤ n ≤ 500, respectively.The initial inter-event time was randomly drawn from 1 to 2. Depending on the power-law exponent and memory coefficient, the burstiness ranged from 0.297 to 0.962. As depicted in Figure 4, the power-law inter-event time datasets exhibit pronounced dispersion toward the region of larger burstiness and memory coefficients (B and M scatter plots).Moreover, these datasets show a power-law inter-event time distribution with exponent values α = 2.4 close to the average within the range of exponents 2.1 < α < 2.9. . . 
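Referring back to the five loss terms defined at the beginning of this section, the sketch below shows one plausible way they could be aggregated into the overall loss with weights α1–α4; the exact placement of the weights and the reduction over events are assumptions, since the overall loss equation is not reproduced in full above. The default weights mirror the fine-tuned values reported later in the Results and discussion section.

```python
import torch
import torch.nn.functional as F

def bmt_total_loss(pred_times, true_times,          # predicted / ground-truth event times
                   log_likelihood,                   # scalar event log-likelihood of the sequence
                   bm_logits, bm_targets,            # logits / token ids for discretized B & M
                   pred_B, true_B, pred_M, true_M,   # per-event burstiness and memory values
                   alphas=(1e3, 4e3, 1e4, 1e4)):
    a1, a2, a3, a4 = alphas
    l_time = F.mse_loss(pred_times, true_times, reduction="sum")    # squared error of event times
    l_event = -log_likelihood                                       # maximize the log-likelihood
    l_bm = F.cross_entropy(bm_logits, bm_targets, reduction="sum")  # discretized B & M tokens
    l_b = F.mse_loss(pred_B, true_B, reduction="sum")               # B loss
    l_m = F.mse_loss(pred_M, true_M, reduction="sum")               # M loss
    return l_time + a1 * l_event + a2 * l_bm + a3 * l_b + a4 * l_m
```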
Exponential inter-event time distribution The exponential inter-event time distribution with mean µ is defined as P(τ ) = µ −1 e −τ/µ and the corresponding cumulative distribution function is F(τ ) = 1 − e −τ/µ and the relationship between the parameter and memory coefficient is r = 4M.After substituting the inter-event time distribution into Equation ( 13), we obtain the next inter-event time τ i+1 from a given previous inter-event time τ i and random number x in 0 ≤ x < 1 as follows: where (Jo et al., 2019).A total of 1,000 sequences with an exponential inter-event time distribution were generated using different parameters, according to Equation ( 31).The mean inter-event time µ, memory coefficient M, and the number of events n for each event sequence were randomly and independently drawn from 1 ≤ µ ≤ 100, −1/3 ≤ M ≤ 1/3, and 50 ≤ n ≤ 500, respectively.The initial inter-event time was set to µ for each event sequence. As illustrated in Figure 4, the B and M scatter plots of the exponential inter-event time datasets show that B values are concentrated in the lower range, whereas M values exhibit a broader distribution spread both above and below.This contrasts with the self-correcting process datasets, where the B and M scatter plots show that both B and M clustered at ∼0.Although both datasets have an exponential inter-event time distribution, their heterogeneity differs owing to variations in the relationship between B and M.Even with an exponential interevent time distribution, appropriately shuffling inter-event times can generate event sequences with temporal heterogeneity (i.e., burst) characteristics.We examine this difference further later, as it plays a role in generating variations in performance. . Real-world datasets We adopted four real-world datasets to evaluate the models: the Retweets, StackOverflow, Financial Transaction, and 911 Calls The dataset covers a five-year period, which is a relatively extensive time frame for prediction purposes.Therefore, we partitioned the data into monthly intervals.Additionally, to ensure statistical significance, we included only The dataset is available on https://www.kaggle.com/datasets/mchirico/montcoalert/. Frontiers in Computational Neuroscience frontiersin.orgthose locations where the number of events exceeded 50 in the data. Although there are other commonly used datasets, the burst and memory-aware characteristics assumed by the BMT model are applicable when the sequence length is sufficiently long.Furthermore, we sampled event sequences in quantities comparable to synthetic data while concurrently excluding sequences with short lengths.The time units for each dataset are as follows: Retweet and StackOverflow datasets are in days, Financial Transaction datasets are in milliseconds, and 911 Calls datasets are in minutes.The statistics of the datasets are displayed in Table 1. 
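Returning to the copula construction described at the start of this subsection, and since the closed form of Equation (31) is not fully reproduced in the text, the following sketch draws successive exponential inter-event times by inverting the conditional FGM copula given the previous value, using the relation r = 4M stated above. The quadratic-root inversion is the standard conditional-sampling step for the FGM copula and is assumed here to agree with the paper's closed form up to algebra.

```python
import numpy as np

def next_iet_exponential(tau_prev: float, mu: float, M: float, rng: np.random.Generator) -> float:
    """Draw tau_{i+1} given tau_i for exponential inter-event times coupled by an FGM copula."""
    r = 4.0 * M                                   # relation between the FGM parameter and M
    u = 1.0 - np.exp(-tau_prev / mu)              # F(tau_i) for the exponential CDF
    x = rng.uniform()                             # uniform random number in [0, 1)
    c = r * (1.0 - 2.0 * u)
    if abs(c) < 1e-12:                            # no correlation: v is simply uniform
        v = x
    else:                                         # solve c*v^2 - (1 + c)*v + x = 0 for v in [0, 1]
        v = ((1.0 + c) - np.sqrt((1.0 + c) ** 2 - 4.0 * c * x)) / (2.0 * c)
    return -mu * np.log(1.0 - v)                  # F^{-1}(v) for the exponential distribution

# Generate one synthetic sequence (illustrative parameters)
rng = np.random.default_rng(0)
mu, M, n = 10.0, 0.2, 200
iets = [mu]                                       # initial inter-event time set to mu, as above
for _ in range(n - 1):
    iets.append(next_iet_exponential(iets[-1], mu, M, rng))
```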
As shown in Figure 5, when comparing the Retweets datasets (or Financial Transaction datasets) to the StackOverflow datasets (or 911 Calls datasets), it is evident that the Retweets datasets and Financial Transaction datasets are more temporally heterogeneous.In the B and M scatter plots, the Retweets datasets (or Financial Transaction datasets) are concentrated in regions with larger values for both B and M, whereas the StackOverflow datasets (or 911 Calls datasets) are centered around values near 0 for both B and M.However, when compared to the self-correcting process datasets, the StackOverflow (or 911 Calls datasets) datasets exhibit greater dispersion.Additionally, the inter-event time distribution reveals that the Retweets datasets and Financial Transaction datasets follow a power-law distribution (exponent of 1.36 and 1.70, respectively), whereas the StackOverflow datasets and 911 Calls datasets follow an exponential distribution. . Impact of B and M embedding and losses While altering the combination of loss functions during the experimental process, there were five control groups. 1. BMT-NoE&NoL (BMT without B & M embedding and without corresponding losses).The simplest scenario occurs when α 2 = α 3 = α 4 = 0, utilizing only time and event losses.In this case, only event time and intensity were considered.2. BMT-NoE&L (BMT without embedding for B & M, but with losses for either B or M).To incorporate the effects of the B & M losses, we also consider the case of α 2 = 0, α 3 > 0, and α 4 > 0 with time and event losses.Note that the case for α 2 > 0 relates to predicting the discretized on-hot B & M, and hence it is not applicable in this scenario.3. BMT-E&NoL (BMT without losses related to B & M, but with embedding for B & M).The control group examines the impact of loss for B & M; the representation vector remains consistent with the BMT model, as shown in Equation ( 18), but without L B,M , L B , and L M , that is, α 2 = α 3 = α 4 = 0. 4. BMT-B (BMT with B embedding only and the corresponding loss).In the case where only B is embedded and the model is trained, the loss is also computed exclusively based on B as α 2 = 0, α 3 > 0, and α 4 = 0 with time and event losses.5. BMT-M (BMT with M embedding only and corresponding loss).In the case where only M is embedded and the model is trained, the loss is also computed exclusively based on M as α 2 = α 3 = 0, and α 4 > 0 with time and event losses. Results and discussion We tested several hyperparameters for both the BMT and THP models and chose the configuration that yielded the best validation performance.The hyperparameters are as follows: the number of bins for discretization (b) is set to 40, mini-batch size is 16, dropout rate is 0.1, embedding dimensions (d and d H ) are both 128, selfattention dimensions (d K and d V ) are 32, with eight layers in the encoder and 8 heads.For the loss function, hyperparameters were fine-tuned, mainly as follows: α 1 = 1e3, α 2 = 4e3, α 3 = α 4 = 1e4.We employed the ADAM (adaptive moment estimation) optimizer with hyperparameters β set to (0.9, 0.999).Regarding the learning rate, we utilized PyTorch StepLR, initializing it at 1e-4 and reducing the learning rate by a factor of 0.9 every 15 steps. 
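For reference, the optimization settings reported above translate into the PyTorch configuration sketched below; `model` is a placeholder module, and whether the StepLR counter tracks optimizer steps or epochs is an assumption, since the text only says the learning rate decays by a factor of 0.9 every 15 steps.

```python
import torch

model = torch.nn.Linear(128, 1)   # placeholder for the BMT network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.9)

for step in range(100):
    optimizer.zero_grad()
    # ... forward pass, loss computation, loss.backward(), optimizer.step() ...
    scheduler.step()              # learning rate decays by a factor of 0.9 every 15 scheduler steps
```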
The performance evaluation results for different models across diverse datasets are presented in Table 2.The results indicate that BMT achieves superior performance compared to THP and other control models in terms of the root mean squared error (RMSE) of the event times and log-likelihood.The main metric, RMSE, is a unit-adjusted value obtained by taking the square root of Equation ( 21).It measures how much predicted event times of the model differ from the actual event times.However, RMSE has a drawback, especially in the case of heterogeneous data, where it can perform well by accurately predicting large values while potentially struggling with smaller ones.To address this limitation, we introduce the event log-likelihood, defined in Equation ( 22), as a second metric.This metric arises when probabilistically modeling event sequences using the intensity function λ derived from Equation (1).A higher likelihood of the intensity function calculated with predicted event times of the model indicates that the model better probabilistically mimics the actual event sequence.Consequently, larger values of this metric correspond to better performance.Additionally, when considering the B and M losses in Equation ( 26), they represent how well the model captures discretized burstiness and discretized memory coefficients.Smaller values of these losses indicate better performance in replicating these aspects. In particular, as the data became more heterogeneous, performance improvement became more pronounced.In synthetic datasets, the performance enhancement of the BMT model was greater for power-law inter-event time data than for self-correcting data, which is a less heterogeneous exponential inter-event time distribution (see Figure 4).Similarly, in real-world datasets, the overall performance of the BMT model was superior in the Retweets dataset, which exhibited a more power-law inter-event time distribution, compared to the StackOverflow datasets with a less heterogeneous exponential inter-event time distribution (see Figure 5). When compared to the BMT-NoE&L model with respect to the RMSE of the event times, the BMT model shows that superior performance across all datasets except StackOverflow.This suggests that the inclusion of B & M embedding processes aids in augmenting the performance of the model by enabling the encoder to grasp the burst structure of event sequences.Compared with the BMT-E&NoL model, the BMT model demonstrates enhanced performance across all datasets, indicating that the integration of B & M losses into the overall loss function contributes to the Summarizing the aforementioned findings, it is evident that both B & M embedding and B & M losses contribute to performance enhancement.Excluding either of these components would likely impede the attainment of a substantial performance improvement, comparable to that observed with the BMT model.If either of the B embedding or M embedding is omitted, a significant performance improvement comparable to that of the BMT model cannot be expected.This was substantiated by comparing the BMT model with the BMT-B and BMT-M models, which revealed the superior performance of the BMT model across all datasets.These results can also be observed in the training curves shown in Figure 6. 
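As a small, self-contained illustration of the main metric, the RMSE of event times can be computed as below; averaging over all predicted events of a sequence is an assumption, since the text only states that the RMSE is the unit-adjusted square root of the squared-error loss.

```python
import numpy as np

def event_time_rmse(pred_times: np.ndarray, true_times: np.ndarray) -> float:
    """Root mean squared error between predicted and actual event times (same unit as the data)."""
    return float(np.sqrt(np.mean((pred_times - true_times) ** 2)))

# e.g. predictions vs. ground truth for one event sequence (times in the dataset's unit)
print(event_time_rmse(np.array([1.0, 2.1, 4.2]), np.array([1.0, 2.0, 4.0])))
```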
We also conducted experiments on mixed synthetic datasets, the results of which are presented in Table 3.The mixed synthetic datasets comprised a combination of three individual datasets: power-law, exponential, and self-correcting datasets.However, when separately examining the RMSE of event time and loglikelihood, the performance of the original BMT model appeared slightly inferior compared to some of the control BMT models, demonstrating an overall superior performance when considering both metrics together. In summary, the BMT model demonstrates improved performance on heterogeneous data owing to its capability to capture heterogeneous characteristics through the embedding of B & M, combined with the inclusion of corresponding loss functions. The BMT model has two limitations.First, in cases where the event sequence length is short, the incorporation of B and M into the BMT model may result in reduced effectiveness.This aspect originates from the statistical characteristics of B and M, because their meaningful representation is hindered by fluctuations and noise, particularly when the number of events is small.In the BMT model, during the calculation of sliding Frontiers in Computational Neuroscience frontiersin.orgThe bold value indicates the metric of the model with the best performance for each individual dataset. B and M values, masking was applied to exclude the first three events.However, considering that temporal heterogeneity becomes a meaningful characteristic only when the length of the event sequence is sufficiently long, this limitation can be viewed as unavoidable.The second limitation is the inability to consider event types, which will be addressed in future studies.To account for event types, it is necessary to reflect the correlation structure between inter-event event types to generate synthetic data and subsequently test the model using these data.In the context of performance enhancement, the improvement of the BMT model over the THP model can also be attributed to the fact that the BMT model does not embed event types.This allows the model to focus more on predicting the event times.Because the BMT-NoE&NoL model is analogous to a version of the THP model that does not consider event types, comparing the performance of the BMT-NoE&NoL model with the BMT model would provide a more equitable assessment.However, upon comparing the BMT-NoE&NoL model with the BMT model, it becomes evident that the BMT model exhibits superior performance across all datasets, except for StackOverflow. Conclusion Our study addresses the challenges presented by bursty temporal patterns in event sequences across various domains.By leveraging recent advancements in predicting event sequences using Transformer models based on the Hawkes process with selfattention mechanisms, we introduced a Burst and Memory-aware Transformer (BMT) model.This model effectively captures the nuances of burst patterns by embedding burstiness and memory coefficient within its self-attention module.The incorporation of a specialized loss function tailored for burstiness and memory coefficient further refines the model's predictive capabilities. 
Through comprehensive numerical experiments conducted on a diverse array of synthetic and real-world datasets encompassing various scenarios, we validated the outstanding performance of the BMT model by comparing it with the existing models and control groups. This is particularly evident in scenarios involving heterogeneous data, such as power-law inter-event time distributions. Hence, the explicit consideration of burst-related parameters within the Transformer contributes to a deeper comprehension of complex event sequences, ultimately leading to an enhanced predictive performance. In future work, we will focus on integrating a multitude of insights from complex systems into the development of deep neural network models for temporal data. FIGURE Schematic diagram of the Burst and Memory-aware Transformer model. Leveraging information from the preceding events, including burstiness B and memory coefficient M, the model predicts the timing of the next event through B & M embedding and the corresponding B & M loss. FIGURE Architecture of the Burst and Memory-aware Transformer model. IET, inter-event time; FF, feed-forward neural network; B, burstiness; M, memory coefficient. FIGURE Relationship between burstiness and memory coefficient (left) and inter-event time distribution (right) across three synthetic datasets: (A, B) power-law inter-event time, (C, D) exponential inter-event time, and (E, F) self-correcting process. For calculating the inter-event time distribution, logarithmic binning was employed. FIGURE Relationship between burstiness and memory coefficient (left) and inter-event time distribution (right) for four real-world datasets: (A, B) Retweets, (C, D) StackOverflow, (E, F) Financial Transaction, and (G, H) 911 Calls. For calculating the inter-event time distribution, logarithmic binning was employed. FIGURE Training curves of RMSE for event times fitted on Financial Transaction datasets are presented for various BMT model scenarios: BMT-NoE&NoL, BMT-NoE&L, BMT-E&NoL, BMT-B, BMT-M, and the standard BMT model. Then, one can obtain the discretized pairs of B and M as (d_B, d_M), where d_B and d_M range from 1 to b. To map the pair into a unique natural number, the Cantor pairing function was employed. The Cantor pairing function maps the discretized d_B and d_M into a unique natural number d_{B,M}. TABLE Datasets statistics. IET, inter-event time; B, burstiness; M, memory coefficient; S.D., standard deviation. TABLE Performance evaluation results across diverse datasets for different models. TABLE (Continued) improved performance of the model. Even in the prediction of one-hot discretized B & M, it can be observed that including B & M losses contributes to a reduction in cross entropy. No significant differences in performance were observed between the BMT-NoE&NoL and BMT-NoE&L models. This suggests that the incorporation of B & M losses is less significant in the absence of B & M embedding. TABLE Performance evaluation results for the mixed synthetic datasets: power-law, exponential, and self-correcting datasets.
2023-12-16T16:58:56.476Z
2023-12-12T00:00:00.000
{ "year": 2023, "sha1": "38580dab7fdb8d1d6c46e40ad2bfa088cc654063", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fncom.2023.1292842/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e79b6bfedaf231bc841567b7983dc0796425adc9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
871580
pes2o/s2orc
v3-fos-license
Hyperleptinemia is associated with parameters of low-grade systemic inflammation and metabolic dysfunction in obese human beings Leptin is an adipose tissue-derived hormone that has been involved in hypothalamic and systemic inflammation, altered food-intake patterns, and metabolic dysfunction in obese mice. However, it remains unclear whether leptin has a relationship with parameters of systemic inflammation and metabolic dysfunction in humans. We thus evaluated in a cross-sectional study the circulating levels of leptin in 40 non-obese and 41 obese Mexican individuals, examining their relationship with tumor necrosis factor alpha (TNF-α), interleukin (IL) 12, IL-10, central obesity, serum glucose and insulin levels, and serum triglyceride and cholesterol concentrations. Circulating levels of leptin, TNF-α, IL-12, IL-10, and insulin were measured by ELISA, while concentrations of glucose, triglyceride, and cholesterol were determined by enzymatic assays. As expected, serum levels of leptin exhibited a significant elevation in obese individuals as compared to non-obese subjects, showing a clear association with increased body mass index (r = 0.4173), central obesity (r = 0.4678), and body fat percentage (r = 0.3583). Furthermore, leptin also showed a strong relationship with serum TNF-α (r = 0.6989), IL-12 (r = 0.3093), and IL-10 (r = −0.5691). Interestingly, leptin was also significantly related with high concentrations of fasting glucose (r = 0.5227) and insulin (r = 0.2229), as well as elevated levels of insulin resistance (r = 0.3611) and circulating triglyceride (r = 0.4135). These results suggest that hyperleptinemia is strongly associated with the occurrence of low-grade systemic inflammation and metabolic alteration in obese subjects. Further clinical research is still needed to determine whether hyperleptinemia may be a potential marker for recognizing the advent of obesity-related metabolic disorders in human beings. INTRODUCTION Obesity is now considered a major health problem worldwide, with a growing prevalence in the Mexican population (Olaiz-Fernández et al., 2006;Popkin, 2011). Obese people show a higher risk to develop type 2 diabetes (T2D), coronary heart disease, stroke, arterial hypertension, non-alcoholic steatohepatitis (NASH), and other obesity-related metabolic disorders (Ritchie and Connell, 2007). These pathologies have been recently associated with a low-grade systemic inflammatory state (Odegaard and Chawla, 2008), characterized by altered circulating levels of inflammatory mediators in the obese subject, including tumor necrosis factor alpha (TNF-α), interleukin (IL-) 12, C-reactive protein (CRP), and IL-10 (Rush et al., 2007;Bremer et al., 2011). TNF-α has been shown to increase with adiposity in mice and human beings (Steinberg et al., 2006). Circulating concentrations of IL-12 and CRP are elevated in overweight and obese individuals, exhibiting a significant association with increased body mass index (BMI) and waist circumference, as well high glucose and triglyceride levels (Visser et al., 1999;Suarez-Alvarez et al., 2013). On the contrary, serum IL-10 levels have been shown to decrease in high-fat diet fed-mice (Gotoh et al., 2012). As it can be seen, circulating proinflammatory factors have received growing attention since they could play a major role in mediating the low-grade inflammatory state, which seems to decisively contribute to the advent of obesity-related metabolic disorders (Ritchie and Connell, 2007;Bertola et al., 2010). 
Leptin is an adipose tissue-derived hormone belonging to the class-I helical cytokine family (Trinchieri, 2003). Leptin has been shown to regulate food-intake and energy expenditure in both rodents and humans (Houseknecht et al., 1998). Interestingly, circulating levels of leptin have been shown to increase in high-fat diet fed-mice and obese subjects, leading to a state of hyperleptinemia (Maffei et al., 1995;Lin et al., 2000). However, although hyperleptinemia strongly correlates with parameters of low-grade systemic inflammation and metabolic dysfunction in animal models of obesity (Munzberg, 2008;Arruda et al., 2011;Stienstra et al., 2011), they show controversial results in human beings. For instance, it has been reported that serum leptin is augmented in obese individuals with metabolic syndrome (MetS) that also show an elevation in the plasma levels of CRP (Kim et al., 2006). In contrast, obese non-diabetic women subjected to a caloric restriction diet show decreased values in plasma leptin without exhibiting a significant diminution in the circulating levels of TNF-α (Agueda et al., 2012). In the same sense, in obese adolescents leptin has been shown to rise independently of the levels of insulin resistance and TNF-α (Aguilar et al., 2012;Cohen et al., 2012). As it can be seen, it is still unclear whether hyperleptinemia has an association with the occurrence of low-grade systemic inflammation and metabolic dysfunction in humans. We thus studied the serum levels of leptin in non-obese and obese Mexican individuals, examining their possible relationship with parameters of low-grade systemic inflammation (TNF-α, IL-12, and IL-10) and metabolic alteration (elevated serum glucose and insulin, increased level of insulin resistance, high triglyceride and cholesterol concentrations, as well as increasing waist circumference and body fat percentage). SUBJECTS A total of 81 apparently healthy Mexican adult volunteers from the south-central region of Mexico were included in the study. All of the participants provided written informed consent, previously approved by an institutional review board of the General Hospital of Mexico "Dr. Eduardo Liceaga," which guaranteed that the study was conducted in accordance with the principles described at the Helsinki Declaration. Subjects were excluded from the study if they had previous or recent diagnosis of diabetes mellitus, cardiovascular diseases, chronic hepatic or renal disease, blood pressure higher than 140/90 mm Hg, inflammatory or autoimmune disorders, acute or chronic infectious diseases, cancer, and endocrine disorders. We additionally excluded pregnant or lactating women, subjects under any kind of cardiometabolic medication including anti-inflammatory, antiaggregant, and anti-hypertensive drugs, and subjects without having an 8-12 h overnight fasting. All of the individuals enrolled into the study received full medical evaluation, including the achievement of clinic history and physical examination by a physician. MEASUREMENT OF ANTHROPOMETRIC PARAMETERS According to the World Health Organization criteria for BMI, all of the participants were divided into two groups: control nonobese subjects (BMI 18.5-24.9 kg/m 2 ) and obese subjects (BMI ≥ 30 kg/m 2 ), where BMI resulted of dividing weight by height squared (kg/m 2 ). Waist circumference was obtained from each study subject, considering the midpoint between the lower rib margin and the iliac crest, using a conventional tape in centimeters (cm). 
For women, abdominal obesity was considered when their waist circumference were 80 cm or higher, whereas for men it was considered when their waist circumference were 94 cm or higher. Percentage of body fat was individually recorded by using a body composition analyzer (TANITA®, Body Composition Analyzer, Model TBF-300A, Tokyo, Japan). MEASUREMENT OF METABOLIC PARAMETERS Blood samples were individually taken after overnight fasting, and collected into pyrogen-free tubes (VacutainerTM, BD Diagnostics, NJ, USA) at room temperature. Collection tubes were then centrifuged at 1000 g/4 • C for 30 min, and serum samples obtained and stored at −80 • C in numerous aliquots until use. Total cholesterol and triglyceride were individually measured in triplicate by an enzymatic assay according to manufacturer's instructions (Roche Diagnostics, Mannhein, Germany). Serum insulin levels were individually determined in triplicate by means of the enzyme-linked immunosorbent assay (ELISA), following the manufacturer's instructions (Abnova Corporation, Taiwan). Serum glucose levels were individually determined in triplicate by the glucose oxidase assay, following the manufacturer's instructions (Megazyme International, Ireland). The estimate of insulin resistance was individually determined by means of the HOMA-IR, as follows: fasting insulin concentration (mU/L) × fasting glucose concentration (mmol/L) divided by 22.5. MEASUREMENT OF LEPTIN AND LOW-GRADE SYSTEMIC INFLAMMATION PARAMETERS Blood samples were individually taken after overnight fasting, and collected into pyrogen-free tubes (VacutainerTM, BD Diagnostics, NJ, USA) at room temperature. Collection tubes were then centrifuged at 1000 g/4 • C for 30 min, and serum samples obtained and stored at −80 • C in numerous aliquots until use. Serum levels of leptin, TNF-α, IL-10, and IL-12 were determined in triplicate by ELISA, following the manufacturer's instructions (Peprotech, Mexico). STATISTICAL ANALYSIS Data from anthropometric and metabolic parameters were analyzed by using the Student's t-test for determining significant differences. Data from leptin, TNF-α, IL-10, and IL-12 were analyzed by means of using the Mann-Whitney U-test for determining significant differences. The Spearman's correlation coefficient was performed for examining the relationship of leptin with anthropometric, metabolic, and inflammatory parameters. All of the studied groups were matched by gender and age. Statistical analysis was performed using the GraphPad Prism 5 software. Differences were considered significant when p < 0.05. RESULTS A total of 40 non-obese controls and 41 obese subjects were included in the study. No significant differences were observed in age (for non-obese controls mean age 29.9 ± 10.35 years, whereas for obese subjects mean age 34.9 ± 10.24 years), and women/men proportion (22 women and 18 men in the nonobese control group, and 20 women and 21 men in the obesity group) in the studied groups ( Data are presented as mean ± standard deviation. Differences were considered significant when p < 0.05. serum concentrations of TNF-α and IL-12 were clearly increased in obese individuals as comparing with non-obese control subjects (Table 1). Concomitantly, serum levels of IL-10 were significantly lower in obese individuals than in non-obese subjects ( Table 1). It merits to mention that BMI was clearly correlated with waist circumference (r = 0.9297, p < 0.0001) and body fat percentage (r = 0.7655, p < 0.0001) in our study population. 
In a similar way, there was also a significant association between waist circumference and body fat percentage (r = 0.7655, p < 0.0001). In accordance with previous reports, circulating levels of leptin were significantly increased in obese subjects when comparing to non-obese control individuals. In terms of BMI, leptin exhibited a significant 1.5-fold increase in obese subjects as comparing with normal weight controls ( Figure 1A). In obese individuals, the mean value of leptin was 1256.1 ± 207.7 ng/mL, whereas in the non-obese group it was around 812.1 ± 417.8 ng/mL ( Figure 1A). Interestingly, serum values of leptin still showed a significant elevation when examining in subjects with abdominal obesity (Figure 1B). For this case, the mean value of leptin in subjects with abdominal obesity was 1141.8 ± 343.6 ng/mL, while it decreased to 858.3 ± 419.6 ng/mL in individuals exhibiting a normal waist perimeter ( Figure 1B). As expected, our results show that circulating levels of leptin increase with obesity-related anthropometric parameters. Indeed, leptin was significantly correlated with increased BMI (r = 0.4173, p = 0.0001), central obesity (r = 0.4678, p < 0.0001), and body fat percentage (r = 0.3583, p = 0.0010) (Figures 2A-C, respectively). Furthermore, leptin also exhibited a clear association with parameters of metabolic alteration. In this sense, FIGURE 1 | Serum levels of leptin in obese and non-obese individuals. (A) Circulating leptin levels were assessed in normal weight and obese subjects, defining obesity according to the World Health Organization criteria for body mass index. In our study population, serum leptin was also evaluated in terms of abdominal obesity (B). For women, abdominal obesity was considered when the waist circumference was 80 cm or higher, whereas for men, it was considered when the waist circumference was 94 cm or higher. Data are expressed as mean ± S. E. Differences were considered significant when p < 0.05. FIGURE 2 | Statistical correlation between serum levels of leptin and anthropometric parameters of obesity. Serum levels of leptin were positively associated with body mass index (BMI) (A), waist circumference (B), and body fat percentage (C). Coefficients (r) and P-values were calculated by using the Spearman's correlation model. The correlation level was considered significant when p < 0.05. serum leptin was significantly related with increased levels of blood glucose (r = 0.5227, p < 0.0001) and insulin (r = 0.2229, p = 0.0455) (Figures 3A,B, respectively). There was also a significant relationship between leptin and the level of insulin resistance, estimated by means of the HOMA-IR (r = 0.3611, p < 0.0009) ( Figure 3C). Interestingly, leptin had a positive association with increased triglyceride levels (r = 0.4135, p = 0.0001), but not with total cholesterol (Figures 4A,B, respectively). Besides having significant relationships with anthropometric and biochemical parameters associated with obesity-related metabolic alterations, hyperleptinemia also showed a strong relation with markers of low-grade systemic inflammation in the study subjects. In fact, circulating leptin was positively correlated with serum levels of TNF-α (r = 0.6989, p < 0.0001) and IL-12 (r = 0.3093, p = 0.0050) (Figures 5A,B), whereas a significant negative relationship between leptin and IL-10 was observed in the study population (r = −0.5691, p < 0.0001) ( Figure 5C). 
DISCUSSION As mentioned, it has been consistently shown that serum concentrations of leptin increase in high-fat diet fed-mice and obese humans. In mice, hyperleptinemia is related to hyperphagia and -12 (B). At the same time, hyperleptinemia exhibited a significant inverse association with circulating concentrations of IL-10 (C). Coefficients (r) and P-values were calculated by the Spearman's correlation model. The correlation level was considered significant when p < 0.05. fat depot augmentation, while it is associated with increased white adipose tissue (WAT) mass and body weight gain in human beings (Stanley et al., 2005). However, recent experimental evidence from animal models of obesity suggests that leptin is not only a marker of weight gain but also seems to have a relationship with developing a systemic state of low-grade inflammation and metabolic disturbance. In fact, obese mice exhibiting hyperleptinemia show increased plasma levels of inflammatory cytokines (Dube et al., 2008;Yang et al., 2010;Enos et al., 2013), accompanied by numerous metabolic disorders including hyperglycemia (Yang et al., 2010), hyperinsulinemia (Stienstra et al., 2011), hyperlipidemia (Kang et al., 2011), liver steatosis (Shih et al., 2010), and insulin resistance (Yang et al., 2010). Taking this experimental evidence into account, it is important to evaluate whether hyperleptinemia could also be associated to the establishment of systemic inflammation and metabolic dysfunction in human beings with high metabolic risk, such as obese individuals. In humans, the relationship of leptin with a state of metabolic dysfunction has been barely studied, showing inconclusive results to date. Indeed, a study conducted in a group of obese adolescents revealed that high serum levels of leptin correlate with increased BMI and waist circumference, without having a significant relation with the insulin resistance level (Aguilar et al., 2012). In contrast, recent data from a cross-sectional survey conducted in obese adults demonstrated that hyperleptinemia is clearly associated with BMI, hyperinsulinemia, and insulin resistance (Martins Mdo et al., 2012). Our results are consistent with the last study, since high levels of leptin were significantly correlated with hyperglycemia, hyperinsulinemia, increased HOMA-IR, and hypertriglyceridemia in our study population. A possible explanation to understand this apparently controversial finding may involve the age of the subjects included in each study. As it is widely known, the level of sex-steroid hormones reaches a plateau during maturity, in comparison to both childhood and adolescence where numerous hormonal variations are normally observed (Stanhope and Brook, 1988;Rogol, 2004). Interestingly, it has been recently reported that sex-steroid hormones are able to upregulate the leptin expression in human and rat cells (Feng et al., 2011;Gambino et al., 2012). Therefore, it is feasible to expect that synthesis of leptin may be enhanced during adulthood, which may contribute to decrease the leptin levels in children/adolescents in comparison with adults. However, additional clinical studies considering the influence of sex-steroid hormones upon the systemic levels of leptin are necessary in order to address major conclusions. An interesting finding in this cross-sectional study involves the relationship of hyperleptinemia with a systemic state of lowgrade inflammation in obese human beings. 
A recent study demonstrated that leptin is overexpressed in the subcutaneous adipose tissue (SAT) of obese individuals with MetS, as comparing with SAT from healthy obese subjects and non-obese individuals (Farb et al., 2011). Furthermore, increasing in the leptin expression is accompanied by macrophage infiltration and overexpression of proinflammatory cytokines such as IL-1β, IL-6, and IL-8 in the SAT of those same patients (Bremer et al., 2011;Farb et al., 2011). Consistent with this previous study, our results show that hyperleptinemia is significantly associated with high serum levels of TNF-α and IL-12, as well as reduced concentrations of IL-10 in subjects with central obesity, hyperglycemia, increased insulin resistance, and hypertriglyceridemia. IL-12 is a cytokine with the ability to induce synthesis of interferon-gamma (IFN-γ) in T cells and natural killer cells (Trinchieri, 2003). IFN-γ is a key mediator in releasing of TNF-α by classically activated macrophages (Odegaard and Chawla, 2008). Taking into consideration that serum IFN-γ has been shown to increase during obesity (Azar Sharabiani et al., 2011), it is conceivable to expect a positive relationship among TNF-α, IL-12, and leptin in our study population. Another intriguing issue concerning the positive association among leptin, TNF-α, and IL-12, involves the ability of leptin to regulate the expression of inflammatory cytokines. It has been recently reported that leptin is able to stimulate the in vitro production of TNF-α and IL-1β in human mononuclear cells (Tsiotra et al., 2013). Thus, it is now proposed that high levels of leptin may induce the production of proinflammatory cytokines in obese people, contributing in this way to the systemic inflammation observed in these subjects. Nevertheless, before establishing a possible cause-and-effect relation among leptin, TNF-α, and IL-12 in the scenario of obesity, additional prospective clinical research is required. Moreover, consistent with the installation of a systemic state of low-grade inflammation, we observed a significant reduction in the circulating levels of IL-10 in obese individuals as comparing with non-obese subjects. IL-10 is a cytokine with potent anti-inflammatory properties in mice and humans (Saraiva and O'garra, 2010). However, the role of IL-10 during systemic inflammation and metabolic dysfunction is still poorly understood (Formoso et al., 2012;Tajik et al., 2012). For this reason, it is important to mention that the present work is one of the first contributions showing a significant inverse correlation between serum IL-10 and hyperleptinemia in obese individuals. Collectively, these findings suggest that obesity-related hyperleptinemia is accompanied by a low-grade inflammatory profile, characterized by increased circulating levels of TNF-α and IL-12, and reduced concentrations of IL-10. Whether hyperleptinemia is cause or consequence of the systemic inflammatory milieu in humans, is a matter worthy of being considered in further basic and clinical studies. Present results demonstrate that high circulating levels of leptin are significantly associated with a systemic state of lowgrade inflammation and metabolic dysfunction in obese subjects. Additional prospective clinical studies are still required to evaluate whether hyperleptinemia may be used as a marker for recognizing the advent of obesity-related metabolic disorders in human beings.
2016-05-04T20:20:58.661Z
2013-05-27T00:00:00.000
{ "year": 2013, "sha1": "983319534b2777bec79035bd43c4feeda4beb905", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnint.2013.00062/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "983319534b2777bec79035bd43c4feeda4beb905", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
244600286
pes2o/s2orc
v3-fos-license
STATE AUTHORITY AND PUBLIC TRUST IN NATIONAL ZAKĀH MANAGEMENT: HISTORICAL LESSONS, FIQH DISCOURSE, AND INTERNATIONAL COMPARISON The aspect of zakāh management or administration is not regulated extensively in Islamic law. Since the dawn of Islam, zakāh management has become the field of ijtihād based on mashlaḥah. And today, the practice of zakāh management in contemporary Muslim countries has been incarnating a wider area of experiment. In contemporary Indonesia, the Law Number 23 Year 2011 concerning Zakāh Management has been passed. This law, which become effective since 2016, caused upheaval within national Islamic philanthropy sector since it regulates national zakāh management currently dominated by civil society, based on “classical fiqh opinion” that only the state has authority to manage zakāh. This paper lift up an important conclusion that zakāh management entirely by the state is not be in effect unconditionally, but with many of qualifications. Moreover, the effectiveness of zakāh management by state relies heavily on the level of public trust against government, not by enforcement of the state. Zakāh management by the state is merely an instrument, not the goal itself. The ultimate objective that must be pursued is the delivery of zakāh to those who deserve it with optimum benefits. INTRODUCTION Regarding its central position in Islam as one of the most important formal rituals ('ibâdah mahdhah), zakāh comes with comprehensive operational conditions, ranging from types of wealth on which zakāh is imposed on (mâl al-zakāh), the amount of zakāh (miqdâr al-zakāh), the limit amount of wealth before its zakāh is paid (nishâb), the time limit of wealth ownership before its zakāh is paid (haul), until the allocation of zakāh (mashârif al-zakāh). Nevertheless, the aspect of zakāh management or administration is not regulated extensively in Islamic law. The Prophet Muhammad is reported to have managed and regulated zakāh directly and treated it as part of public finance. But it happened in a time when the structure of the state was still trivial, the level of economy was low, and the territory was limited. In fact, Islamic history recorded that along with expanding of territory, growing of economy, and an ever-increasingly complex structure of government, policies related to zakāh management have also changed dynamically from time to time, which seems to follow the principle of tasharruf al-imâm 'ala ar-ra'iyyah manûth bi al-mashlahah (government's policy related to its people is bound to public interest). STATE AUTHORITY AND PUBLIC TRUST IN NATIONAL ZAKĀH MANAGEMENT: HISTORICAL LESSONS, FIQH DISCOURSE, AND INTERNATIONAL COMPARISON 3 every civil society groups that want to participate in national zakāh management, only allowed to "assist" BAZNAS with limited authority. With the main idea that zakāh management is the exclusive authority of government, the entire Law Number 23/2011 significantly strengthen and gives privileges to the government-owned zakāh operator (BAZNAS) as the sole holder of authority in national zakāh management. Under the Law Number 23/2011, Central BAZNAS acts as operator as well as regulator for other operators, while the Ministry of Religious Affairs runs the functions of supervision, coaching, and execution of sharia compliance for all operators, BAZNAS and LAZ alike. 
National zakāh governance is built through reporting and accountability to higher structure and regulator, namely Central BAZNAS, and administrative sanctions for non-compliance by the Ministry of Religious Affairs. At the same time, Law Number 23/2011 marginalizes and extremely limits civil society-based zakāh operators (LAZ) which currently are the main players in national zakāh, to the point that it's potentially "lethal". It is an irony, considering that in the last three decades, Indonesia has been experiencing a resurgence of zakāh after managed by civil society. With a transparent and professional conduct based on modern management principles, LAZ has been utilizing zakāh as an economic power for social change in Indonesia. Zakāh of Indonesia, which was originally circulated only in the realm of individual charity, has now been transformed to reach the realm of public empowerment. Along with the raising of public trust and the growing Muslim middle class in Indonesia, the potential of zakāh funds are explored becomes even greater. Amid this resurgence, Law Number 23/2011 then appeared, seeming to want to seize the enormous potential of national zakāh from LAZ. The national debate over Law Number 23/2011 was eventually brought to the Constitutional Court (Mahkamah Konstitusi). Judicial review over the Law made by civil society in the mid-2012 ending in disappointment: the main substance of the Law Number 23/2011 remained valid. In its decision on October 31, 2013, the Court rejected all major lawsuits against the Law Number 23/2011. Steep road now awaits Indonesia's zakāh sector, which currently relies heavily on LAZ. This study will challenge the main hypothesis of the Law Number 23/2011, that only the state has the authority in the management of national zakāh. The research adopts methodological pluralism, using historical, fiqh, and economic approaches. Besides being in line with the nature of the problem studied, the methodological pluralism will also guiding us closer to the meaning and purpose of Islamic law. Section 2 reviews the history of zakāh management particularly in early periods of Islamic civilization. Section 3 discusses fiqh discourse among jurist, across schools and ages, about management of zakāh entirely by the state. Section 4 analyses recent practices of national zakāh management in contemporary Islamic world. Section 5 concludes. ZAKĀH MANAGEMENT IN ISLAMIC HISTORY The collection of zakāh has been started since the dawn of Islam by the Prophet Muhammad, beginning in the 2 nd year of hijra (624 AD) according to the majority's opinion. Zakāh on the soul (zakāt al-fithr) is a voluntary act, closely associated with the feast of 'id al-fithr, and done individually. This is diametrically opposed to zakāh on wealth (zakâh al-mâl), which has been obligatory since the prophetic age. The collection of zakāt al-mâl from the very beginning has been regulated and managed directly by the With the increasing population of Muslim community and territorial expansion of Islamic State, the Prophet then appointed "a large number" of zakāh officers, including the famous companions of the Prophet such as 'Umar and 'Alî, to collect zakāh from the Muslim community. It can be said that the Prophet has appointed zakāh officers for the entire Arabia in his time. It thus becomes the general basis for the opinion, that since the time of Prophet Muhammad, zakāh has been an affair and duty of the government 4 . 
But what is clearer is, the appointment of "special officers" of zakāh by the Prophet marked a new era in which zakāh was no longer managed personally by the Prophet, but collectively managed by professionals who received a share of the collected zakāh revenue under the allocation of 'âmilîn. The Prophet Muhammad himself as zakāh organizer did not receive a share of zakāh revenue, neither did the Prophet's family and relatives during his lifetime. Thus, there has been a transformation of zakāh management which led to the formal, collective, organized and permanent structure since the time of the Prophet Muhammad. Some other characteristics of zakāh management in the time of the Prophet are the detailed regulations regarding the collection and distribution of zakāh, including the etiquette of zakāh officers and the ideal public attitude towards zakāh officers, separation of zakāh from other state revenues along with its separate distribution, the general principle of local collection and distribution where zakāh is distributed in areas in which it is collected without being deposited centrally, zakāh calculation which is generally done by zakāh payer (self-assessment), and compulsory zakāh collection by officer which is only applied to livestock and crops 5 . When the Prophet Muhammad demised, there were some who raised questions as to whether zakāh is paid to the Prophet personally or to the government. During the time of Caliph Abû Bakr, some Arab Bedouin tribes refused to pay zakāh on the assumption that zakāh is the Prophet's personal income, so that when the Prophet passed away, zakāh is no longer mandatory 6 . It is recorded in history that Abû Bakr declared war on those who refused to pay zakāh, an incident known as the riddah war. The event is often misunderstood by some people, particularly the orientalists, as the evidence that the nature of zakāh was still unclear at the time of Prophet Muhammad and, as the consequence, Abû Bakr was the person who responsible for the institutionalization of zakāh as a permanent obligation in Islam. Al-Qaradhâwî argues that the incident happened not because the concept of zakāh was unclear at that time, but rather because those tribes were recently converted to Islam and still highly affected by their previous Bedouin life 7 . This historical event -the decision of Caliph Abû Bakr to fight those who refused to pay zakāh -is also widely used as a justification for the forced zakāh collection by the 3 Amelia Fauzia. (2013). Faith and the State: A History of Islamic Philanthropy in Indonesia.Leiden: Brill Academic Publishers, page 45. 4 Yûsuf al-Qaradhâwî. (1988). Fiqh al-Zakâh (Indonesian Translation). Bogor: Pustaka Litera AntarNusa, page 738-739. 5 Monzer Kahf. (1993). "Zakah Management in Some Muslim Society", No. 11, Negative attitude of the Bedouins towards zakāh has been noticeable since the time of the Prophet Muhammad (p.b.u.h). They saw zakāh they paid as a loss or fine/penalty (maghram). See al-Qur'ân 9: 98. The nomadic life and its inherent difficulties made Bedouins harder to accept ethical obligations that were not directly related to their interests. 7 Al-Qaradhâwî. Fiqh al-Zakâh. page 92-93. 9 . This indicates that zakāh as part of public finance institution in Islam has a dual dimension, the ritual and political dimension. When the Prophet Muhammad passed away, the phenomenon of riddah (apostasy) erupted. 
There are two cases here, those who were apostates and claimed the status of prophethood, and those only who rejected the sharia law, including refusing to pay zakāh to the government. The act of "departing to obey the just ruler" as done by the revolters (ahl al-baghy) who refused to pay zakāh had different consequences from the apostates (ahl al-riddah) who left Islam. Ahl al-riddah were much more dangerous to Islamic State at that time, since they rejected not only the political authority but also religious authority. Hence, ahl al-baghy who refused to pay zakāh to the authorities hadn't been categorized as apostates. That's why 'Umar advised Abû Bakr not to fight them. If then Abû Bakr insisted to fight ahl al-baghy in the same way as he fought ahl al-riddah, it was his political decision as a ruler to keep the integration of the newly formed state, not an ideological one 10 . STATE AUTHORITY AND PUBLIC TRUST IN NATIONAL The political dimension of zakāh is also visible in the decision of Abû Bakr to fight only those who refused to pay zakāh on livestock (al-mawâsyî) while leaving those who refused to pay zakāh on money (shâmit). Livestock, which at that time consisted of camels (ibil), cows (baqar), and sheeps (ghanam), is a form of wealth that is clearly visible and not easily hidden. Since the socio-political purpose of zakāh is to transfer wealth from the rich to the poor, the government as political authority was justified to use its power to achieve this goal. In this spirit to realize the distributive economic justice, Abû Bakr fought those who refused to pay zakāh on livestock 11 . The political authority of government to implement forced zakāh collection is limited to visible wealth (amwâl al-zhâhirah) only. Whereas for the form of wealth that is invisible and can easily be hidden by its owner (amwâl al-bâthinah), the government does not have the political right to force people and should leave the zakāh payment for this form of wealth as a personal matter, unless the person voluntarily submits it to the government. If amwâl al-zhâhirah is within the political dimension of zakāh, then amwâl al-bâthinah is within the ritual dimension. Abû 'Ubayd confirmed that this is the sunnah of the Prophet Muhammad, where he sent zakāh officers to livestock owners to collect zakāh from them, either voluntarily (ridhâ) or forcibly (kurh). But there is no evidence that the Prophet has ever forced people to pay zakāh on currency 12 . This is the reason that made Abû Bakr only fought those who refused to pay zakāh on livestock and left those who did not pay zakāh on money. Abû Bakr did not want to go too far into the area where he as a ruler did not have authority. The concept that distinguishes amwâl al-zhâhirah and amwâl al-bâthinah which determines the political and ritual character of zakāh, and therefore determines the role and position of the government in implementing its political power, is generally accepted by Islamic jurists in the field of public law, such as al-Mâwardî. Al-Mâwardî clearly stated that the owner of amwâl al-bâthinah has more authority than the zakāh officers to pay its zakāh out 13 . Thus, through riddah war, Abû Bakr has an important role in saving the original character of zakāh. Had Abû Bakr did not fight those who refused to pay zakāh amwâl al-zhâhirah, zakāh would have lost its political character and only become a personal ritual. And therefore, zakāh would be meaningless as part of Islamic public finance institution. 
Conversely, had Abû Bakr fought all who did not pay zakāh, without discriminating between the owners of amwâl al-zhâhirah and amwâl al-bâthinah, then zakāh would have been equated with tax in general, whose basis is entirely political 14.

In the context of the state's political power over the collection of zakāh on amwâl al-zhâhirah, we can understand the policy of Caliph 'Umar to grant a dispensation of zakāh payment on livestock during the economic crisis known as the year of ramadah in 18 H. A long drought sweeping the whole Hejaz led 'Umar to postpone the collection of zakāh on livestock in the year of ramadah by not sending zakāh officers. A year later, when the long drought had passed, zakāh officers came to the livestock owners and collected two years' worth of zakāh 15. 'Umar is also recorded as the person responsible for institutionalizing zakāh collection by placing officers on roads, bridges and ports to collect zakāh on commerce from Muslim merchants at the rate of 2.5%, while also collecting taxes from non-Muslim merchants, both local (dzimmî) and foreign (harbî), at the rate of 10% ('usyr) 16.

The implementation of the state's political power over zakāh on amwâl al-zhâhirah faced great challenges after the reign of Caliph 'Utsmân. The dynamics of zakāh management in the early days of Islam are narrated in detail by Abû 'Ubayd (d. 224/838). At first, zakāh was paid directly to the Prophet Muhammad or to someone who was trusted to manage it. In the time of Abû Bakr, zakāh was paid to Abû Bakr or to someone who was trusted to manage it. Similarly, during the time of 'Umar, zakāh was handed to 'Umar or to someone who was appointed to manage it. The practice continued in the time of 'Utsmân, when zakāh was handed to 'Utsmân or to someone who was appointed to manage it. But after 'Utsmân was killed, starting from the reign of 'Alî, the Muslim community began to divide in its opinion: some still paid their zakāh to the ruler; others distributed their zakāh directly to mustahik 17.

After the era of khilâfah al-râsyidah (the guided Caliphate), political conditions and public trust in government did not improve. The situation worsened with the growing perception in the wider Muslim community that governments after khilâfah al-râsyidah no longer had religious commitment. During the Umayyad Dynasty, rulers were portrayed as untrustworthy, failing to deliver zakāh to those who deserve it (mustahik), … (alcoholic beverages), and appointing non-Muslims as zakāh collectors 18. All these things only increased the Muslim community's reluctance to pay zakāh to the government. This reluctance to pay zakāh to the authority is reflected in the attitude of early Islamic jurists such as Ibn 'Umar (d. 73/692). In spite of those facts, zakāh management by the government, both ritually and politically, continued to run and evolve, while trying to maintain the common pattern practiced in the time of the Prophet and khilâfah al-râsyidah. Caliph Mu'âwiya is recorded as the first person to collect zakāh by deducting it directly from the salaries of state employees. A special government office to receive the payment of zakāh, dîwân al-shadaqah, was established during the time of Caliph Hisyâm (d. 125/743). However, Abû Yûsuf (d. 182/798) reported that the zakāh management system at that time was corrupt and inefficient.
Zakāh collection was conducted by kharâj officers who did not record zakāh revenue separately as specified by sharia, whereas zakāh on commercial goods was collected by 'usyr officers ('usysyâr) and managed separately from other types of zakāh. The joint management office of zakât and awqâf (dîwân al-birr wa al-shadaqah), introduced in 315/927, showed a decrease in zakāh revenue 19. Under a centralized management of public finance, all fiscal revenues (huqûq) in Bayt al-Mâl, like khums, kharâj, jizyah, 'usyr and zakât, were spent without differentiating the type of expense. This raised a strong suspicion that the process of zakāh distribution did not comply with sharia law. Sharia provisions, which require zakāh to be distributed only to eight groups (ashnâf) and to be spent primarily at the local level, appear to have been abandoned. The only source of public revenue managed in accordance with sharia was waqf, which was not part of the fiscal revenue but was under the state's control, generally through the qâdhî 20. With zakāh administration generally merged into tax administration, there was no longer any case of zakāh being treated as a special object of management by the state, which would have implied the existence of dedicated zakāh officers. Al-Ghazâlî (1058-1111) confirmed that in most countries of his time one could no longer find two groups of zakāh receivers, namely mu'allaf and 'âmil 21. Ibn Taymîyya (1263-1328) reported that in Egypt during the Mamlûk Dynasty, zakāh management by the state involved many violations of the provisions. Zakāh was often collected from improper types of wealth, even from types of wealth excluded by sharia. The rate of zakāh was not limited to the maximum rate of 2.5% of the wealth value in general, or 10% for agricultural products. Zakāh was also often collected before a year had passed 22.

There is almost no adequate information regarding the details of zakāh management in Islamic history. But the practice of zakāh management in the Islamic world -in Arabia, the Turkish Ottoman Empire and Mughal India -shows several common patterns 23: (i) zakāh collection by the state is applied only to "visible" wealth (amwâl al-zhâhirah), with or without being called zakāh, while zakāh on "invisible" wealth (amwâl al-bâthinah) is paid voluntarily; (ii) zakāh collection by the state is not carried out by a special …

18 Abû 'Ubayd. Al-Amwâl, page 686-689.
19 Zysow, "Zakât", page 409.
20 Cl. Cahen. (1986). "Bayt al-Mâl", in H.A.R. Gibb, et. al. (Eds.)

It is clearly seen that the political character of zakāh -the payment of zakāh on amwâl al-zhâhirah to the authority -has had its ups and downs; it has not always been observed by the Muslim community. The implementation of political zakāh depends much on the level of Muslim public trust in the state. The ritual character of zakāh -the payment of zakāh on amwâl al-bâthinah and its delivery to mustahik -has, by contrast, always been observed by the Muslim community throughout history as a personal matter, without any state intervention.

FIQH DISCOURSE

The jurists in general agreed that the ruler has to appoint and send officers to collect zakāh. Since there are those who have wealth but do not know about the obligation of zakāh, and those who know about the obligation but are too stingy to pay it, it is obligatory to have zakāh collectors 24. The mention of those who administer zakāh using the term 'âmilîn 'alayhâ in al-Qur'ân 9: 60 indicates that zakāh should be managed in the best possible way.
The majority of 'ûlamâ agreed that the instruction "khudz min amwâlihim" ("take zakāh from their wealth") in al-Qur'ân 9: 103 is addressed to the Prophet Muhammad and to everyone who holds the affairs of the Muslim community after him 25. The case of the riddah war corrected the misinterpretation of al-Qur'ân 9: 103 that zakāh collection was merely the Prophet's personal authority. The instruction "khudz min amwâlihim" (al-Qur'ân 9: 103) was revealed in the context (asbâb al-nuzûl) of the acceptance of the repentance of the Prophet's companions who did not fight with him in the Battle of Tabuk. After Allâh accepted their repentance (al-Qur'ân 9: 102), they brought their wealth to the Prophet and asked him to donate it on their behalf while asking forgiveness for them. But the Prophet refused to do so, prompting the revelation of al-Qur'ân 9: 103, which says: "Take zakāh from their wealth, by which you cleanse and purify them" 26. Then, when the Prophet passed away, some Bedouin tribes thought that zakāh was no longer compulsory, since there was no longer the Prophet's prayer to cleanse and purify them. The policy of Caliph Abû Bakr to fight those who refused to pay zakāh on livestock thus saved the political character of zakāh, the type of zakāh which has to be paid to the government to be managed.

The majority of 'ûlamâ agree that the management of zakāh on amwâl al-zhâhirah is the sole authority of the ruler, who has the right to collect it forcibly. Regarding the management of zakāh on amwâl al-bâthinah, there are differing opinions. The Hanafî and Syâfi'î schools considered the management of zakāh on amwâl al-bâthinah to be the domain of the wealth owner. The Mâlikî school said that one should hand over all of one's zakāh, zhâhir as well as bâthin, to the ruler even if the ruler is zhâlim, so long as the ruler can be trusted to manage zakāh. Meanwhile, the Hanbalî school stated that handing over zakāh to the ruler is not obligatory but permissible, whether the ruler is just or zhâlim (unjust), and whether it concerns zakāh on zhâhir or bâthin wealth 27.

Al-Qaradhâwî chose and supported two opinions regarding zakāh management in Islamic fiqh 28. First, zakāh management is part of the Muslim government's authority, and the government is entitled to collect zakāh on all types of wealth, be it zhâhir or bâthin, especially when the ruler knows that the people abandon zakāh payment. Second, the government's failure to manage zakāh, by abandoning and not collecting zakāh from society, does not nullify the individual responsibility to pay zakāh: muzakki still have to calculate the amount of zakāh they owe and distribute it themselves to mustahik.

In affirming the government's authority, and even obligation, to manage zakāh according to the provisions of sharia, al-Qaradhâwî added the qualification that the government should let the wealth owner distribute one third or a quarter of their zakāh obligation themselves, in accordance with the tradition of the Prophet. Moreover, al-Qaradhâwî also required that the authority to collect zakāh applies only to an Islamic government, one in which Islam is defined as the legal basis of government and statehood, including its political, economic, social and cultural aspects. A secular government based on non-Islamic ideology has no right to collect zakāh and is prohibited from doing so 29.
Nevertheless, the information from Abû 'Ubayd suggests that the discourse on zakāh management by the ruler cannot be separated from disagreement and has been full of dynamics, even in the early days of Islam. The fiqh discourse regarding the handing over of zakāh to the ruler was first articulated after the assassination of Caliph 'Utsmân. The dynamics of this discourse can be observed well in the attitude of Ibn 'Umar. At first, Ibn 'Umar clearly stated that zakāh must be handed over to the ruler, even if the ruler no longer had religious commitment: so long as the ruler was Muslim (observing prayer), it was mandatory for people to submit their zakāh to him. However, after following the dynamics in society, Ibn 'Umar finally revised his opinion, no longer requiring people to hand over zakāh to the ruler, but rather to distribute it directly to those who deserve it (mustahik) 30. This clearly indicates that when the 'ûlamâ assert the obligation to hand over zakāh on amwâl al-zhâhirah to the ruler, they assume that the government has Islamic characteristics. When the ruler's religious commitment degrades significantly, they no longer require people to observe the political dimension of zakāh, but rather its ritual dimension, by distributing zakāh directly to mustahik. This fiqh dynamic confirms the character of zakāh as a special public finance institution, in the sense that zakāh should be distributed to the public, whether through the government or not. The distributive aspect of zakāh is much more important than its collection aspect 31.

Al-Qaradhâwî strongly asserted that the ruler's religious commitment to Islamic teachings is the requirement for handing zakāh over to him. But if a zhâlim ruler still collects zakāh according to the provisions of sharia, then the collection is considered legitimate and the wealth owner may pay zakāh to him. The wealth owner is also permitted to hand over zakāh to a ruler, even a zhâlim one, so long as the ruler distributes zakāh to the right target group. Therefore, if the ruler is zhâlim and does not distribute zakāh to those who deserve it, then it is better for the wealth owner to distribute zakāh themselves to mustahik, provided it is not collected by the ruler 32.

Abû 'Ubayd added the further qualification that the obligation to hand over zakāh to the ruler is limited to zakāh on amwâl al-zhâhirah only, while the payment of zakāh on amwâl al-bâthinah is within the individual domain, unless the owner hands it over voluntarily to the ruler. Abû 'Ubayd asserts that this is the sunnah of the Prophet: the Prophet Muhammad sent zakāh collectors to the owners of amwâl al-zhâhirah, that is livestock, and collected zakāh from them, either voluntarily or forcefully. But there is no record stating that the Prophet forced people to pay zakāh on amwâl al-bâthinah, that is gold and silver coins 33.

Al-Qaradhâwî specifically refuted Abû 'Ubayd, who distinguished between amwâl al-zhâhirah and amwâl al-bâthinah and stated that zakāh collection by the ruler is limited to zakāh on amwâl al-zhâhirah only, while zakāh on amwâl al-bâthinah is paid voluntarily by individuals. Al-Qaradhâwî argued that the Prophet did not send officers to collect zakāh on amwâl al-bâthinah because people had handed over their zakāh to the Prophet voluntarily, and that leaving zakāh on amwâl al-bâthinah uncollected was meant to raise the conscience of his companions.
Furthermore, al-Qaradhâwî took the case of 'Umar as justification for zakāh collectors to collect zakāh on zhâhir as well as bâthin wealth, rather than leaving it to the personal choice of the owner. Caliph 'Umar is recorded to have collected a type of amwâl al-bâthinah, namely commercial wealth, from Muslim merchants at the rate of 2.5%, and also to have collected tax from non-Muslim merchants, both local (dzimmî) and foreign (harbî), at the rates of 5% and 10% ('usyr) respectively. During the era of Caliph 'Utsmân, the state treasury in Bayt al-Mâl became more abundant and there seemed to be difficulty in collecting zakāh on amwâl al-bâthinah, so 'Utsmân decided to collect zakāh on amwâl al-zhâhirah only, while leaving zakāh on amwâl al-bâthinah within the domain of its owner, as a form of trust and also to ease them. Thus, in al-Qaradhâwî's view, the original rule of the matter is for the ruler to collect zakāh on all types of wealth, be it zhâhir or bâthin 34.

However, al-Qaradhâwî's criticism of Abû 'Ubayd seems to contain several weaknesses. First, the distinction between amwâl al-zhâhirah and amwâl al-bâthinah is generally adopted by the jurists to define the implications for the government's authority to collect zakāh forcibly: the government is entitled to collect forcibly only zakāh on amwâl al-zhâhirah. But this distinction does not imply that the government is prohibited from managing zakāh on amwâl al-bâthinah. The government is still allowed to manage it, but its collection should be based on voluntarism, not coercion; society can manage it themselves or hand zakāh on amwâl al-bâthinah over to the government voluntarily. Second, the use of the case of commercial zakāh collection by Caliph 'Umar as justification that the ruler is entitled to collect zakāh on both amwâl al-zhâhirah and amwâl al-bâthinah forcibly is not appropriate. The jurists, especially those of the Hanafî school, justified the collection of commercial zakāh, which is part of amwâl al-bâthinah, on the basis of the protection (himâya) provided by the government over that type of wealth. When the owners of commercial goods carry their wealth on public roads, they have brought their wealth into the area of the government's protection. For the same reason, similar treatment is also applied to commercial wealth owned by non-Muslims, in the form of 'usyr 35. By collecting zakāh on commercial goods on roads, bridges or ports, 'Umar seems to have held the opinion that commercial wealth is no longer categorized as amwâl al-bâthinah when it is brought by its owner into public places. Furthermore, 'Umar's adoption of the opinion that the ruler collects zakāh on amwâl al-zhâhirah only is evident in the case of the dispensation of zakāh payment to the state during the crisis of the ramadah year, which was applied only to zakāh on livestock, an amwâl al-zhâhirah 36. The case of the ramadah-year crisis clearly shows that when 'Umar used zakāh as an instrument of fiscal policy, a tool to fight the crisis (counter-cyclical policy), he used only the type of zakāh which is under the domain of the government, that is zakāh on amwâl al-zhâhirah, in this case zakāh on livestock. Third, the policy of distinguishing between zakāh on amwâl al-zhâhirah and on amwâl al-bâthinah was not merely an ijtihâd of Caliph 'Utsmân, but rather a confirmation of what had been set by the Prophet and the previous caliphs, Abû Bakr and 'Umar.
The Prophet Muhammad is recorded to have collected zakāh forcibly on livestock only, and did not do the same for gold and silver. Abû Bakr waged war against those who refused to pay zakāh on livestock and spared those who refused to pay zakāh on currency. Meanwhile, 'Umar gave a dispensation during a time of economic crisis by postponing the payment of zakāh on livestock only.

From the above fiqh discourse, it can thus be seen that zakāh is part of the Islamic public finance institution, which the government has the authority to manage. But the authority to collect zakāh requires that the government have Islamic characteristics and not be zhâlim. Furthermore, the government's authority is limited to zakāh on amwâl al-zhâhirah only. Therefore, the political character of zakāh, that is, the handover of zakāh to the ruler, depends on the level of the ruler's religious commitment and of public trust in the ruler. Given that most Muslim nations in the contemporary era are secular, not using Islam as their national principle, and some are even under zhâlim authoritarian regimes, it is easy to understand why zakāh management in modern Muslim society has become an arena of experimentation.

All of the above discussion shows that, in exercising the authority of zakāh collection, an Islamic government has to show strong commitment to religious teachings, govern justly, collect and distribute zakāh according to the provisions of sharia, collect zakāh forcibly on amwâl al-zhâhirah only, and give zakāh payers the chance to distribute one third or a quarter of their zakāh directly. If the above qualifications are not met, then the political dimension of zakāh is no longer compulsory, leaving only its ritual dimension: zakāh shall be distributed to the public, whether through the government or not. In other words, zakāh management by the state is not the goal, but rather a means. The ultimate objective of zakāh management is the delivery of zakāh to the right receivers (mustahik) with optimal benefit. This conclusion is in line with the contemporary fiqh principle al-'ibrah bi maqâshid al-syarî'ah (historical lessons have to refer to the higher objectives of the law) and with the intent and purpose of sharia. This conclusion, that zakāh management by the state is merely an instrument and not the goal, will better protect mashlahah by encouraging the formulation of sharia-oriented policy (siyâsah shar'iyyah), which is focused on benefit (shalâh) and avoids harm (fasâd).

NATIONAL ZAKĀH MANAGEMENT IN CONTEMPORARY ERA: INTERNATIONAL COMPARISON

Most Muslim states today are national-secular states, which do not use Islam as their national principle, and some are even governed under zhâlim authoritarian regimes. Zakāh management by a secular state is a condition not much discussed in classical fiqh studies. It is not surprising, then, that zakāh management in contemporary Muslim societies has become a subject of various experiments. Based on its collection, contemporary zakāh management can in general be divided into two categories 37. First, the obligatory system, in which zakāh payment to the state is enforced and there is a penalty for non-compliance. Such a system is recorded to be implemented in six Muslim nations, namely Saudi Arabia, Pakistan, Sudan, Libya, Yemen and Malaysia, countries which made Islam their national principle. Second, the voluntary system, in which the collection and distribution of zakāh are done voluntarily.
In the voluntary system, zakāh management is run by both government and civil society and there is no legal penalty for not paying zakāh. This system is applied in the majority of Muslim nations, which are in general secular and do not use Islam as their national principle, such as Kuwait, Bangladesh, Jordan, Indonesia, Egypt and South Africa, and in non-Muslim countries where Muslims are the minority.

The rise of Western imperialism from the 16th century and the fall of the last Islamic caliphate, the Turkish Ottoman Empire, in 1924 meant that almost all Muslim nations entered the 20th century as colonies. Under the rule of non-Muslim colonialists, zakāh management in general disappeared from the public sphere and became entirely a voluntary activity at the individual level. After World War II, Muslim nations which gained their independence started to pay attention again to zakāh management. Some Muslim nations, which generally made Islam their national principle, chose the obligatory system with collective management by the state, such as Saudi Arabia (1951), Libya (1971), Yemen (1975), Pakistan (1980) and Sudan (1984). In three countries (Yemen, Sudan and Pakistan), the implementation of zakāh is enshrined in the state constitution. Most other Muslim nations, secular in general, chose the voluntary system with several variations. There are at least three variants of zakāh management in this voluntary system. First, zakāh management by non-governmental charitable organizations, which exist in many Muslim countries and communities; such charitable organizations are characterized by a high level of donor confidence, strong local character and high operational efficiency. Second, zakāh management by a semi-governmental institution which collects zakāh voluntarily and distributes it to those who deserve it; the only example of this is the Nasser Social Bank (1971).

From the experiences of Muslim countries which adopted the obligatory system, it can be seen that the implementation of the system varies widely. The scope of the zakāh obligation in Yemen includes zakāh al-fithr and zakāh al-mâl. In Saudi Arabia, Libya, Pakistan and Sudan, the obligation includes zakāh al-mâl only, while in Malaysia the obligation applies only to zakāh al-fithr. Regulations regarding the types of wealth on which zakāh al-mâl is imposed also vary. In Sudan and Yemen, zakāh al-mâl is imposed on the types of wealth contained in classical fiqh rules; in Saudi Arabia, zakāh al-mâl is imposed on agricultural products, livestock and tradable goods; in Pakistan, zakāh al-mâl is also imposed on financial and monetary assets, as well as agricultural products 40.

Distinctions also exist in the collection and distribution of zakāh. Regarding collection, Saudi Arabia and Sudan base the amount of zakāh paid on self-assessment by the muzakki; if the amount seems unreliable, an official zakāh collector may recalculate it. Meanwhile in Pakistan, zakāh on financial assets is collected directly by the institutions which manage the assets (deduction at source). Regarding zakāh collection on agricultural products and livestock, Saudi Arabia appoints zakāh officers to calculate the zakāh obligation of the muzakki and distribute it directly to mustahik, with an exception applied to wheat, where the zakāh payment is performed at the marketing stage. Pakistan, on the other hand, requires that zakāh on agricultural products be paid through a local zakāh committee, which calculates and collects zakāh in the form of cash. Meanwhile in Sudan, zakāh on agricultural products is collected by the tax institution in the form of goods or cash at the marketing stage 41.

38 Zysow, "Zakât", page 420.
39 Yusuf Wibisono. (2015). Mengelola Zakāh Indonesia: Diskursus Pengelolaan Zakāh Nasional dari Rezim Undang-Undang Nomor 38 Tahun 1999 ke Rezim Undang-Undang Nomor 23 Tahun 2011. Jakarta: Prenada Media, page 152.
40 Abu Al-Hasan Sadeq. (1994). "A Survey of The Institution of Zakah: Issues, Theories and Administration", No. 11.

The obligatory system, in which the state's role in zakāh management is dominant and significant, theoretically has many justifications. First, to implement zakāh effectively in social life, a power is needed to compel and to manage, and the state has such power. Second, the state has the systems and resources needed to manage zakāh effectively and efficiently; government systems and resources are also spread all over the country, which helps ensure that zakāh is administered fairly. Third, the state can also provide legal certainty and harmonize zakāh and tax. This will eventually strengthen the zakāh institution.

Comparison between the performance of the compulsory system and the voluntary system reveals ambiguous results. In terms of fund collection, the performance of the compulsory system is better than that of the voluntary system, even though fund collection under the compulsory system is still lower than the potential of zakāh that could be achieved. However, in terms of fund distribution, the performance of the compulsory system is lower than that of the voluntary system, owing to low capacity, weak initiative and insufficient supervision. Regarding the scope of work, the ability of the compulsory system to reach muzakki and mustahik is indeed wider than that of the voluntary system, since the system applies nationally and is supported by the government's bureaucracy from the highest to the lowest level 42.

Nevertheless, the execution of the compulsory system in some countries triggers many problems and technical complexities which cannot be ignored. In Pakistan, the execution of the obligatory zakāh system creates segregation in Pakistani society, since Shia Muslims are excluded from the zakāh obligation. The integration of zakāh into the Pakistani taxation system also creates difficulties for the many Muslims who are illiterate and uneducated and cannot understand the many regulations and administrative procedures of zakāh, which become complicated under the obligatory system. Zakāh avoidance is also rampant, committed by not keeping wealth in the form of assets on which zakāh is imposed, or by moving wealth out of the institution which will collect zakāh right before the collection is performed every 1st of Ramadhan. The obligatory zakāh system has also created harsh political competition to control the zakāh fund, which is a significant economic resource, between the ruling and opposition parties 43.

In Kedah, Malaysia, there has been peasant resistance against obligatory agricultural zakāh collection, especially zakāh on rice. This resistance is expressed by refusing to register the total cultivated land area with the officer, or by registering it while under-reporting the land area and the amount of harvest. The rice zakāh actually paid is lower than the already reduced official report, and the rice zakāh paid is often defective or rotten, soggy, or contains straw, mud or gravel.
This peasant resistance against "royal zakāh" under a centralized obligatory system -to distinguish it from "personal zakāh", which is based on voluntarism and decentralized at the individual level -is rooted in the perception of unfairness in the regulation of the zakāh obligation, indifference to local needs, and the perception of corruption in its collection 44.

Furthermore, obligatory and voluntary systems can be sorted according to their collection organization. From this perspective, there are five forms of contemporary zakāh management. First, obligatory zakāh collection performed by the state, as in Pakistan, Sudan and Saudi Arabia; a variant of this type is an obligatory system with collection performed by a religious authority, as in several states in Malaysia. Second, obligatory zakāh collection performed by a private company, as in Malaysia (Selangor). Third, voluntary zakāh collection performed by the state, as in Kuwait and Jordan; a variant of this is voluntary zakāh collection by a religious authority, as in Singapore. Fourth, voluntary zakāh collection performed by private institutions, as in Egypt. Fifth, voluntary zakāh collection performed by civil society, as in South Africa, Indonesia and Algeria 45.

Contemporary zakāh management becomes more complex if we sort it based on the organization of zakāh distribution. In Pakistan, zakāh is collected and distributed by the state. In Malaysia, zakāh is managed at the state level with different institutional arrangements: in Kuala Lumpur and Negeri Sembilan, zakāh collection is performed by a company while distribution is performed by the government, and in Selangor a company even manages all zakāh operational activities. Meanwhile in South Africa, zakāh management is performed entirely by civil society through non-profit organizations 46.

CONCLUSION

Zakāh management in Indonesia is unique. Before the enactment of Law Number 38/1999, zakāh in Indonesia was entirely voluntary at the individual level. But since the 1990s, it has been experiencing a resurgence as a socio-economic movement in the hands of civil society through various professional 'âmil organizations. After the passing of Law Number 38/1999, zakāh management in Indonesia became officially affiliated with the state authority, but it remained voluntary and widely included the role of civil society groups. In this context, Law Number 38/1999 was wise, because the prevalent practice that had been running well for a long time was not disturbed and the state chose to strengthen the system. From this perspective, the centralization of zakāh management entirely by the state promoted by Law Number 23/2011 is subject to many qualifications. First, the institutional centralization of zakāh by government does not guarantee performance improvement, and can even be a boomerang. In many Muslim countries, zakāh collection by governmental institutions is small compared to its potential 47. Even when centralization is accompanied by sanctions for negligent muzakki, it still cannot guarantee a satisfactory improvement in zakāh revenue performance. In Pakistan, Sudan and Saudi Arabia, which implement the compulsory system, zakāh collection is still relatively much smaller than its potential, although the performance of the compulsory system is indeed better than that of the voluntary system 48. Second, management centralization as a means to increase zakāh performance in Indonesia is invalid and ahistorical, and denies the role of civil society in a democratic nation.
The performance of zakāh collection and utilization is determined more by the legitimacy and reputation of the collecting institution than by institutional centralization by government. The transparent operational activities of companies and non-profit organizations are preferred and foster the trust of muzakki. Trust is the keyword here. Amid the poor reputation of the bureaucracy and the low level of public confidence, it is difficult to hope that zakāh performance would increase after centralization 49. This centralization is also ahistorical, considering the three-decade track record of LAZ in Indonesia's zakāh management, and it negates the participation of society, which is an important component of national development. This conclusion is in line with recent study findings that the success of zakāh management is not determined by whether the operator is from the government or the private sector; it is determined by credibility and the trust of muzakki, which are functions of integrity, transparency and good governance. The coexistence of governmental and private zakāh operators is desirable because it will increase fund collection, foster efficiency and provide wider options for muzakki. When a governmental operator acts as regulator, it should limit itself to the role of regulator only, leaving its role as operator to private operators 50.

The main argument which paved the way for the passing of Law Number 23/2011 in Indonesia is the idea that zakāh management is the sole authority of the government. This notion is claimed to be based on al-Qur'ân and hadîth, as well as on historical practice in the Islamic world from the classical to the contemporary era. Deeper research shows that, apart from the central position of zakāh and the completeness of its operational provisions in Islam, the practice of zakāh management is dynamic, open to many fiqh interpretations, and has empirically been a subject of experimentation throughout Islamic history. Zakāh, along with the other instruments of Islamic philanthropy, which are moral obligations of Muslims to do good materially in the name of God, is determined more by individual willingness and faith than by the enforcement of state authority. From a historical perspective, zakāh management by the state has depended much on the level of public trust in the government. According to fiqh judgement, zakāh management by the state is justified when its qualifications are met: a government which is committed to religious teaching, collects zakāh only from amwâl al-zhâhirah, and manages zakāh according to sharia provisions. In today's context, zakāh management in contemporary Muslim nations has become a subject of experiments, whether in the obligatory system or in the voluntary system. The article concludes that zakāh management exclusively by the state is not generally applied without exception, but is full of qualifications. Moreover, the success of zakāh management by the state relies heavily on the level of public trust in government, not on state coercion. In other words, zakāh management by the state is merely an instrument, not the goal itself. The ultimate objective is the delivery of zakāh to those who deserve it (mustahik) with optimal benefit.

49 Faiz showed that after the implementation of the obligatory zakāh payment system in Pakistan, most zakāh is received through deduction at source, and only a very small amount is paid voluntarily to the governmental zakāh institution, while people keep paying voluntarily, in large amounts, to credible charitable organizations. See Faiz Muhammad. (1990). "The Relationship Between Obligatory Official Zakah Collection and Voluntary Zakah Collection by Charitable Organizations". Proceeding on Third Zakah Conference, 14-17 May 1990, Kuala Lumpur, Malaysia, page 163-195.
50 See IRTI and Thomson Reuters. (2015). Islamic Social Finance Report 2014, page 15.
Leucine-rich repeat kinase 2 mutations and Parkinson's disease: three questions

Mutations in the gene encoding LRRK2 (leucine-rich repeat kinase 2) were first identified in 2004 and have since been shown to be the single most common cause of inherited Parkinson's disease. The protein is a large GTP-regulated serine/threonine kinase that additionally contains several protein-protein interaction domains. In the present review, we discuss three important, but unresolved, questions concerning LRRK2. We first ask: what is the normal function of LRRK2? Related to this, we discuss the evidence of LRRK2 activity as a GTPase and as a kinase and the available data on protein-protein interactions. Next we raise the question of how mutations affect LRRK2 function, focusing on some slightly controversial results related to the kinase activity of the protein in a variety of in vitro systems. Finally, we discuss what the possible mechanisms are for LRRK2-mediated neurotoxicity, in the context of known activities of the protein.

INTRODUCTION

In 2002, Funayama et al. reported a new genetic linkage to dominantly inherited PD (Parkinson's disease) in a series of families from the Sagamihara region of Japan. The original evidence for a gene that caused PD was quite strong, but there were some unusual features. For example, the disease appeared to be dominant, but had decreased penetrance, i.e. people who had inherited the chromosomal region that tracked with disease did not always exhibit signs of PD. Also, a few autopsies had been carried out previously on members of the family and, whereas they had the expected neurodegeneration in the substantia nigra that is typical of PD and related diseases, they did not have Lewy bodies (Hasegawa and Kowa, 1997). Lewy bodies are intracellular neuronal aggregates made up in part of the protein α-synuclein, and represent an important marker of typical, sporadic forms of PD (for a discussion of the distinction between Lewy body PD and Parkinsonism, see Hardy et al., 2006). It was therefore not initially appreciated that this family represented much more than an unusual, possibly even private, disease that resembled PD. However, less than 2 years later, not only had several families been found worldwide that were linked to the same chromosomal locus (Zimprich et al., 2004a), but also several mutations were found in LRRK2 (leucine-rich repeat kinase 2) (Paisán-Ruíz et al., 2004; Zimprich et al., 2004a). By the next year, LRRK2 mutations were shown to be relatively common, occurring in 1-30% of all PD depending on the population under study and whether familial PD is excluded or included (reviewed in Cookson et al., 2005). This mutation frequency is incredibly high for a disease that had, for many years, been considered not to be genetic. All mutations reported to date are inherited in a dominant fashion, and homozygotes have similar phenotypes and age at onset as heterozygotes (Ishihara et al., 2007), indicating a true dominant effect. Importantly, the original Japanese family was also shown to have a mutation in LRRK2, confirming the correct identification of the gene (Funayama et al., 2005). In fact, LRRK2 mutations in general are similar to those in the original family. Penetrance is age-dependent, but still incomplete (Hulihan et al., 2008; Latourelle et al., 2008), as shown by mutation carriers surviving into their 80s without developing symptoms of PD (Kay et al., 2005), far past the typical onset of ~50 years of age.
Also, examination of additional family members from the Sagamihara kindred revealed that some cases do have Lewy bodies. The variable pathological outcomes of LRRK2-related disease were also emphasized by Zimprich et al. (2004b) and confirmed in a number of other studies. Even cases with the same mutation can have different outcomes, commonly having, but occasionally lacking (Gaig et al., 2007), Lewy bodies, even though the clinical phenotypes are similar (reviewed in Cookson et al., 2008). The data discussed so far tell us that LRRK2 mutations are a surprisingly common cause of inherited PD. The decreased penetrance of many LRRK2 mutations means that the genetic contribution to lifetime risk of PD has probably been underestimated in the past. Furthermore, the variable pathological outcomes of LRRK2 mutations emphasize that the clinical course of disease is not entirely synonymous with the underlying protein deposition, although it may be a useful clue as to mechanism. Overall, LRRK2 mutations represent a substantial advance in our understanding of lifetime risk of PD, while slightly complicating our knowledge of the relationship between symptoms and pathology. Given this, how do inherited LRRK2 gene mutations actually cause an adult-onset neurodegenerative condition? One way to address this critical problem is to consider the intermediates between gene and phenotype, the altered proteins that are produced by a mutant allele. In the present review, we break this down into three apparently simple questions, namely what is the normal function of LRRK2, how do mutations change function and why might this altered function result in PD? At this time, many of the more honest answers to these questions are that we simply do not know, but it is our hope that, by discussing each in turn, we might be able to identify some of the key next steps for understanding LRRK2 biology.

QUESTION ONE: WHAT DOES LRRK2 NORMALLY DO?

At the time of sequencing genes in the linkage region on chromosome 12 that had been nominated to contain the gene responsible, LRRK2 was not the most attractive candidate. Not only was it poorly characterized, but also it was rather large, requiring a significant investment of time to sequence it. Overall, the full-length cDNA is 7.5 kb long, encoding an ~280 kDa protein. LRRK2 is named for its leucine-rich repeats and a kinase domain. This arrangement is shared by one other protein, LRRK1. In between these two regions is a GTPase sequence called a ROC [for Ras of complex proteins (Bosgraaf and Van Haastert, 2003)] domain and an adjacent COR (C-terminal of ROC) domain. This pair of domains is characteristic of the ROCO superfamily of proteins, which all contain tandem ROC-COR domains, but do not all contain kinase domains. In the human genome, there are four ROCO proteins, of which three [LRRK1, LRRK2 and DAPK1 (death-associated protein kinase 1)] are also kinases, but one [MFHAS1 (malignant fibrous histiocytoma amplified sequence 1)] is not (Lewis, 2009). In other species, there are variable numbers of LRRK homologues; Drosophila melanogaster and Caenorhabditis elegans have a single LRRK protein.
It has been misstated that the kinase domain of LRRK2 is related to the MLKs (mixed lineage kinases), but analysis of all kinase domains throughout the human genome suggests that LRRK1 and LRRK2 form a small offshoot group of the RIPK (receptor-interacting protein kinase) family of kinases, which are somewhat similar to the IRAK (interleukin-1 receptor-associated kinase) family and rather more distant from MLKs (Manning et al., 2002). In contrast, the kinase domain of DAPK1 is quite distinct from either LRRK homologue (Manning et al., 2002). It has been proposed that, throughout evolution of the LRRK genes, the kinase domain has been acquired from different sources and is quite divergent in sequence (Marin, 2006, 2008; Marin et al., 2008). What makes LRRK2 different from LRRK1 is the N-terminus, which is much longer in LRRK2. The motifs in this region are not well annotated, but a number of repeat sequences can be found that have a limited homology with sequences found in the ankyrin family. Finally, near the C-terminus of LRRK2 is a WD40 domain that probably forms a β-propeller structure. The significance of the presence of both ankyrin-like and leucine-rich repeats and a WD40 domain is that they are very likely to be protein-protein interaction motifs and, with so many present, this indicates that LRRK2 may act as a scaffold for several other proteins. WD40 domains in other proteins can also interact with lipids (e.g., McArdle and Hofmann, 2008), raising the possibility that LRRK2 might be present at intracellular membranes. A diagram of the domain structure of LRRK2 is shown in Figure 1. An added layer of complexity arises because LRRK2 self-interacts (Gloeckner et al., 2006) to form a dimer. Other ROCO family proteins also form dimers via COR-COR interactions (Gotthardt et al., 2008); whether the equivalent region of human LRRK2 has a similar structure is unclear, as there is some evidence of ROC-ROC interactions (Deng et al., 2008). Regardless of the exact structural basis of LRRK2 dimerization, the key motifs are similar enough in LRRK1 to make heterodimers at least a possibility. This information leads us to a model of a large protein with a central catalytic GTPase/kinase region surrounded by protein-protein and perhaps protein-membrane interaction motifs, forming homo- and possibly hetero-dimers. Understanding function requires demonstration of whether the kinase and ROC (GTPase) domains are enzymatically active. For the kinase domain, the answer is a slightly tepid yes. Several groups, including our own, have reported that full-length LRRK2 or LRRK1 immunopurified from mammalian cells has measurable kinase activity (Gloeckner et al., 2006; Greggio et al., 2006; Korr et al., 2006; MacLeod et al., 2006; Smith et al., 2006; Greggio et al., 2007; Ito et al., 2007; Jaleel et al., 2007; Li et al., 2007; West et al., 2007; Greggio et al., 2008; Imai et al., 2008). One small concern is that the apparent activity of LRRK2 might arise from an inadvertently co-purified kinase. This is unlikely, as several groups have used artificial kinase-dead variants that have 10-20% of the activity of wild-type protein. Furthermore, the LRRK2 kinase domain alone expressed in Escherichia coli can be active (Luzon-Toro et al., 2007), as can a larger fragment expressed in a baculovirus system (Anand et al., 2009). The major caveat here is that many of these studies used autophosphorylation as a readout for kinase activity.
Such assays are commonly used to identify kinases because they work, but that does not prove that the autophosphorylation event is physiologically relevant. Sometimes autophosphorylation can be an important regulatory mechanism, e.g. in the receptor tyrosine kinases, which autophosphorylate when they form a dimer upon binding of their ligands. For full-length LRRK2, we have shown that phosphorylation occurs within each dimer molecule, although the isolated kinase domain can autophosphorylate in trans (Luzon-Toro et al., 2007). But at this stage, the autophosphorylation site(s) in LRRK2 have not been identified and so they cannot yet be proven in vivo. There are two sites (Ser-2032 and Thr-2035) in the activation loop that appear to diminish kinase activity when mutated to alanine residues, although these have not yet been proved to be authentically due to autophosphorylation. However, LRRK2 is known to be active using autophosphorylation assays when purified from transgenic mouse brain, where it is apparently more active than in other tissues or in cultured cell lines. An alternative to autophosphorylation as an assay for kinase activity is to use generic substrates such as MBP (myelin basic protein) (Iaccarino et al., 2007; Luzon-Toro et al., 2007). Again, LRRK2 is modestly active in these assays, which suggests that LRRK2 is a serine/threonine-directed kinase, but does not lead to the idea that MBP is a specific substrate for LRRK2, as many serine/threonine kinases will mediate the same reaction. Two heterologous LRRK2 substrates have been proposed: moesin (Jaleel et al., 2007) and 4E-BP [eIF4E (eukaryotic initiation factor 4E)-binding protein] (Imai et al., 2008). Moesin is one of three proteins collectively named ERM for the members ezrin/radixin/moesin. The major role of ERM proteins is to anchor the cytoskeleton to the plasma membrane, and thus influence processes in the cell related to cytoskeletal dynamics at the cell surface, such as maintenance of neuronal growth cones (Paglini et al., 1998). The C-terminal region of moesin, which contains an actin-binding site, can interact with a FERM (4.1/ezrin/radixin/moesin) domain in moesin's N-terminal region in a closed conformation. A shift to an open conformation is required for binding to the cytoskeleton. The site on moesin that is phosphorylated by LRRK2 (Thr-558, with a minor site at Thr-526) is in the C-terminal domain and thus is normally relatively inaccessible; probably because of this, Jaleel et al. (2007) found that pre-heating recombinant moesin to >60°C was required to see activity. Quantification suggested that, even under these circumstances, phosphate was incorporated into moesin at a stoichiometry of ~10%, i.e. even here, moesin is not a very efficient substrate. Short peptides containing the moesin Thr-558 motif, which would not be so structurally restricted, also act as LRRK2 substrates (Jaleel et al., 2007; Anand et al., 2009). Overall, these data suggest that moesin and related proteins are potential substrates for LRRK2. However, LRRK2 has not yet been shown to be an authentic kinase for moesin in vivo. The requirements to show this are necessarily quite high, but showing (for example) that LRRK2-knockout cells or animals have decreased moesin Thr-558 phosphorylation and that transfection with active full-length LRRK2 increases the same event, or restores it in the case of knockouts, would be one way forward.
As discussed in the next section, the pathogenic LRRK2 mutations may be helpful in teasing apart some of these problems and assessing their relevance for disease mechanisms. Although the steps needed to support LRRK2 being an authentic in vivo kinase for moesin are extensive, they are feasible, as shown by the work of Imai et al. (2008) on a second proposed LRRK2 substrate. 4E-BP is an interactor of eIF4E, which in turn binds to capped mRNA species, promoting their translation. Binding of 4E-BP to eIF4E prevents the latter from being active, and therefore 4E-BP is a repressor of protein translation. Oxidative stress and other stimuli that have an impact on protein translation affect phosphorylation of 4E-BP. Imai et al. (2008) have proposed that LRRK2 modulates this system by phosphorylating 4E-BP. In mammalian cell culture, overexpression of LRRK2 increases 4E-BP phosphorylation at a number of sites; Imai et al. (2008) propose that LRRK2 first phosphorylates 4E-BP at Thr-37/Thr-46, which then acts as a stimulus for further phosphorylation by other kinases at secondary sites including Ser-65/Thr-70. There is a modest decrease in phosphorylation of 4E-BP Thr-37/Thr-46 and Ser-65 when LRRK2 levels are knocked down with RNAi (RNA interference). Furthermore, overexpression of 4E-BP rescues the effects of LRRK mutants in vivo in Drosophila models, which show increased sensitivity to oxidative stress. Overall, these data are supportive of 4E-BP being an authentic in vivo substrate for LRRK2 or its Drosophila homologue, dLRRK. However, at the time of writing, no independent confirmation of the results of Imai et al. (2008) has been published, and details of the phosphorylation reaction, such as how efficient phosphorylation of 4E-BP by LRRK2 is or whether this activity is more efficient than autophosphorylation, are not yet available. Therefore several pieces of data support the idea that LRRK2 is an active kinase, although the data to show that this is true in vivo are more limited. What about the proposed GTPase activity of the ROC domain? Again, evidence here is mixed on whether this domain is enzymatically active, and some of the details are important. All published data support that full-length LRRK1 (Korr et al., 2006; Greggio et al., 2007) and LRRK2 (Smith et al., 2006; Guo et al., 2007; Ito et al., 2007; West et al., 2007) will bind GTP at millimolar concentrations. However, data on whether the protein is active as a GTPase are mixed. West et al. (2007) reported that they were unable to detect GTPase activity in full-length LRRK2 when expressed and immunopurified from mammalian cells, whereas Ito et al. (2007) could only see measurable activity if the protein was mutated to resemble the more active small GTPase, Ras. In contrast, we and others (Guo et al., 2007; Li et al., 2007) were able to detect GTPase activity under similar circumstances. Although small technical details may be critical for identifying why the experiment can give different results under different circumstances, the most likely explanation is that the apparent GTPase activity of full-length LRRK2 is quite weak. For example, we used an artificial mutant (K1347A) that cannot bind GTP and therefore should have no GTPase activity as a reference, and found that wild-type LRRK2 was only slightly more active than this negative control. In contrast, the ROC domain is much more active when removed from the context of the full-length protein, either when expressed in E. coli (Deng et al., 2008) or in mammalian cells.
The simplest explanation is that sequences outside of the ROC domain modulate GTPase activity, perhaps by physical interaction or by recruitment of other cellular proteins. In prokaryotic ROC-COR proteins, dimerization is critical for GTPase activity, and the COR domain may provide at least part of the dimer interface (Gotthardt et al., 2008). If this is also true for human LRRK2, the COR domain would be a positive regulator of GTPase activity, although not absolutely required, and so inhibitory sequences would have to be present outside of the ROC-COR bi-domain. Again, GTPase activity could be regulated either by intramolecular interactions intrinsic to LRRK2 in the context of the dimer, and/or by recruitment of other proteins. We therefore have two enzymatic domains, each of which is at least potentially active. To complicate things further, several groups have noted that binding of non-hydrolysable GTP analogues {e.g. GTP[S] (guanosine 5′-[γ-thio]triphosphate) or p[NH]ppG (guanosine 5′-[β,γ-imido]triphosphate)} stimulates the kinase activity of LRRK1 (Korr et al., 2006) and LRRK2 (Smith et al., 2006; Ito et al., 2007; Li et al., 2007; West et al., 2007). In the currently accepted model, GTP-bound LRRK2 has a higher kinase activity than the GDP-bound protein, and thus GTPase activity would be important to return LRRK2 kinase activity to basal levels. Whether this predicted kinetic outcome of GTP binding and subsequent hydrolysis occurs under physiological conditions has not yet been proved. It should also be noted that the effect of GTP binding is quite modest, increasing kinase activity ~2-fold, and whether there is regulation of GTPase by kinase, for example, is untested. It is possible that there are further intramolecular events that influence GTPase activity, as has been shown for the Dictyostelium ROCO kinase GbpC. There are additional regulatory sequences in LRRK2, as the C-terminal tail is required for full kinase activity (Jaleel et al., 2007), whereas the N-terminus of LRRK2 is inhibitory (Jaleel et al., 2007). Collectively, these data suggest that the ROC-COR-kinase portion of LRRK2 is probably the centrally important regulatory region. What then is the function of the rest of the protein? As discussed above, the various repeat regions appear to be important for protein-protein interactions. Several studies have identified candidate proteins bound to LRRK2. The recessive Parkinsonism protein parkin was reported by Smith et al. (2005) to interact with LRRK2. Dachsel et al. (2007) reported several interactors for full-length LRRK2 expressed in cells using MS approaches. Shin et al. (2008) used yeast two-hybrid screening with the LRR domain to identify Rab5b, a small GTPase involved in vesicle endocytosis. Hsp90 (heat-shock protein 90) also binds to LRRK2, perhaps in association with the co-chaperone Cdc37 (cell division cycle 37), and regulates its stability (Wang, L. et al., 2008). LRRK2 is also reported to interact with α- and β-tubulin, linking it to the cytoskeleton (Gandhi et al., 2008). In most of these cases, binding to interactors was similar for different LRRK2 variants, although mutant LRRK2 has recently been reported to enhance binding to the apoptosis protein FADD (Fas-associated death domain), which then recruits caspase 8 (Ho et al., 2009).
Therefore LRRK2 appears to have a potentially large number of interacting partners, with the caveat that most of these experiments have used overexpressed LRRK2 rather than physiological levels of protein, which appear to be quite low in most cell types. In our hands, cells that appear to express higher levels of endogenous LRRK2, such as transformed lymphoblastoid lines (Melrose et al., 2007), have a high-molecular-mass (>1.2 MDa) complex including LRRK2, supporting its identification using non-denaturing techniques. There is also some evidence that regions outside the ROC-COR domain may contribute to the self-interaction of LRRK2. Two things stand out about this list. First, several interactors may give important clues about LRRK2 function. For example, the Rab5b interaction is consistent with a role for LRRK2 in mediating synaptic endocytosis (Shin et al., 2008). This leads to the more important question of the normal physiological role of LRRK2. As well as synaptic endocytosis, LRRK2 has been proposed to have a role in sorting of vesicles between axons and dendrites (Sakaguchi-Nakashima et al., 2007). These two roles may be consistent with localization of LRRK2 to vesicles in the brain (Biskup et al., 2006), and possibly with localization to lipid rafts (Hatano et al., 2007). LRRK2 expression also influences neurite morphology in vitro and in vivo (MacLeod et al., 2006; Plowey et al., 2008; Wang, L. et al., 2008). Finally, LRRK2 or homologues in other species have been proposed to be important in maintenance of neuronal viability in the presence of oxidative stress (Imai et al., 2008; Liou et al., 2008), although some studies have not identified a role in cell survival (Wang, D. et al., 2008). The second aspect of this list is that no interactors have been identified that bind to the N-terminal region of LRRK2 before the LRR (leucine-rich repeat) domain. This is puzzling, as the very large N-terminal region is the most divergent part of the protein compared with LRRK1, and one might therefore expect that any LRRK2-specific interactors might bind here and be interesting for understanding function. Overall, these data do not answer the question of the normal function of LRRK2, but do give us the impression of a complex modular protein. The central enzymatic ROC-COR-kinase core has regulatory functions, at least within the context of the dimeric protein. Outside this are various domains that may recruit other proteins into a complex, making LRRK2 potentially a scaffolding protein, perhaps for cell signalling pathways. Several recent papers have proposed that LRRK2 or LRRK2 complexes act in ways that are important for neuronal function, although one has to wonder whether this is biased because of an expected role of LRRK2 in neurological disease. Although there is clearly much work needed to resolve the question of the physiological role of LRRK2, this outline should allow us to discuss how the mutations in LRRK2 affect function of the protein.

QUESTION TWO: HOW DO MUTATIONS AFFECT LRRK2 FUNCTION?

Before discussing how mutations affect function, we first have to outline causation as it applies to genetics. For many large genes, such as LRRK2, there are a large number of variants along the 7500 nucleotides of the coding sequence (Paisán-Ruíz et al., 2008). Many of these are probably innocuous, but some appear to be linked to disease, and distinguishing causal from innocuous variants is critical. From a genetic perspective, there are two ways in which a variant can be assigned as pathogenic.
This can be either segregation, where a phenotype is co-inherited with a disease-causing mutation, or association, where, at a population level, carrying a specific variant means an individual is at a higher risk of expressing a given phenotype. A mutation is any variant that is rare (the classic definition is less than 1% frequency in a population), whereas a polymorphism is a more frequent variant. Because there are multiple family members who carry the mutation and develop PD, and in some cases there are several families that may be more remotely related, we can be very confident that five LRRK2 mutations are causal: R1441G/C, Y1699C, G2019S and I2020T. There are other mutations that are less certainly pathogenic. Part of the problem is that PD is a very common disease, with a prevalence of approx. 1% in people over the age of 60, rising to 5% by the age of 80, so finding PD in any given family is not surprising. If the phenotype were extremely rare, such as having beetroot-coloured skin and a lisp, we might be more confident. In many cases, the families are relatively small and we cannot see generation-to-generation transmission of the expected dominant trait, perhaps due to missed diagnosis or incomplete penetrance. Therefore some reported mutations are genuine variants found in patients with PD, but their pathogenicity will remain ambiguous, and we have to rely, sceptically, on indirect evidence. Of the reported variants, perhaps the only one that is very likely to be pathogenic is R1441H (Spanaki et al., 2006), because the two other clearly pathogenic mutations at the same residue argue that this is a mutation hotspot. Others are less certain; for example, I1371V has been found in one case with a self-reported family history of PD, but without clear evidence of segregation such as an affected mutation-carrying parent (Paisán-Ruíz et al., 2005). Then there are a few variants that are frequent enough to be able to assess evidence for association with disease across populations. For example, there is a glycine to arginine substitution in the WD40 domain (G2385R) that is found only in Asian populations, specifically in persons of Han descent. Within these populations, G2385R is much more common in PD cases compared with controls and thus shows association with lifetime risk of PD (Tan, 2006; Farrer et al., 2007; Chan et al., 2008; Lin et al., 2008; Tan et al., 2009). In summary, there are some mutations for which we have strong evidence of segregation in the central enzymatic/regulatory portion of LRRK2, and at least polymorphisms for which we have evidence of association towards the C-terminus. Interestingly, there are very few convincing mutations towards the N-terminus of LRRK2 (Paisán-Ruíz et al., 2008), although the significance of this observation is unclear. Working from the N- to the C-terminus, the first set of convincing pathogenic mutations are those in the ROC domain, R1441C/G and maybe R1441H. In those studies where GTPase activity of LRRK2 could be measured, either R1441C (Guo et al., 2007; Lewis et al., 2007) or R1441G is associated with decreased GTPase activity compared with the wild-type protein. One study proposed an increased GTPase activity, but actually measured GTP binding and saw only small differences in this parameter; in our own hands, R1441C and wild-type bind GTP to the same extent.
Interestingly, the effect of R1441C is less dramatic when placed into the isolated ROC domain (Deng et al., 2008) compared with the relatively strong effect (admittedly on a weaker GTPase activity) in the full-length protein . One read of these data is that Arg 1441 has a key role in interactions with other domains. This is slightly controversial, as two different models have been proposed for where Arg 1441 sits in the structure. Using the recombinant human LRRK2 ROC domain isolated from other regions of the protein, we have proposed that Arg 1441 stabilizes the interface of a ROC-ROC dimer (Deng et al., 2008). In contrast, the structure of a more complete ROC-COR protein from the prokaryote Chlorobium tepidum suggests that the equivalent residue is important in hydrophobic interactions between ROC and COR domains (Gotthardt et al., 2008). Resolution of these two models will require crystallization of larger protein fragments of the human protein, as there are several sequence differences around this region between the two homologues. But what the two models both agree upon is that Arg 1441 plays a small, but probably important, role in the dimer interface and that substitutions at this region decrease GTPase activity for the prokaryotic protein as much as the eukaryotic version (Gotthardt et al., 2008). Although it does not make the genetic evidence any stronger or weaker, it should be noted that, under either model, R1441H would also be defective in mediating the dimer formation, as arginine specifically forms two hydrogen bonds with other residues in the opposite chain, and no other side chain would be able to do this. Furthermore, both models support pathogenicity of the I1371V mutant, as the wild-type residue is in a hydrophobic pocket again near the dimer interface. Mutations in the COR domain itself have been less well studied, probably because the assays to do this are less obvious than for a GTPase homology domain. However, again working from a prokaryotic homologue, Gotthardt et al. (2008) have shown that the Y1699C equivalent (Y804C) also decreases GTPase activity. As for the ROC mutations, the very probable mechanism is that the substitution for the aromatic residue disrupts a key element of the dimer interface, in this case between the ROC and COR domains. Although an equivalent experiment in human protein has not yet been published, it is known that the ROC and COR domains of LRRK2 interact physically (Deng et al., 2008), making the prediction that Y1699C would have lower GTPase activity owing to a lower stability dimer reasonable. Therefore there is generally good agreement that ROC mutations lower GTPase activity, with a nagging uncertainty about the actual strength of activity, and a reasonable prediction for COR mutations. Where the real controversy starts is with the kinase domain. All studies published to date have agreed that the effect of the G2019S kinase mutation is to significantly increase phosphorylation activity in a variety of assays Greggio et al., 2006;MacLeod et al., 2006;Smith et al., 2006;Guo et al., 2007;Jaleel et al., 2007;Luzon-Toro et al., 2007;West et al., 2007;Covy and Giasson, 2008;Imai et al., 2008;Anand et al., 2009). Data on I2020T are more ambiguous, with some studies reporting small, but significant, increases in activity (Gloeckner et al., 2006;West et al., 2007;Imai et al., 2008), whereas others report no effect (Luzon-Toro et al., 2007;Anand et al., 2009) or even a slight decrease (Jaleel et al., 2007). 
Similar uncertainty exists for the ROC and COR mutations, with some studies reporting that all mutations increase activity up to 2.5-fold (Smith et al., 2006; West et al., 2007), whereas others suggest that mutations of Arg 1441 and Tyr 1699 have only minor effects (Greggio et al., 2006; MacLeod et al., 2006; Greggio et al., 2007; Jaleel et al., 2007; Anand et al., 2009) and that similar mutations in LRRK1 slightly decrease activity (Korr et al., 2006). These data are summarized in Figure 2, which we took from the original references that reported quantitative effects of mutations relative to wild-type LRRK2. These studies used several different assays with a variety of constructs, from full-length through several N-terminally truncated versions to the isolated kinase domain alone. The picture that emerges is similar to the descriptive arguments above: only G2019S consistently increases kinase activity, whereas other mutations have inconsistent effects and generally only modestly influence activity if there is a difference. No obvious pattern emerges when considering different substrates, as the data from different measures overlap (Figure 2). Perhaps the place to start resolving some of these apparently contradictory data is with the one change that everyone agrees activates LRRK2, the common G2019S mutation. How might this mutation work mechanistically and/or structurally? Gly 2019 is part of a very highly conserved motif, D(F/Y)G, where the aspartate residue (Asp 2017 in human LRRK2) chelates a Mg 2+ ion that is required for cleavage of the γ-phosphate from ATP and thus for kinase activity. The glycine residue (Gly 2019 in LRRK2) is absolutely invariant apart from a few rare examples, which happen to be serine residues (Jaleel et al., 2007). This residue marks the start of a conformationally flexible region, the activation loop, which is important for the control of kinase activity. For many kinases, phosphorylation of this loop shifts its orientation relative to the two lobes of the enzyme and thus allows or restricts substrate access. The glycine residue is probably invariantly conserved because the small side chain of this amino acid allows maximum flexibility and thus motion of the activation loop. We can speculate that a serine residue, with a negatively charged hydroxy-containing side chain and less conformational flexibility, might 'lock' the kinase in a more active conformation. Support for this idea comes from large-scale sequencing of somatic mutations in cancer, where several equivalent glycine-to-serine changes were found in kinases where increased activity is thought to be the mechanism by which they are associated with excess cell growth (Greenman et al., 2007). Also, substitution of alanine, which also has a smaller side chain relative to serine, for this glycine residue restores autophosphorylation to wild-type levels (Luzon-Toro et al., 2007). One might also imagine that a threonine residue at the adjacent amino acid within the activation loop would have a similar effect, although this would not explain why estimates of the effect of the I2020T mutation are more variable compared with G2019S.
Figure 2 (caption): Effects of LRRK2 mutations on kinase activity. For this Figure, we took reported effects of LRRK2 mutations on kinase activity and expressed each relative to the wild-type LRRK2 reported in the same study; the broken line across the graph marks the wild-type level. Each study is given by first author and year, and the different symbols are colour-coded by substrate used in the assay: black, MBP; red, autophosphorylation; blue, LRRKtide peptide; purple, 4E-BP. Of all the mutations tested, the one that consistently shows an increased activity is G2019S in the kinase domain; all of the others vary between experiments, and there is no clear pattern that relates to substrate used.
Mutations outside of the kinase domain are harder to understand based on the above data. If the current model that GTP binding to the ROC domain increases kinase activity is correct, then decreased GTPase activity would mean that the stimulatory effect of GTP binding would last longer for ROC mutants, because the turnover of GTP to GDP would be slowed. However, in the absence of GTP, which is how most of the above kinase assays were performed, there should be no difference in activity, and it seems likely a priori that non-hydrolysable analogues would result in similar effects irrespective of GTPase activity. Therefore the reasons why kinase-activity measurements for mutations outside of the kinase domain vary between laboratories are unclear. Perhaps there are small differences in the assay conditions that have a large impact on the results, or perhaps our model of regulation of kinase activity by GTP binding is flawed, but the most likely interpretation is that the current assays need refining. These issues are important to resolve, as, without understanding how mutations work, it is hard to develop clear ideas about mechanisms of neurodegeneration. QUESTION THREE: WHY DO MUTATIONS IN LRRK2 CAUSE NEURODEGENERATION? So far, we have established that LRRK2 is an active enzyme, at least in vitro and ex vivo, and that mutations either lower GTPase activity or raise kinase activity, and we believe that these two concepts are linked. However, none of this explains why it is that LRRK2 mutations lead to neurodegeneration, which is, in fact, a series of questions that are interlinked. Clues to how LRRK2 might lead to neuronal death come from where we started, from human genetics. It is worth restating that the mode of inheritance is dominant with incomplete penetrance and that homozygous cases have the same phenotype as heterozygous cases. There are two likely ways in which the LRRK2 protein could cause neuronal damage. The mutations could result in a toxic gain-of-function, which could be either misregulation of its normal function or acquisition of a novel toxic function. However, it is also possible that mutations are a loss of normal function: they might, for example, interfere with the wild-type LRRK2 activity and act as a dominant-negative. The tools to separate these possibilities are initially likely to be based around experimental models. Several laboratories have reported that high levels of overexpression of mutant LRRK2 in primary cultured neurons or SH-SY5Y cells can lead to cell death over a few days (Greggio et al., 2006; MacLeod et al., 2006; Smith et al., 2006; Iaccarino et al., 2007; West et al., 2007; Ho et al., 2009). Under similar conditions, and at similar levels of expression, wild-type LRRK2 has only minor effects on basal cell viability, although in one study, treating cells transfected with wild-type LRRK2 with hydrogen peroxide resulted in dramatic cell death. Overall, the consistent message is that mutant LRRK2 can cause cell death, at least in the context of cell culture models.
Also consistent between studies is the observation that neurites are shorter after expression of mutant LRRK2 (MacLeod et al., 2006;Plowey et al., 2008;Wang, L. et al., 2008). Whether this is related to toxicity or not is a little unclear, but knockdown of LRRK2 causes a reciprocal increase in neurite length and is not reported to result in cell death (MacLeod et al., 2006). The mode of cell death related to overexpression of mutant LRRK2 is reported to be apoptotic, although evidence is mixed on whether this is a caspase 3 (Iaccarino et al., 2007) or caspase 8 (Ho et al., 2009) -dependent pathway. Some evidence of TUNEL (terminal deoxynucleotidyl transferase-mediated dUTP nick-end labelling) staining has been reported , although this could be apoptosis or necrosis as DNA strand breaks can be labelled by this technique in either mode of cell death. Finally, in two models, there was evidence of autophagic degradation of organelles, which might indicate a mixed mode of cell death (MacLeod et al., 2006;Plowey et al., 2008). What is interesting here, in the light of the discussion about kinase activity above, is that all mutations are equally toxic. Not only is the amount of cell death similar for all mutations, but also estimates of cell death are also remarkably similar across different models in different laboratories. This leads logically to the question of whether kinase activity is actually related to toxicity. We and others have reported that pathogenic LRRK2 mutants that also were engineered to be kinase-dead are much less toxic than kinase-active versions (Greggio et al., 2006;Smith et al., 2006). This suggests that kinase activity makes a substantial contribution to cell death, at least in these cellular models. This result would be simple to understand if all mutations lead to increased kinase activity, but requires some discussion if the effects of mutations on activity are variable. There are several reasons that there might be an apparent dissociation between the two measures. First, the kinase assays reported to date may not capture all aspects of the function of the enzyme. If, for example, there were a specific substrate for the kinase activity of LRRK2 that mediates its toxic effects, then measuring autophosphorylation may not capture this. Perhaps more likely is the second possibility, that some mutations work by regulating overall LRRK2 activity. Taking the current model that GTP binding stimulates kinase activity and the GTPase activity returns LRRK2 to basal levels, a mutation such as R1441C that lowers GTPase activity would only be revealed when LRRK2 is being regulated. Perhaps consistent with this idea, Smith et al. (2006) were able to show that introducing a K1347A mutation, which can bind neither GDP nor GTP and has lower kinase activity, minimizes the toxic effects of G2019S LRRK2. Measuring static kinase activity in vitro would miss this, but kinase activity might still be required for toxicity. It should be noted that the hypothesis could be reversed (kinase regulates GTPase) and the data would still be consistent, but only if we thought that kinase activity down-regulates GTPase activity. Another suggestion, stated explicitly by Ho et al. (2009), is that mutations might work in different ways to change a critical interactor that is not necessarily a substrate. In their experiments, mutations outside the kinase domain and I2020T increased binding to FADD, but G2019S does not (Ho et al., 2009). 
However, FADD interaction can be blocked by a kinase-inactivating mutation, suggesting that an enhanced LRRK2-FADD interaction can be achieved either by stronger physical interaction or by enhancing kinase activity (Ho et al., 2009). In this view, GTPase activity may not be especially crucial, or it may be that GTP influences binding to FADD. But there is also the possibility that kinase activity is really not that important for the toxic effects of LRRK2 mutations. Bear in mind that all of the above experiments rely on brief overexpression of very high amounts of LRRK2 in cultured cells, outside their native environment and potentially exposed to additional stressors such as reactive oxygen species, which can enhance LRRK2 toxicity. An additional complication comes from the fact that some of the hypothesis-testing mutations may alter LRRK2 stability. For example, the GTP/GDP-binding-null K1347A mutation used to test the requirement for GTP-dependent activation (Smith et al., 2006) dramatically destabilizes LRRK2 protein, at least in our experiments, and is thus a little more difficult to interpret if toxicity is concentration-dependent. For all of these reasons, the proposal that kinase activity (or any other aspect of LRRK2 biology) is important in toxicity should be considered as only a provisional hypothesis until it can be tested rigorously in an intact brain. The first step to doing this will probably be the development of animal models, a few of which have been described recently. Loss of dopamine cells is seen in transgenic Drosophila expressing G2019S human LRRK2. Similar phenomena have been reported using dLRRK if the equivalent mutations to Y1699C or I2020T are introduced (Imai et al., 2008). Whether neuronal loss occurs in transgenic mice is currently unclear, as only one BAC (bacterial artificial chromosome)-transfected mouse has been reported and the phenotype of the mouse was not discussed in that study. Neuronal loss was reported in rats where a fragment of LRRK2 including the kinase domain was expressed transiently in the rat cortex using viral vectors (MacLeod et al., 2006). Clearly, in vivo models such as these will need to be developed further before we can adequately assess whether LRRK2 kinase activity is genuinely important in mediating neuronal cell death. The discussion of animal models highlights a question discussed briefly above, that of whether the mutations work as a gain of (potentially novel) toxic function or as a dominant-negative. One way to resolve this would be to compare the phenotypes seen in knockout animals with those resulting from overexpression. Here the data are mixed. In Drosophila, although two groups found that high-level overexpression of mutant LRRK2/LRRK causes cell loss (Lee et al., 2007; Liu et al., 2008), another did not (Lee et al., 2007). There are two published studies using different knockout alleles reporting that LRRK is (Lee et al., 2007) or is not (Wang, D. et al., 2008) required for dopamine neuron survival in the same organism. Until these data are resolved, the loss-of-function versus gain-of-function argument cannot be definitively answered. The detailed phenotype of LRRK2-knockout mice has not been reported, although brains of such animals have been used as controls for antibody-based techniques, so presumably they are viable. Finally, it is worth discussing what the human pathology may tell us about mechanisms involved in LRRK2-mediated neurodegeneration.
A more detailed tally of the various pathologies found in different LRRK2 cases has been published elsewhere, but it will suffice to say here that most cases are of Lewy-body-positive Parkinsonism, as discussed above. Because we know that α-synuclein, one of the major proteins found in Lewy bodies, is also the product of a gene that causes PD when mutated (Polymeropoulos et al., 1997; Kruger et al., 1998; Zarranz et al., 2004) or when its expression is increased without any sequence variants (Singleton et al., 2003; Chartier-Harlin et al., 2004), α-synuclein fulfils the requirements for a toxic agent in PD (Cookson, 2005). By extension, if most cases of LRRK2 Parkinsonism have Lewy bodies, it is possible that α-synuclein is a mediator of the toxic damage caused by mutant LRRK2. That some cases with LRRK2 mutations do not have Lewy bodies complicates the argument, but does not invalidate it if we accept the idea that the deposition of proteins into inclusion bodies is not a necessary part of the toxicity of aggregating proteins (Cookson, 2005). LRRK2 cases can also have inclusions of another potentially toxic protein, tau. If LRRK2 mutations can express themselves as different pathologies, a logical inference is that LRRK2 is 'upstream' in the neurodegenerative process, which can progress either via α-synuclein or tau. If this were correct, then LRRK2 would be predicted to be an accelerant of α-synuclein toxicity. How LRRK2 could influence α-synuclein is unclear as, although α-synuclein is phosphorylated, we have not been able to demonstrate any direct phosphorylation with active LRRK2 (D.W. Miller, E. Greggio and M.R. Cookson, unpublished data). However, the idea of a relationship between the two dominant genes for PD should at least be testable as animal models are developed. Much has been learned about the protein itself: with some caveats, it seems likely that the protein is active as both a GTPase and a kinase, and that these two domains have some regulatory function. Progress is being made on understanding interactions with other proteins and on possible physiological roles of LRRK2. How mutations work is still a little unclear, both from the viewpoint of whether all mutations increase kinase activity and how mutant proteins trigger toxicity. The next clear challenges are to identify the cellular function of endogenous LRRK2 and to develop robust animal models in which to test ideas about pathogenesis, including whether kinase or other activities really are critical for toxicity and what the relationship is to α-synuclein, another key protein in PD pathogenesis. The final thing to be said is that the reason for doing this is to find ways to prevent the neuronal damage in PD and eventually to develop new therapeutic modalities.
2014-10-01T00:00:00.000Z
2009-01-01T00:00:00.000
{ "year": 2009, "sha1": "6cf4e692a35978678a19981b023996e95bb492e4", "oa_license": "CCBYNC", "oa_url": "http://journals.sagepub.com/doi/pdf/10.1042/AN20090007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6cf4e692a35978678a19981b023996e95bb492e4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256047279
pes2o/s2orc
v3-fos-license
Tunneling decay of false vortices with gravitation We study the effect of vortices on the tunneling decay of a symmetry-breaking false vacuum in three spacetime dimensions with gravity. The scenario considered is one in which the initial state, rather than being the homogeneous false vacuum, contains false vortices. The question addressed is whether, and, if so, under which circumstances, the presence of vortices has a significant catalyzing effect on vacuum decay. After studying the existence and properties of vortices, we study their decay rate through quantum tunneling using a variety of techniques. In particular, for so-called thin-wall vortices we devise a one-parameter family of configurations allowing a quantum-mechanical calculation of tunneling. Also for thin-wall vortices, we employ the Israel junction conditions between the interior and exterior spacetimes. Matching these two spacetimes reveals a decay channel which results in an unstable, expanding vortex. We find that the tunneling exponent for vortices, which is the dominant factor in the decay rate, is half that for Coleman-de Luccia bubbles. This implies that vortices are short-lived, making them cosmologically significant even for low vortex densities. In the limit of the vanishing gravitational constant we smoothly recover our earlier results for the decay of the false vortex in a model without gravity. Introduction The Abelian Higgs model in three spacetime dimensions has soliton solutions known as Nielsen-Olesen vortices [1]. These objects have a localized magnetic flux proportional to their winding number; in four dimensions they correspond to magnetic flux tubes in type-II superconductors [2], while in the cosmological context they could appear as cosmic strings [3,4], possibly playing a role in structure formation or having other observable effects [5,6]. In the minimal model (with symmetry-breaking φ 4 potential), vortices are built out of the true vacuum and are stable both classically and quantum mechanically. (Their classical stability is easy to see: the potential and scalar field gradient energies favor collapse, while the magnetic field energy favors expansion. The stable configuration is a compromise between these two antagonistic effects.) In this paper, we wish to study vortices in non-minimal models. First, we consider a different potential where in addition to the symmetry-breaking vacuum there is a lowerenergy symmetric vacuum; these are thus false and true vacua, respectively. Second, we include gravity. Depending on the details of the potential and the strength of the gravitational coupling, vortices can be rendered classically unstable. If classically stable, they will be metastable quantum-mechanically. Their lifetime can be quite short; if so, they JHEP11(2017)028 could play an important role wherever they appear (phase transitions in the early universe, for instance). In earlier work [7], a metastable analog of the vortex dubbed a false vortex was studied. It owes its name to the vacuum structure permitting its formation. Namely, the false vortex is a topologically non-trivial solution built from a false vacuum which corresponds to the spontaneously broken sector of a modified abelian Higgs model. The presence of a symmetry-restoring true vacuum for scalar field φ = 0 explains the vortex metastability: this lower-energy phase of the scalar field is contained within the vortex core and spoils protection from expansion. 
For this reason, the vortex can lead to interesting consequences for the cosmological history of this model. We extend this work by considering the effect of gravity on the tunneling decay of false vortices in three spacetime dimensions. First of all, we must specify the zero of the potential energy, which becomes important in the presence of gravity. We choose the energy density in the symmetry-restoring true vacuum to be negative, while that in the false vacuum vanishes. This implies a modification of the spacetime. The exterior spacetime with a vortex confined within a finite radius is locally Minkowski with a conical defect [8]. The conical defect is the analog of the Schwarzschild mass parameter that is familiar in the 3 + 1 dimensional context. Inside the vortex, the situation is more complex with a varying magnetic field, a negative energy density and the scalar field's gradient energy. Nevertheless, an analytical understanding is possible in the limit of a large topological winding number, which implies a large magnetic flux inside the vortex. In this case, the scalar field makes a sharp transition from the core where φ = 0 to its asymptotic value; there is thus a thin wall separating the two vacua. Were it not for the magnetic flux, the interior of the vortex would then be exactly anti-de Sitter (AdS) spacetime. Instead, the exact solution of the 2 + 1 dimensional Einstein-Maxwell equations with cylindrically symmetric, covariantly constant magnetic field in asymptotically AdS spacetime corresponds to the solution studied by Hirschmann and Welch [9]. The case of thin-wall vortices is reminiscent of the thin-wall limit of false vacuum decay treated in the seminal papers of Coleman and collaborators [10][11][12] and as in those works the thin-wall nature of the vortex allows for an approximate collective-coordinate-like treatment of vortex tunneling. We adopt a circular disc of Hirschmann-Welch spacetime [9] separated by a thin circular transition region (the wall), from Minkowski spacetime with a conical defect on the outside. To understand if the vortex can expand dynamically after tunneling and to understand the tunnelling process itself, we construct the Israel junction conditions [13]. The paper is organized as follows: in section 2 we set up the basic framework with the equations of motion and appropriate boundary conditions. We also present the numerical solutions of the vortices coupled to Einstein gravity. In section 3, we specialize our study to thin-wall vortices, including constructing the junction conditions. In section 4, we compute tunneling rates for thin-wall vacuum decay and vortex disintegration. In section 5, we summarize our results and discuss possible future applications. Vortex solutions In this section, we derive the equations of motion for the gauge, scalar field, and metric functions and impose the appropriate boundary conditions. We present the numerical solutions for both thick-and thin-wall vortices. By varying the difference between the false and true vacuum states, we investigate how the existence of the solution can be affected by the strength of the gravitational constant. Setup and equations of motion We consider the action for Einstein gravity coupled to gauge and complex scalar fields: where g = det g µν , κ ≡ 8πG, R denotes the curvature scalar of the spacetime M, and h is the determinant of the first fundamental form. K is the trace of the second fundamental form of the boundary ∂M [14,15]. We adopt the sign conventions in ref. [16]. 
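The displayed action did not survive extraction. A schematic reconstruction, consistent with the quantities defined immediately after it (κ ≡ 8πG, the curvature scalar R, the boundary determinant h and extrinsic-curvature trace K), would be the Einstein-Hilbert term plus a Gibbons-Hawking-York boundary term coupled to the abelian Higgs sector; this is only a plausible form, and in particular the normalizations of the scalar and Maxwell kinetic terms written here are assumptions (the paper notes later that its own field-strength convention introduces factors of 4π):

S \simeq \int_{\mathcal{M}} d^3x\,\sqrt{-g}\left[\frac{R}{2\kappa} - D_\mu\phi\,(D^\mu\phi)^{*} - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - U(\phi)\right] + \frac{1}{\kappa}\int_{\partial\mathcal{M}} d^2x\,\sqrt{h}\,K .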
The field strength tensor is F µν = ∇ µ A ν − ∇ ν A µ , in which A µ is the electromagnetic potential. The covariant derivative of a complex field is given by D µ φ = (∇ µ + ieA µ )φ, where e is the coupling constant between gauge and complex scalar fields. The potential, shown in figure 1, is given by where λ is the self-coupling constant of the scalar field. The value of ǫ determines the shape of the potential [7,17]. The curvature scalar has mass dimension 2, 1/κ has mass dimension 1, the fields A µ and φ, the charge all have mass dimension 1/2, while the constants λ and ǫ are dimensionless parameters. We are interested in the case where the false vacuum state is located at φ = v and the true vacuum state is located at φ = 0, therefore 0 < ǫ < 1. The geometry for the true vacuum state corresponds to AdS spacetime with an effective cosmological constant Λ eff = κU(0) = −κλǫv 6 . In the remainder of this paper, computations will involve the absolute value of the cosmological constant, Λ = |Λ eff | = κλǫv 6 . By varying the action, we obtain the Einstein equation: 3) JHEP11(2017)028 where the energy-momentum tensor has the form The gauge field and scalar field equations are, respectively, We take the metric ansatz as to be where A(t, r), B(t, r), and D(t, r) are unknown functions representing a rotationally invariant solution. We now rewrite the equations in terms of dimensionless variables In what follows, we use these dimensionless variables, suppressing the tildes for notational simplicity. We look for solutions for φ and A µ in the coordinates (r, θ). The field ansatz is φ(t, r, θ) = f (t, r)e inθ , A µ (t, r, θ) = 0, 0, n(a(t, r) − 1) e , (2.9) where n is an integer, the winding number. Using this ansatz, different terms appearing in the action can be reduced as follows: where the dot and the prime denote differentiation with respect to t and r, respectively. JHEP11(2017)028 Using these results, the equations of motion are written out as a function of f, a, A, B, D fields. First, the (tt), (tr), (rr), and (θθ) components of the Einstein equations are As for the matter, the scalar field equation has the form while that of the gauge field is We wish to find the static configuration of the vortex with gravity. Even in this case, since exact analytic solutions have not been found even without gravity, we solve the equations numerically. Because the metric functions only depend on r, we are free to choose a gauge in which B(r) = 1 everywhere. This was the approach used in refs. [18,20]. We simultaneously solve the coupled Einstein, gauge, and scalar field equations with the following boundary conditions: The first conditions arise from the requirement that the solution be nonsingular at the origin and the second conditions are required for a solution of finite energy. In the absence of gravitation, the behavior for small r can be analyzed by linearizing the matter field equations, so that f (r) ∼ r n and a(r) ∼ r 2 as r → 0. For large r, we write f (r) = 1 − ξ(r) and linearize in ξ(r) and a(r), resulting in modified Bessel equations [21]. As for the field A(r) = √ g tt , multiple boundary conditions are possible with simple rescaling of time since the metric is time independent. A(∞) is fixed to 1 so that time is properly normalized for asymptotic observers. In any case, this is simply a matter of time rescaling, as the metric is time-independent. Numerical solutions We numerically solve the coupled equations of the gauge, scalar field, and gravity simultaneously. 
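Since the displayed field equations (2.12)-(2.17) did not survive extraction, the sketch below is meant only to illustrate the type of boundary-value problem described above, in the flat-spacetime (κ = 0) limit. The equations written in it are assumptions rather than the paper's: they use the sextic potential U(f) = (f² − ǫ)(f² − 1)² of refs. [7,17] (consistent with U(0) = −ǫ implied by Λ = κǫ later in the text), an assumed dimensionless gauge coupling alpha, and a finite cutoff r_max standing in for infinity, with the stated boundary conditions f(0) = a(0) = 0 and f(∞) = a(∞) = 1.

import numpy as np
from scipy.integrate import solve_bvp

# Assumed dimensionless parameters (illustrative, not taken from the paper)
n, eps, alpha = 1, 0.005, 1.0     # winding number, vacuum splitting, assumed gauge coupling
r_max = 20.0                      # large-r cutoff standing in for infinity

def dU_df(f):
    # U(f) = (f^2 - eps)(f^2 - 1)^2  =>  dU/df = 2 f (f^2 - 1)(3 f^2 - 1 - 2 eps)
    return 2.0 * f * (f**2 - 1.0) * (3.0 * f**2 - 1.0 - 2.0 * eps)

def rhs(r, y):
    # y = [f, f', a, a']; assumed flat-space profile equations:
    #   f'' + f'/r = n^2 a^2 f / r^2 + (1/2) dU/df
    #   a'' - a'/r = 2 alpha f^2 a
    f, fp, a, ap = y
    return np.vstack([fp,
                      -fp / r + n**2 * a**2 * f / r**2 + 0.5 * dU_df(f),
                      ap,
                      ap / r + 2.0 * alpha * f**2 * a])

def bc(y_left, y_right):
    # f(0) = 0, a(0) = 0, f(r_max) = 1, a(r_max) = 1
    return np.array([y_left[0], y_left[2], y_right[0] - 1.0, y_right[2] - 1.0])

r = np.linspace(1e-3, r_max, 400)                 # start slightly off r = 0 to avoid the coordinate singularity
guess = np.vstack([np.tanh(r), 1.0 / np.cosh(r)**2,
                   1.0 - np.exp(-r**2), 2.0 * r * np.exp(-r**2)])
sol = solve_bvp(rhs, bc, r, guess, max_nodes=20000)
print(sol.status, sol.message)
print("f(r_max), a(r_max):", sol.sol(r_max)[0], sol.sol(r_max)[2])

The small-r behaviour quoted in the text (f ~ r^n, a ~ r²) can be read off from the converged profile near the origin, which is a useful sanity check on whatever normalization is actually used.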
For the static configuration we solve (2.12)-(2.17), the terms without time derivatives, with the boundary conditions (2.18) and (2.19). Numerical solutions for f (r), a(r), A(r), B(r), and D(r) are shown in figures 2 and 3. The profile of f (r) is used to categorize two general types of vortex solutions. For small value of n, near 1, solutions are dubbed "thick": the transition from true vacuum to false vacuum shown by f (r) happens on a relatively large scale. Such solutions are shown for n = 1 in figure 2. On the other hand, for n ≫ 1 the solutions transit rapidly giving the thin-wall behavior described earlier, as can be seen from figure 3 for n = 50. In both cases, solutions obtained with different values of κ are plotted. The vortex profile for κ = 0 is in accordance with the previous study [7]. As gravity is added, matter fields do not change much compared to metric fields. For a conical spacetime, it is expected that D(r) = √ g θθ = (1 − 4Gµ) r where µ is the energy of the localized source [19]. This behavior is indeed observed: outside the vortex, D(r) is linear, with a slope which decreases as κ increases. As for A(r), it departs further from 1 at the origin for increasing values of κ. The actual behavior of A(r), getting smaller or bigger compared to its flat spacetime value of 1, should depend on the scalar field/gauge field mass ratio, as was observed in [20]. We also noted that there was a maximal value κ beyond which we could not find solutions anymore. In the previous paper [7], it was shown that the difference between the false and true vacuum state, ǫ, is crucial for the metastability of the vortices. They become unstable above a certain critical value ǫ c . This behavior defines a region in parameter space in which vortex solutions cannot be found. Figure 4 shows a scan of the parameter space for n = 1 and different values of κ. As κ gets bigger, the allowed region for the formation of metastable vortices (ǫ < ǫ c ) is also expanded. The equivalent behavior for thin-wall solutions is studied in detail in the next section, in which we formulate proper junction conditions at the surface of the vortex. Junction conditions for the thin-wall vortex The existence of static vortex solutions with gravity has been established. We now investigate more closely the metastability of thin-wall vortices. For this purpose, we employ the Israel junction conditions [13] to understand if the vortex can tunnel through a certain potential barrier and expand dynamically. We thus consider a thin wall partitioning bulk spacetime into two distinct three-dimensional manifolds, M + and M − , with boundaries Σ + and Σ − for the inside (−) and outside (+) of the vortex, respectively. To obtain the JHEP11(2017)028 single glued manifold M = M + M − , we demand that the boundaries are identified as follows: where the thin-wall boundary Σ is a timelike hypersurface with unit normal n µ . The bulk spacetime geometry, as eq. (2.7), is described by the metric The energy-momentum tensor T µν has a singular component on the wall where S µν (x i , η =η) is the surface stress-energy tensor of the wall where δ ≪η. The extrinsic curvature has only two components, k θ θ and k τ τ . The form of the stress-energy tensor on the wall is obtained using the covariant conservation. We introduce the Gaussian normal coordinate system near the wall where g τ τ = −1 andR(τ,η) = R(τ ). It must agree with the coordinate R on both sides of the junction. 
In this coordinate system, the induced metric on the hypersurface is JHEP11(2017)028 where τ is the proper time measured on the wall and R(τ ) is the proper radius of Σ. Given the metric defined in eq. (3.2), the following relation is satisfied where · denotes the differentiation with respect to τ in this section. The induced metric of the hypersurface is given by where the tangent vectors are The three-velocity of any point on the wall is which satisfies the relation u α u α = −1. The normal vectors are where we take the factor B −1 A to normalize the vectors, so that n α n α = 1. The extrinsic curvature then becomes where Γ θ θr = D ′ D . The relevant junction condition is given by where σ is the surface tension on the wall. Only the scalar field contributes to the tension. We ignore the contribution from the negligible magnetic flux on the wall. The junction equation We take the outside geometry to be flat Minkowski spacetime minus a wedge described by a deficit angle parameter ∆: (3.14) In the vortex core, we employ the geometry as the magnetic solution in AdS spacetime [9] JHEP11(2017)028 Here, Q m is related to the amplitude of the magnetic field flux. The peculiar notation is due to its association to a magnetic charge in [9]; this is somewhat misleading, as the magnetic field sourced by Q m is not oriented radially. Nevertheless, we maintain this notation for consistency. Λ is the absolute value of the cosmological constant, Λ = |Λ eff | = κǫ. The metric presented here is a bit different from the original formulation of this spacetime in [9]: factors of L in (3.15) appear after a rescaling of the variables that ensures that lim r→0 g θθ = r 2 , therefore avoiding the conical singularity at the origin. This geometry corresponds to a one-parameter family of solutions with a magnetic flux in three-dimensional AdS spacetime. For Q m = 0, the metric reduces to AdS spacetime. The magnetic field measured in an orthonormal basis is given by B = Q m Λ/ (1 + LΛr 2 ). The field is maximal at the origin and decreases monotonically until the boundary. We change the metric into the following since the two geometries do not have the same circumferential radius. After getting the equation with the effective potential, we will return to the original coordinate system. The outside geometry takes the form 17) and the inside geometry takes the form Then we can make the junction condition determining the motion of the thin wall which is located at position r = R (orR =r| r=R ) After squaring twice, we obtain where the effective potential takes the form We return to the original coordinate system by using the relation (3.22) The reason why L is expressed as a function of R shall be explained shortly. First, eq. (3.20) can then expressed in terms of R where the effective potential turns out to be Now, L(r) defined in (3.16) depends on Q m which in turn is related to R. The magnetic flux Φ is a constant because of its topological origin; as the vortex radius may change by quantum fluctuations, the following relation between Q m and R holds: This transcendental equation cannot be solved exactly for Q m . Making an expansion in κ, the first-order quantity is Q m ≃ Φ/πΛR 2 . Q m should then be expressed as a function of Q m (Φ, ǫ, κ, R) to replace L(R) in eq. (3.24). Figure 5a shows the shape of the effective potential with parameters Φ = 100, ǫ = 0.005 and κ = 0 (as for ∆ and σ, we show in the next section that they can be determined from the other parameters). 
The main feature of this potential is the presence of an energy-vanishing minimum, which is located at R 0 in figure 5a. According to eq. (3.23), this corresponds to a classically stable vortex. An energy barrier prevents the vortex from classically expanding to the escape radius R 1 from which it would explode; the vortex is quantum mechanically metastable. This feature of the system was observed in flat spacetime in [7] using different methods. We study how this effective potential is affected by gravity. Figure 5b shows how the shape of the potential changes as the gravitational coupling constant is increased. Values of κ are presented as a fraction of κ c = ǫ/σ 2 , a critical value of κ for which the vortex becomes completely stable. In the present case, we observe that the tunneling barrier of the effective JHEP11(2017)028 potential gets bigger as κ is increased towards κ c . The opposite behaviour is seen for κ > κ c ; the barrier becomes weaker as κ continues to increase. Eventually the barrier vanishes and no more thin-wall vortices can be formed. This agrees nicely with the absence of numerical classical solutions noted in section 2.2. We note that the vortex can also shrink through quantum fluctuations and collapse. However, using the effective potential to compute the probability of collapsing would be a stretch since the thin-wall approximation is only valid for large values of R. We concentrate on the expansion metastability as we wish to compare this process to vacuum decay. The conical defect As mentioned above, the vortex creates a conical defect (mass defect) in the angular coordinate. We examine the deficit angle in terms of the energy of the vortex configuration. The effective potential (3.24) contains five parameters. Three of them, Φ, ǫ and κ, are inputs. Parameters ∆, the deficit angle parameter, and σ, the surface energy density on the wall, are determined by the first three parameters. We will show the relation between them in the remainder of this section. Dependencies on R are not shown explicitly to improve readability. We first find ∆ using eq. (3.23) in the same way we found V eff Expanding everything to first order in κ, we find (3.28) Using this, ∆ can be approximated by where µ is the energy of the vortex configuration. As the conical defect should be conserved throughout the tunneling process due to energy conservation, we can simply evaluate it at the static vortex. Moreover, this energy of the static vortex should be minimized: Neglecting terms of order ǫ, one finds JHEP11(2017)028 In the non-relativistic limitṘ ≪ 1, the energy of the vortex is where the quantized magnetic flux is given by Φ 2 /4π = 4π 2 n 2 /e 2 . The factor 4π in the denominator is due to the convention of the field strength tensor in the action. This is almost the energy obtained in flat spacetime in [7] except for a missing kinetic term in the electromagnetic contribution. The reason for this discrepancy is that the interior metric written in (3.15) was intended to be static. As we let spacetime change with varying radius, the functional form should also be changed. As we will see later, the electromagnetic contribution plays a very small role in tunneling considerations, so that this issue is unimportant. The surface tension We now examine the energy density σ on the surface of the wall. We ignore the contribution from the negligible magnetic flux on the wall. In flat spacetime, a static wall simply has energy density where h is the induced metric on the surface of the wall. 
For a thin wall δ ≪ R so that the integration is on a relatively small scale. Given this, the Jacobian is approximately constant, √ h ≈ R. Furthermore, a large wall means the equation of motion for the scalar field is approximately where U FV corresponds to zero in this paper. Hence, σ reduces to As the wall expands, there is also a kinetic energy term. Furthermore we must account for a curved spacetime. These two complications are easily treated by working in the Gaussian coordinate frame defined by (3.6). The contribution of the scalar field to the wall Euclidean action will be Decay rate If the scalar field is in a metastable vacuum state, the tunneling process from the false vacuum state to the true vacuum state can occur via the potential barrier penetration, JHEP11(2017)028 which is the nucleation process of a vacuum bubble. In the semiclassical approximation, the decay rate of the metastable vacuum state per unit time per unit volume is given by In presence of a vortex, a similar process exists. As can be deduced in the collective coordinate approximation suggested in figure 5a, quantum fluctuations of the vortex radius can also lead to a phase transition. In this case, the relevant instanton describes the expansion of the wall. To understand the cosmological relevance of the vortex, we will compare the false vacuum lifetime in presence and absence of a vortex. We first proceed to compute the Euclidean action of the relevant instantons in both cases. Ordinary false vacuum decay We imagine the Universe is in a false vacuum state, say φ = 1 for definiteness. Ordinary false vacuum decay occurs when a critical true vacuum bubble nucleates and triggers a phase transition. This bubble is the non-trivial extremal path in configuration space which minimizes the Euclidean action, the so-called bounce. The relevant model is a complex scalar field minimally coupled to gravity, whose Euclidean action is given by We assume the bounce solution has O(3) symmetry (as is the case in flat spacetime [22]), which means φ = φ B (ρ(ξ)) where ξ = √ τ 2 + x 2 is the Euclidean radial coordinate, ρ(ξ) is the physical radius. We note that this assumption also applies to the trivial solution φ = φ FV . Also, for simplicity, we assume a real scalar field. With these assumptions, the equation of motion for the scalar field are where ′ denotes the differentiation with respect to ξ in this section. The metric associated with the bounce solution also shares this O(3) symmetry: JHEP11(2017)028 R and K are easily found from this metric, The action is then given by where we used integration by parts to cancel the boundary term. The action can be simplified further by using Einstein's equation G ξξ = κT ξξ to obtain The on-shell action can then be expressed as Now, the tunneling exponent B vac is obtained by subtracting the background from the action of the bounce (4.11) We study the specific case where the false vacuum energy density vanishes, U(φ FV ) = 0. B vac then simplifies to We now employ the thin-wall approximation, ǫ ≪ 1. In this limit, the bounce solution describes a bubble of true vacuum, centered on the origin ρ = 0, which is surrounded by false vacuum. The region of transition from true to false vacuum is an O(3)-spherical wall. Its large radiusρ when ǫ is small explains why its radial profile is dubbed "thin". In the thin-wall approximation scheme, the exponent B vac can be divided into three parts: Outside the bubble, the bounce coincides with the false vacuum background, hence B vac, out vanishes. 
For the other parts of the spacetime, it is useful to rewrite (4.9) as On the wall, ρ ≫ 1 and the damping term can be neglected in the equation of motion (4.4). We then find the first integral of motion (4.14) JHEP11(2017)028 This means dξ = dρ for the bounce in the wall region like in the false vacuum background, and only the potential term contributes to the tunneling exponent. As the wall is thin, the radius doesn't vary much in the wall region, ρ ≈ρ, and The on-shell action (4.10) inside the bounce is computed with (4.13) and using U T V = −ǫ, φ ′ = 0, As for the inner contribution coming from background, taking U = U FV = 0 in (4.10) yields Subtracting this background from the bounce action, we find Putting contributions (4.15) and (4.19) together, we find the following expression for the tunneling exponent: Taking κ → 0 gives the flat spacetime limit, as it must: where A and V are the flat Euclidean volume and area in three dimensions. The on-shell radiusρ =ρ 0 found by extremizing (B vac )| κ=0 is The tunneling exponent is then evaluated to The same quantities can be computed in curved spacetime using the full tunneling exponent in (4.20):ρ We denote by κ c a critical value of κ. At this value, the bounce radius is infinite, as is the tunneling exponent, shown in figure 6. Beyond κ c ,ρ would be negative, which is unphysical, so the bounce does not exist for such strong gravity. The vacuum thus becomes completely stable for κ > κ c . This phenomena, described as gravitational quenching of the vacuum decay, was also observed in 3+1 dimensions in [12]. It is easily seen thatρ falls back on ρ 0 as gravity is turned off. This is also true for B vac and B 0 , although it is somewhat less obvious. False vortex disintegration According to [7], we adopt the ansatz for the configurations representing a vortex of radius R, treating R as a variational parameter. For thick-wall vortices f (r) = r/R r < R 1 r > R , a(r) = (r/R) 2 r < R 1 r > R , (4.26) while for thin-wall vortices In the following, we focus on the thin-wall solution for which an analytical tunneling exponent can be obtained. The Euclidean action takes the form (4.28) The Ricci scalar can be reexpressed by using the trace of the Einstein equations. In three dimensions, JHEP11(2017)028 where we took the trace of (2.4). Inserting this in (4.28) we obtain the vortex's on-shell Euclidean action The final term, the Gibbons-Hawking-York term [14,15], will be written S GHY vort henceforth. It is related to the conical deficit angle as where µ is the vortex energy. This surface term is not important for tunneling considerations, as it is the same for the background vortex and the expanding vortex. Indeed, the conical defect is expressed by the energy which is conserved in the Euclidean evolution. Energy considerations are not so obvious once gravity is taken into account, but it makes sense at least for an asymptotic quantity like the surface term. Thus, as we move on and compute the tunneling exponent, the boundary term does not contribute Thus, only the bulk contribution of S vort contributes to B vort . For simplicity, we write S bulk vort = S vort in what follows. Given the nature of the thin-wall solution, it is better to separate the integral in two parts, that is, the core and the wall of the vortex. The exterior of the vortex does not contribute to the action. For the interior integral, we used the fact that the interior metric respects det(g µν ) = r 2 . 
As for the integral on the wall, we performed it in Gaussian normal coordinates defined by (3.6). The interior of the vortex is in true vacuum phase, U = −ǫ, so Furthermore, the bounce solution is determined by (3.23), as the radius expands between turning points R 0 and R 1 as shown in figure 5a. The integration over Euclidean time can be parameterized with the radius R(τ E ). whereṘ = dR/dτ E . The proper time is related to the coordinate time through JHEP11(2017)028 The action then reads (4.39) To evaluate F µν F µν , we first compute the value of the gauge field with a computation analogous to the definition of the magnetic flux calculated in eq. (3.25) The radial integral of the field-strength term then becomes where the time dependence originated from Q m which is a function of R(t). This integral is quite complicated and cannot be done analytically. Fortunately, we can simply neglect this contribution from the action. The general idea is that the action is of the form Since R 1 ∼ 1/ǫ is very large (and gets bigger as gravity is added), only the highest powers of R in the integrand will have significant contributions in the large R part of the integration. Terms with smaller powers of R will be negligible. We also take into account that ǫR 2 ∼ R for R ∼ R 1 . Based on this criterion, it was argued in [7] that the vortex action (in flat spacetime) is We should verify if this approximation is still valid in the presence of gravitational corrections. For simplicity, we restrict our demonstration to the static contribution to the curved spacetime integration of the electromagnetic field strength. We expand this expression in JHEP11(2017)028 powers of κ: where in the second step we have rewritten Q m in terms of Φ by solving (3.25) to lowest order in κ. All these terms are either inverse powers of R or are constant with respect to R and will contribute only weakly to the value of the action. To fully justify this approximation, we would also need to determine gravitational corrections for the kinetic analog of this quantity, and then compare them to the scalar field contribution. We will first compute the action completely ignoring the electromagnetic part. We will later verify our approximation with a numerical, non-perturbative, computation. With all this, B vort is reduced to We apply the same approximation, keeping only highest powers of R and using ǫR 2 ∼ R. The effective potential (3.24) related to the wall velocityṘ is simplified tȯ Note that for κ < κ c , the turning point R 1 is exactly the radius of the CdL bounce defined in (4.24). It is convenient to introduce a variable for the ratio R/R 1 , such thaṫ Finally, approximating Q m to 0 also yields J ≈ H ≈ L ≈ 1. In its final form, the tunneling exponent then reads where we used R 0 ≪ R 1 to set x ∈ [0, 1], and we used N 2 = 1+κǫR 2 from (3.16). Replacing R 1 by its value defined in (4.45), we get (4.49) -20 - JHEP11(2017)028 Proceeding with the integration and using B 0 defined in (4.22) to simplify the prefactor, we obtain we conclude from (4.25) that Thus, the vortex and vacuum decay rates share the same dependence on κ for κ < κ c . The comparison does not hold for κ > κ c . As we mentioned in section 4.1, regular vacuum decay is forbidden in this region as the radius of the bounce becomes negative. No such restriction applies to the vortex; as κ becomes greater than κ c , figure 5a and (4.45) show that the false vortex's escape radius R 1 remains positive. 
However, we do note a singular behaviour near κ c : The lower case gives an effect similar to the quenching of the CdL bounce as noted earlier: a change of sign happens as κ goes from κ − c → κ + c . As κ increases, this analytical result may not hold, since R 1 gets smaller and we may no longer neglect a portion of the integration on the interval [R 0 , R 1 ]. We must also emphasize that to even consider a tunneling exponent, a metastable vortex must exist in the first place. As was pointed out in section 2.2, there is a maximum value of κ beyond which static solutions of the equations of motion cannot be found. From figure 3, we see a vortex solution corresponding to κ = 0.04 respects κ > κ c since κ c = ǫ/σ 2 ≈ (0.005)/(1/2) 2 = 0.02. This means there is a portion in region κ > κ c where the CdL bounce becomes impossible, while the formation and decay of false vortices is still possible. In fact, the vortices in question are very short-lived since −B vort ≫ 1. The thought of a gravitationally stabilized false vacuum may have given us hope in the past; with the prospect of gravitationally enhanced vortex explosions, we have occasion for renewed anxiety, to put in Coleman's words [10]. We numerically verify the approximation scheme used to obtain (4.50). The numerical value of the action is obtained by inserting (3.16), (3.22), (3.23), (4.41) into the full bounce action (4.39). Parameters Q m and ∆ must also be replaced by minimizing (3.25) and solving (3.30), respectively. The result of the integration is shown in figure 7; there is excellent agreement between the analytical and numerical computations. Relevant contributions to the tunneling exponent are given by where where again we used (4.29) to reexpress the Ricci curvature. We already computed a combination of these terms to obtain (4.50). We repeat the procedure for each individual term: These terms are of the form B x /B 0 with x ∈ {surface, volume, curvature}. It is easier to compare (1 − κ/κ c ) 2 (B x /B 0 ) since the singular part is then gone. Figure 8 shows how these terms, as well as their sum, change as κ/κ c is varied. Not surprisingly, we see that the curvature part of the action is mostly responsible for the behavior and the change of It may seem surprising that the vortex tunneling exponent (4.50) is independent of the magnetic flux Φ. This is because this quantity only comes into play in the contributions we argued were negligibly small. Put another way, the vortex disintegration is essentially controlled by the wall surface tension and the vortex vacuum volume energy. In our approximation scheme, we really just computed the tunneling exponent of an O(2)symmetric bubble which starts from approximately null radius and grows to the escape radius in Euclidean time. Now, this O(2)-invariant tunneling event with lower action than the O(3)-invariant tunneling event does not contradict what we know from regular vacuum decay. In the absence of magnetic flux, the former is not a proper decay channel since it does not extremize the action. By breaking translational invariance, the vortex basically enables this mode. The magnetic flux is thus necessary for the existence of this event, but has a minor influence on its occurrence. Apart from the tunneling exponents which vary by a factor 2, the major difference between the O(2) and O(3) bounces is that only the former is still well-defined for κ > κ c . Tunneling rates We compare decay rates for κ < κ c where both vortex and CdL bounces are possible. 
For a dilute gas of instantons, the decay rate in the semi-classical approximation is given by (4.1). For the coefficient A, the change of variables gives rise to a Jacobian factor, which is evaluated in [10] and yields the decay rate, where we used B vort = B vac /2 and where N/V denotes the density of vortices. Of course, let us recall that we assumed from the outset that ǫ ≪ 1 and that the vortex has a large winding number n. Since B vac is very large, and more so as ǫ → 0 and/or κ → κ c , the phase transition is largely dominated by vortex disintegration. Calculating the determinant factors A ′ vac and A ′ vort is beyond the scope of this paper.
Summary and discussion
We have extended the work [7] based on a modified Abelian Higgs model in which vortices can be formed in a U(1)-breaking false vacuum. We found how gravitational effects can alter the formation and decay rate of vortices trapped in the false vacuum. As gravity is turned on, the spacetime becomes asymptotically conical, with a deficit angle clearly seen in numerical solutions. Matter configurations are also changed, albeit to a lesser degree. As for the decay rate, it decreases, both for conventional vacuum decay and for vortex disintegration, as κ is increased towards its critical value κ c . Neglecting the magnetic contribution in the vortex case, we find the vortex tunneling exponent is precisely half that for vacuum decay. As κ increases up to the critical point κ c , both of these events become more and more suppressed. Beyond this critical point, the CdL bounce is completely suppressed, while the vortex bounce becomes extremely favored. However, B vort remains a monotonically increasing function of κ. We compared tunneling amplitudes in the region κ < κ c . We found that, as gravity is turned on, vortices remain the dominant factor determining the false vacuum's stability. As in flat spacetime, the overall decay rate and the vortex's dominance increase as ǫ → 0. Thus, in some cases, vortices may very well render an otherwise-acceptable theory incompatible with our Universe's cosmological history. We should stress that, in the κ < κ c region, gravity stabilizes solutions of the model by decreasing the tunneling decay rate. It also enlarges the region of parameter space for which there are classically stable vortices, since the limiting value of the vacuum energy density, ǫ c , is increased as gravity is turned on. Of course, the model in question has to appear in physical situations in the first place. One of our motivations was to study the interplay of symmetry breaking and the false vacuum in a toy model, for which we have presented numerical solutions and analytical calculations. These features are of general relevance in many theories. One can think of scalar-potential false vacua appearing in string cosmology, or of the existence of supersymmetry-broken phases. More complete but similar models to ours for Grand Unified Theories were also considered at a qualitative level in [23]. Vortices have been studied in a variety of theories [24][25][26][27][28][29]. Especially in models with a sextic potential, we expect that various types of solutions can exist both with and without gravity. Decaying cosmic strings, which generalize the false vortex in 3 + 1 dimensions, were also studied in earlier work [32]. For the classical instability, the effect was first investigated in the context of Grand Unified Theories in [33].
In those studies, cosmic strings are analogous to vortex lines in type II superconductors or in superfluid liquid helium. Extending the study of their decay to include gravity could be interesting. Different topological solitons, such as metastable monopoles [17] and domain walls [30,31], were also studied in similar models in flat spacetime. Obtaining a generalization in curved spacetime for these defects would also be interesting. For example, early results in the monopole case show the same sign change of the tunneling exponent noted in (4.61). More investigations along these lines are under way.
2023-01-21T14:18:14.950Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "f4e66be6ce034c0b9c7f7995aa24e23ca9b0bd79", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2017)028.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "f4e66be6ce034c0b9c7f7995aa24e23ca9b0bd79", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
27189260
pes2o/s2orc
v3-fos-license
A Machine Learning and Cross-Validation Approach for the Discrimination of Vegetation Physiognomic Types Using Satellite Based Multispectral and Multitemporal Data
This paper presents the performance and evaluation of a number of machine learning classifiers for the discrimination between vegetation physiognomic classes using satellite-based time-series of surface reflectance data. The research dealt with the discrimination of six vegetation physiognomic classes: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich-feature data were prepared from time-series of the satellite data for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments comprised of a number of supervised classifiers with different model parameters was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated by using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78); however, the accuracy metrics did not vary much across experiments. The accuracy metrics were found to be very sensitive to the input features and to the size of the ground truth data. The results obtained in the research are expected to be useful for improving the vegetation physiognomic mapping in Japan.
Introduction
Vegetation has been classified according to a number of criteria, such as climate [1], physiognomy [2], dominant species [3], combination of climate pattern and physiognomy [4], and physiognomic-floristic hierarchy [5]. Physiognomy refers to the overall structure, physical appearance, and growth forms (herbs, shrubs, and trees) of vegetation. It is descriptive of the size, leaf traits (needle-shaped or broadleaved), and phenology (deciduous or evergreen) of the dominant species [6]. Vegetation has been threatened by the shifting of its zones and by floristic decomposition under the influence of climate change [7][8][9][10][11][12]. Therefore, discrimination of vegetation physiognomic characteristics is relevant to tracking changes in vegetation structure and composition, and thus to understanding vegetation responses to changes in environmental conditions. Different attempts have been made for the classification and mapping of vegetation by exploiting remote sensing data. Major sources of remote sensing data are the imagery obtained from satellites or aircraft. Both multispectral and hyperspectral satellite data have been used [13][14][15][16][17]. More recently, vegetation mapping using near-surface multispectral, hyperspectral, or lidar imaging from manned or unmanned aircraft is growing [18][19][20]. Radar imagery from satellites is another viable data source for vegetation mapping [21][22][23][24]. The discrimination and classification of vegetation using remotely sensed imagery involve multiple image processing and classification techniques. Though some researchers have reported satisfactory results using multiple spectral mixture analysis [25], digital image enhancements [26], temporal image fusion [27,28], and texture based classifications [29], supervised classification is probably the most widely used method for vegetation classification.
A number of supervised classifiers, such as the maximum likelihood method [30], decision trees [31], Support Vector Machines [32], fuzzy learning [33], Random Forests [34,35], and Neural Networks [36][37][38], have provided promising results in different regions. However, most of these studies have not dealt with the discrimination and validation of all kinds of vegetation physiognomic classes, such as Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs, in a study area. The performance of existing land cover maps is limited for the discrimination of vegetation physiognomic types [39]. The discrimination of vegetation physiognomic types from remotely sensed data, though immensely important for detecting changes in vegetation structure and composition, is challenging. The Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites provides a unique collection of time-series of the surface reflectance. This paper presents the performance and evaluation of a number of machine learning classifiers with respect to the time-series of the MODIS surface reflectance data for achieving an improved discrimination between the vegetation physiognomic types.
Preparation of Ground Truth Data
The existing geolocation data of the plant communities accessed from the Nature Conservation Bureau of the Ministry of Environment, Japan, were used for the preparation of ground truth data. These data were originally collected by field inspection of the plant communities according to the association of vegetation, that is, the occurrence of diagnostic/dominant species in the uppermost (and understory) stratum. We converted the plant community types into vegetation physiognomic types by studying the physiognomic characteristics of the plant communities. The geolocation points were visually inspected with Google Earth-based very-high-resolution time-lapse images available between 2012 and 2014, and the points representing a large homogenous area (at least a single MODIS pixel in size) were finally selected. In this way, 300 geolocation points for each physiognomic class were prepared. This research deals with six vegetation physiognomic classes: Evergreen Coniferous Forest (ECF), Evergreen Broadleaf Forest (EBF), Deciduous Coniferous Forest (DCF), Deciduous Broadleaf Forest (DBF), Shrubs, and Herbs. The classification scheme adopted in the research is described in Table 1.
2.2. Processing of Satellite Data
Terra/Aqua satellite based MODIS Surface Reflectance 8-Day Level 3 Global 500 m data sets (MOD09A1 and MYD09A1) available over Japan in the year 2013 were processed and used in the research. The MOD09A1 and MYD09A1 products provide an estimate of the surface spectral reflectance of bands 1-7 (Red, Near Infrared, Blue, Green, Mid Infrared, Shortwave Infrared 1, and Shortwave Infrared 2) as it would be measured at ground level in the absence of atmospheric scattering or absorption. Three spectral indices, the Normalized Difference Vegetation Index (NDVI; [40]), Enhanced Vegetation Index (EVI; [41]), and Land Surface Water Index (LSWI; [42]), were also calculated for each scene. The 8-day data sets containing surface reflectance in seven bands and three spectral indices were composited using monthly and percentile based techniques. Multiple percentiles (0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100) and monthly median composites (January to December) were calculated pixel by pixel for each dataset.
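The following sketch (not from the original study) illustrates how the three spectral indices and the percentile and monthly composites described above can be computed with NumPy. The index formulas are the standard formulations of NDVI, EVI, and LSWI cited in the text; the array shapes, random reflectance values, and the 8-day-to-month mapping are illustrative assumptions.

```python
import numpy as np

# Toy reflectance stacks with shape (time, rows, cols): 46 8-day scenes of a 100x100 tile.
rng = np.random.default_rng(0)
red, nir, blue, swir1 = (rng.uniform(0.01, 0.6, (46, 100, 100)) for _ in range(4))

ndvi = (nir - red) / (nir + red)                                   # NDVI
evi  = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)    # EVI (standard coefficients)
lswi = (nir - swir1) / (nir + swir1)                               # LSWI with shortwave infrared

# Percentile composites (0, 10, ..., 100) over the time axis, pixel by pixel.
percentiles = np.percentile(ndvi, np.arange(0, 101, 10), axis=0)   # shape (11, 100, 100)

# Monthly median composites: group the 8-day scenes by month, then take the median.
months = np.repeat(np.arange(1, 13), 4)[:46]                       # rough 8-day-to-month mapping
monthly = np.stack([np.median(ndvi[months == m], axis=0) for m in range(1, 13)])
print(percentiles.shape, monthly.shape)
```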
Altogether, 230 features (input layers) were prepared and deployed for the machine learning and cross-validation. The input features are described in Table 2, and the machine learning experiments are listed in Table 3. First of all, the given features were divided into 10 folds of samples. For each fold, the learning was carried out on the other nine folds, whereas the remaining fold was used for validation. Inside the cross-validation loop, the best-scoring features were selected from the training folds based on a univariate statistical test. We used the Analysis of Variance (ANOVA) F-value between the physiognomic classes and the training features as the univariate statistical test. The training features were grouped into 1-230 sets of best-scoring features; for instance, the first set included the single highest-scoring feature, whereas the last set included all 230 features. For each set of important features, the machine learning model established with the training folds was used to predict the physiognomic classes of the validation fold. The smallest number of best-scoring features that provided the highest kappa coefficient was noted as the optimum number of features. The standard deviations of the overall accuracy and kappa coefficient across the 10-fold cross-validation loop, in the case of the optimum number of features, were also calculated. Finally, the predictions were collected from the cross-validation loop, and the validation metrics (confusion matrix, overall accuracy, and kappa coefficient) were calculated against the given physiognomic labels. The overall accuracy, the sum of true positives and true negatives divided by the number of validation points, measures the correctness of the classification. The kappa coefficient measures inter-rater agreement by counting the proportion of instances in which the predictions agreed with the validation data (observed agreement), after adjusting for the proportion of agreements taking place by chance (expected agreement) [43]. The same processing was conducted for each experiment.
Cross-Validation Results
The performance of the 10 experiments conducted in the research is summarized in Table 4. (The experiments in Table 3, drawing on the 230 total features, include Gaussian Naive Bayes; Random Forests with 10, 50, and 100 trees; Support Vector Machines with a linear kernel; and Multilayer Perceptrons with 100-150 hidden units and 1-5 hidden layers.) For instance, the highest kappa coefficient (0.78) was obtained by using 160 of the 230 input features in the case of Experiment number 6. Therefore, time-series of the spectral features are important for discriminating the vegetation physiognomic classes. The selection of optimum features not only provides the best features required for discriminating the classes but also reduces the computation time and effort [44].
Discrimination between Physiognomic Classes
We computed the confusion matrices using the 10-fold cross-validation method. All experiments used the same size of ground truth data sets (300 for each physiognomic class). The ground truth data sets were well distributed all over the country. The six physiognomic classes evaluated in the research were Evergreen Coniferous Forest (ECF), Evergreen Broadleaf Forest (EBF), Deciduous Coniferous Forest (DCF), Deciduous Broadleaf Forest (DBF), Shrubs, and Herbs. The confusion matrices computed with the optimum number of features for each experiment are plotted in Figure 1.
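Before turning to the confusion matrices, a minimal sketch of the cross-validation pipeline described above is given here. It assumes the 230 composite features and the 300 ground truth points per class have already been assembled into arrays X and y; the scikit-learn components (ANOVA F-test feature selection refit inside each training fold, a Random Forests classifier, and kappa/accuracy metrics) mirror the described procedure but are not the authors' actual code, and the random data below are placeholders. For brevity, the sketch fixes k at the reported optimum of 160 features rather than sweeping k from 1 to 230.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Placeholder data: 6 classes x 300 points, 230 features per point.
rng = np.random.default_rng(0)
X = rng.random((1800, 230))
y = np.repeat(np.arange(6), 300)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=160)),   # ANOVA F-test feature ranking
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(pipe, X, y, cv=cv)   # selection is refit inside each training fold
print("overall accuracy:", accuracy_score(y, y_pred))
print("kappa coefficient:", cohen_kappa_score(y, y_pred))
print(confusion_matrix(y, y_pred))
```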
The confusion matrices showed that none of the experiments could discriminate between DBF and DCF, between EBF and ECF, and between Herbs and Shrubs efficiently. Among 10 experiments conducted in the research, experiments using Random Forests, Support Vector Machines, and Neural Networks provided better discrimination between the challenging classes. It is still difficult to discriminate between the coniferous and broadleaved forests though the phenological discrimination between coniferous and broadleaved forests: DCF versus ECF or DBF versus EBF could be enhanced by utilizing time-series of the MODIS data. Effect of Input Features. The variation of the kappa coefficient by increasing the number of input features in the case of ground truth data sets with size 300 for each experiment is shown in Figure 2. Kappa coefficients increased by increasing the number of important features to some extent in all experiments, after which it saturated. Kappa coefficients were not highest by merely using all input features. Therefore, a combination of important features was found to be crucial for achieving the highest accuracy rather than just using the large number of features. Similar results were obtained in the case of ground truth data sets with sizes 200 and 100. Large impact of feature selection on classification accuracy has also been reported in other land cover classification researches [44][45][46][47][48]. Since the countrywide discrimination of vegetation physiognomic types is challenging, selection of the important features should not be neglected. Effect of the Ground Truth Data Size. The size of available ground truth data is usually limited as the collection of field data requires lots of time, efforts, and costs. The classifier providing highest accuracy metrics by using less size of the ground truth data would be preferred. To analyze effect of the ground truth data size on the accuracy, the ground truth data sets of size 300 available in the research for each physiognomic class were randomly sampled into 12 sets: 25,50,75,100,125,150,175,200,225,250,275, and 300. For each set, 10 experiments were conducted and the accuracy metrics were calculated using the 10-fold cross-validation method. The maximum kappa coefficients obtained from each experiment with respect to different data size are plotted in Figure 3. As demonstrated in Figure 3, kappa coefficients are generally increased in all experiments by increasing the size of ground truth data. This analysis implies that large size of ground truth data is crucial to obtain higher accuracy. However, kappa coefficients did not increase in all experiments just by increasing the size of ground truth data. Therefore, optimized selection of the size of ground truth data with respect to the classifiers is important. The impact of the ground truth data size on the classification accuracy has also been discussed in other studies [49,50]. Uncertainties and Limitations. Results obtained in the research may be prone to a number of uncertainties arising from ground truth data, satellite data, and computation efforts. Discrimination of vegetation physiognomic types using moderate resolution satellite data such as from the MODIS is affected by mixed pixel effect. The ground truth data sets used in the research were prepared from the large homogenous areas. Therefore, the cross-validation accuracies obtained in the research solely based on homogenous physiognomic classes may be lower in the field application. 
Utilization of high-resolution satellite data in the future could minimize errors pertaining to the homogeneity of the ground truth data sets and mixed pixel effects. Only the highest-quality surface reflectance data from MODIS were used, by masking out pixels affected by clouds, cloud shadows, cirrus, and large solar zenith angles using the separate quality band descriptions available in the data sets. Seamless highest-quality data may not be available throughout the country due to atmospheric effects. The comparison of supervised classifiers as conducted in the research is not complete, as only commonly used classifiers and model parameters were assessed. A comprehensive comparison of supervised classifiers is certainly out of the scope of the research. Nonetheless, the evaluation results are consistent with other large studies. For example, Fernández-Delgado et al. 2014 [51] found Random Forests to be a better classifier than Support Vector Machines or Neural Networks after a rigorous comparison of 179 classifiers (machine learning algorithms) on 121 different data sets.
Conclusions
A set of machine learning experiments comprised of a number of supervised classifiers (k-Nearest Neighbors, Gaussian Naive Bayes, Random Forests, Support Vector Machines, and Multilayer Perceptron) with different model parameters was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The cross-validation method showed that the Random Forests provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, the accuracy metrics did not vary much across experiments. The optimum number of features, that is, the smallest set of input features yielding the highest kappa coefficient, was large (more than 92) in all experiments. The large number of optimum features required by the experiments implied that multitemporal satellite data are crucial for discriminating the vegetation physiognomic types. The kappa coefficients were not highest when merely using all input features; therefore, a combination of the important features was found to be crucial for achieving the highest accuracy, rather than simply using a large number of input features. Generally, the kappa coefficient increased in all experiments when the size of the ground truth data sets was increased. Still, discrimination of some classes, especially between the coniferous and broadleaved forests, was not adequate and requires further exploration in the future.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
2018-04-03T05:58:21.355Z
2017-06-11T00:00:00.000
{ "year": 2017, "sha1": "fb5ecd4caf75d67079f372e8c776bf7d1b39bb9e", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/scientifica/2017/9806479.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b351eeba0195c5e9694511798c98d669b7100ec3", "s2fieldsofstudy": [ "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
266809949
pes2o/s2orc
v3-fos-license
NON-SPECIALIZED STUDENTS’ BENEFITS AND CHALLENGES IN USING ELSA SPEAK APPLICATION FOR PRONUNCIATION LEARNING : The noteworthy proliferation of English language applications, exemplified by advancements such as the Elsa Speak Application, specifically designed for the pedagogical aspects of English instruction, underscores this trend. This study seeks to scrutinize the benefits and difficulties that non-specialized students in the High-quality program at Can Tho University (CTU) have when they learn English pronunciation through the Elsa Speak Application. To conduct this examination, semi-structured interviews were undertaken with 10 participants. The findings reveal that these participants encountered specific advantages and challenges in effectively navigating and utilizing the application. In response to these identified challenges, a set of recommendations has been proffered to enhance the effectiveness of the Elsa Speak Application, with the overarching goal of optimizing English language learning outcomes for students enrolled in the High-quality program at CTU. This research contributes valuable insights into the nuanced experiences and challenges associated with the integration of AI-driven language learning tools within the educational landscape of Industry 4.0. Introduction Mastering pronunciation is a fundamental skill essential for effective communication, and a lack of proper guidance can lead to reduced confidence and difficulties in understanding a foreign language.The effectiveness of English communication hinges significantly on pronunciation, playing a key role in expressing meaning, improving learning materials are generally considered complete and suitable for various proficiency levels, although some struggle with the British accent used in the application. Pedagogical Design The pedagogical design of Elsa Speak Application focuses on personalized teaching methods, analyzing individual competencies, and designing a tailored learning journey.Studies by Samad & Aminullah (2019), Silaen & Rangkuti (2022), and Darsih et al. (2021) consistently show that students find Elsa Speak Application's pedagogical approach effective.The use of phonetic symbols, transcriptions, and a game approach enhances engagement and progress in pronunciation. Assessment or Flexibility Design Elsa Speak Application provides automated evaluations of learners' progress, offering detailed feedback to help users identify and improve pronunciation aspects.While most students appreciate the assessment and flexibility features, there are concerns about internet usage and connectivity issues, as highlighted in studies by Silaen & Rangkuti (2022) and Darsih et al. (2021).Overall, the assessment tools contribute positively to learners' pronunciation improvement. Multimedia Design Elsa Speak Application employs multimedia materials, including animated diagrams, to create an engaging learning environment.Research by Samad & Aminullah (2019) and Silaen & Rangkuti (2022) consistently reveals positive perceptions regarding multimedia design.The application's complete library of words, IPA transcriptions, placement tests, and Automatic Speech Recognition enhance the overall learning experience. 
Automatic Speech Recognition Design The Automatic Speech Recognition Design of Elsa Speak Application utilizes advanced technology to analyze and evaluate learners' responses, helping them identify and improve pronunciation aspects.Studies by Samad & Aminullah (2019) and Kholis (2021) show that students have a positive perception of this feature.The immediate feedback and calibration options contribute to students' confidence and motivation in learning pronunciation. In brief, Elsa Speak Application proves to be an effective, engaging, and userfriendly tool for English language learners, as supported by various research studies.The positive perspectives from students highlight the application's strengths in Content Design, Pedagogical Design, Assessment or Flexibility Design, Multimedia Design, and Automatic Speech Recognition Design.While some challenges, such as the difficulty with British accents and internet usage concerns, exist, the overall impact on learners' pronunciation skills is positive and encouraging. Related Studies Elsa Speak Application is an online English learning tool that has been gaining attention and development in recent years.With the combination of Artificial Intelligence and advanced voice technology, Elsa Speak Application offers users a unique and effective learning experience.With Elsa Speak Application, you can improve your pronunciation skills through lessons. Hafizhah et al. ( 2023) conducted a study with the aim of finding out students' perceptions of Elsa Speak Application, which was a tool for teaching students how to pronounce words in English and providing phonetic transcriptions.It is intended to be seen from at least five different perspectives, including Content, Pedagogy, Assessment or Flexibility, Multimedia, and Automatic Speech Recognition Design.Some students said that it was sometimes annoying because of interruptions to signals or a poor internet connection.Furthermore, it lacks more conversational tasks and uses unclear terminology. Another study conducted by Darsih et al. (2021) aimed to investigate students' voices regarding the use of a mobile application named Elsa Speak Application classes.This study employed a quantitative survey method to obtain data about students' voices or perspectives towards the use of Elsa Speak Application classes.The total questions on the questionnaire are 25 items, while there are 5 questions for each aspect, which aims to find out the EFL learners' perspective on using Elsa Speak Application in their pronunciation ability.In addition to all the benefits obtained from the Elsa Speak Application, some students had trouble because there was some level that was still locked and should be charged.The Elsa Speak Application's audio response is still not strong enough to filter out the noise from the outside, so users have to repeat their voices. 
According to Becker (2019), Elsa Speak Application's basic interface is simple and provides the ability to navigate between topics and skills, levels, reports, and other features.However, when using Elsa Speak Application to learn pronunciation, users may encounter some difficulties.In fact, students must have a smartphone to download and install the application.Besides, they have to pay fees for the premium version of Elsa Speak Application.It has to be noted that Elsa Speak Application's premium version does not offer features that the free version does not; the only difference is the larger number of exercises the premium version includes.The application requires users to be connected to the internet to use it.Elsa Speak Application uses voice recognition technology to convert spoken language into text.However, the system may have difficulty with background noise or unclear pronunciation. A study by Meri-Yilan et al. (2019) aimed to look at students' perspectives on learning language through two technology-based speech recognition programmes, Immerse Me and Elsa Speak Application.This study involved five Turkish participants, aged between 19 and 22, studying in the preparatory class in the Department of Interpretation and Translation at Agri Ibrahim Cecen University.The results shown in the Elsa Speak Application just focused on their pronunciation and correct use of stress and indicate that the Elsa Speak Application only has a mobile program, no video, and no feedback about the assessment scores. In conclusion, although there may be difficulties while using Elsa Speak Application to learn pronunciation skills, the advantages far outweigh the challenges.Learners can overcome these challenges and become more fluent English speakers with patience and confidence in their abilities, as well as regular practice employing this method of learning.Elsa Speak Application provides a structured and method framework for developing pronunciation abilities. Material and Methods The researchers conducted individual interviews with ten students to gather further insights into their advantages and challenges of using Elsa Speak Application.The objective of the interview was to investigate how students perceive learning English pronunciation on Elsa Speak Application.Ten students, evenly distributed between males and females, took part in interviews.Most of them had studied English for more than six years, with only three having less experience.All employed Elsa Speak Application for the Pronunciation in Practice course and most of them used it for less than 6 months.Interviews were administered with a strict adherence to ethical principles to guarantee confidentiality and anonymity.Ethical considerations were a top priority throughout the research process.Ten students (Table 3.1) participated in online semi-interviews through Google Meet and recorded, lasting 12-15 minutes each.Conducted in Vietnamese due to the participants being English non-majored students, the interviews aimed to capture diverse ideas and opinions about Elsa Speak Application.While five questions were prepared in advance, the flexibility allowed additional exploration based on student responses.The questions were adapted from Martins et al.'s (2016) work on EFL/ESL pronunciation teaching software, ensuring quality through a pilot test with three non-majored English students. 
Subsequently, interview data from the ten students were transcribed and organized into five themes on Microsoft Excel Office 2019.Detailed analysis of each student's answers within each theme was performed, and the interview results were presented, providing insights into students' perceptions and experiences with Elsa Speak Application in learning English pronunciation. Students' perceptions towards the advantages of the Content Design Most of the participants agreed that the Content Design is good because it not only provides a variety of exercises to help the students distinguish between sounds or consonants, and vowels but also it provides a variety of exercises that support the students practice of intonation when speaking English. One student shared her perceptions towards the Content Design: "At first, I feel nervous when I study pronunciation because there is a variety of knowledge such as diphthongs, consonants, vowels, or ending sounds.However, Elsa Speak Application provides a variety of exercises with detailed instructions, which helps me deal with my difficulties in studying new knowledge."(Student 2) Another student agreed that the Content Design is good for studying pronunciation on the Elsa Speak Application: "To be honest, firstly, I used the Elsa Speak Application because it is an obligatory task in my pronunciation class.However, the more time I spend practicing on the Elsa Speak Application, the more motivation I have to learn and practice new vocabulary.The Elsa Speak Application has a large source of vocabulary in different topics."(Student 1) One male student also agreed that the Content Design helps him feel more confident when communicating with other students: "When I did not know about the Elsa Speak Application, I always found it difficult to check if I pronounced and stressed a word correctly or not.Also, it is hard for me to pronounce a new word until I know the Elsa Speak Application.It has detailed instructions on how to pronounce each sound, and it also provides many exercises about the syllabic constituent.Now, I feel more confident when I communicate with other students, which motivates me to study pronunciation on Elsa Speak Application more and more."(Student 9) In conclusion, many students agreed that they get many advantages of the Content Design of the ESA.The students feel more confident when speaking English because they practice a lot on the ESA with a variety of exercises for each sound.Moreover, the Content Design provides the students detailed instruction for each sound they pronounce incorrectly, so the students practice and improve pronunciation easily. Students' perceptions towards the advantages of the Pedagogical Design According to many students who were answering the interview, the Pedagogical Design plays a crucial role in supporting the students' practice of pronunciation, especially in distinguishing between ending sounds, vowels, or consonants. Two students agreed that the Elsa Speak Application uses phonetic symbols to present and practice vowel and consonant sounds, which are easy for students to understand. 
"After using Elsa Speak Application, I am inspired to study and practice more and more on it because it is useful in correct pronunciation and it provides activities on the production of the sounds, which helps me practice pronunciation more easily and effectively."(Student 5) "In my point of view, studying pronunciation is difficult because I do not have someone who is willing to help me correct pronunciation whenever I study.The Elsa Speak Application supports me a lot in practicing pronunciation.Moreover, it uses phonetic transcriptions to present and practice rhythm, stress, and intonation which helps me speak English more fluently."(Student 7) Another student also shared that she likes the Pedagogical Design of the Elsa Speak Application because the program has a variety of activities and exercises on the distinction of intonation patterns. "I cannot remember which part of the sentence I should raise my intonation or which part of the sentence I should lower my intonation. Besides, it is difficult for me to practice because there are many intonation patterns. The Elsa Speak Application supports and helps me deal with my difficulties. The application has activities that allow me to practice intonation until I feel comfortable speaking English with suitable intonation." (Student 3) It can be concluded that the Pedagogical Design of the Elsa Speak Application brings many benefits to the students while they practice pronunciation.To be more specific, the instructions for the application are clear and easy to understand.There are a variety of activities that work on the perceptions of rhythm, stress, and intonation, which motivates the students to practice pronunciation on the Elsa Speak Application more and more. Students' perceptions towards the advantages of the Assessment or Flexibility Many participants had positive attitudes towards the Assessment or Flexibility of the Elsa Speak Application.To be more specific, most of the participants are interested in the comments on the correct and incorrect answers, which help them recognize their errors and correct them. There are three students who have similarities in their opinion towards the advantages of the Assessment or Flexibility of Elsa Speak Application. 
"I feel interested in studying pronunciation on the Elsa Speak Application because it asks users to try again when they make an error.Also, it explains my errors, which helps me pay attention to these errors to practice more and more.The more I practice my errors, the fewer errors I make in the future."(Student 3) "The Elsa Speak Application not only gives feedback on my error but also gives a notion so that the user may have the opportunity to redo the activity until they correct their error.The feedback is useful because I know which sounds I pronounce correctly and which sounds I pronounce incorrectly."(Student 6) "I love studying pronunciation on Elsa Speak Application because it gives me feedback for each word or sentence that I pronounce.To be honest, it is hard for me to find a person who is willing to correct my errors, so Elsa Speak Application is one of the best applications to practice pronunciation.I can use the application whenever I have free time."(Student 10) All in all, students agreed that Assessment or Flexibility of Elsa Speak Application is good not only for giving the students feedback but also it allowing students to correct their errors until they pronounce them correctly.Moreover, the Elsa Speak Application explains each error for the students to help them understand clearly their errors, so they can improve their pronunciation. Students' perceptions towards the advantages of the Multimedia Design Multimedia Design is one of the most important parts of the application because it affects the user's usage process.Some students shared that the Multimedia Design of the Elsa Speak Application is useful and easy to use, especially for someone who is using this application for the first time. One student expressed her opinion towards the advantages of the Multimedia Design of Elsa Speak Application: "Even though this is the first time I have used Elsa Speak Application; I feel it is easy to use.It has detailed instructions for me to choose the suitable level with my pronunciation and the symbols on the ESA are easily comprehensible.Besides, the Elsa Speak Application uses automatic speech recognition to record my voice, and it also uses animation to demonstrate the production of sounds while I speak."(Student 5) Another student also had a positive perception towards the Multimedia Design of the Elsa Speak Application: "I used the Elsa Speak Application for studying pronunciation for more than 4 months, and I like the multimedia design not only based on the installation, the animation, the images, the sounds but also the IPA transcription of the words.Moreover, the multimedia design supports and motivates me to practice pronunciation."(Student 2) In conclusion, the participants showed a positive perspective towards Multimedia Design.The participants agreed that Multimedia Design of the Elsa Speak Application is useful both in the animation, images, sounds, and detailed instructions for the user to improve and correct their errors. Students' perceptions towards the advantages of the Automatic Speech Recognition Design Some students agreed that the Automatic Speech Recognition Design of the Elsa Speak Application is good because it supports the students in learning, correcting their errors and practicing pronunciation. 
One student expressed his opinion towards the advantages of the Automatic Speech Recognition of the Elsa Speak Application: "The Elsa Speak Application uses Automatic Speech Recognition to provide immediate feedback on my pronunciation.Therefore, I know which sounds that I pronounce correctly or incorrectly after I pronounce a word or a sentence.Moreover, the Elsa Speak Application provides me detailed feedback for each sound, such as consonant, vowel or diphthong."(Student 2) Another student also had some similarities in her opinion on the Automatic Speech Recognition design: "The Automatic Speech Recognition design is good. In my opinion, I feel more comfortable and motivated to learn pronunciation because the Elsa Speak Application provides detailed feedback that helps me improve my pronunciation. Moreover, it gives me instructions on how each sound is pronounced." (Student 8) It can be concluded that Automatic Speech Recognition Design is highly effective in supporting students while they are practicing pronunciation.Most students had a positive perspective on Automatic Speech Recognition not only because of feedback but also because the students can recognize their errors and improve their pronunciation. All in all, the students had a positive perspective on 5 aspects such as Content Design, Pedagogical Design, Flexibility or Assessment Design, Multimedia Design and Automatic Speech Recognition Design.Based on the interview data, it can be concluded that there are many benefits the students get when using the Elsa Speak Application to study pronunciation.To be more specific, the Elsa Speak Application provides a variety of exercises to give students more chances to practice their pronunciation.Moreover, the students feel comfortable and interested in using the Elsa Speak Application because it uses Automatic Speech Recognition Design which gives them immediate feedback, and Flexibility or Assessment Design is easy to use. Students' perceptions towards the disadvantages of the Content Design Besides some positive attitudes about the Content Design.There are two students who expressed their attitudes towards the Content Design of the Elsa Speak Application: "I agree that Elsa Speak Application supports me in practicing my pronunciation.However, sometimes many exercises were repeated, which made me feel uncomfortable.I had to do these exercises many times, so it was a waste of my time."(Student 1) "I cannot deny that Elsa Speak Application motivated me to practice pronunciation for 4 months, and I also feel more confident and comfortable when speaking English with my classmates and my teacher.However, it is annoying that many exercises are repeated more than twice, so I have to spend more time to complete them again."(Student 10) The students get many benefits when they use the Elsa Speak Application to improve their pronunciation.However, a minority of students shared that they had to face some difficulties while using the Elsa Speak Application.In other words, the students spent a lot of time completing the exercises that they were used to completing. Students' perceptions towards the disadvantages of the Pedagogical design Although the Elsa Speak Application provides a variety of exercises, sometimes there are some exercises that have no detailed instructions.Therefore, a minority of students feel uncomfortable while using and completing the exercises. 
One student shared her uncomfortable feelings when she faced some problems on the Elsa Speak Application: "I love practicing pronunciation on the Elsa Speak Application because it is convenient and useful with automatic speech recognition and immediate feedback.However, there were some exercises that had no detailed instructions, so I did not know which part of the assignment I needed to do." (Student 7) Another student expressed that he faced some difficulties on the Pedagogical Design: "I have used the Elsa Speak Application for more than 5 months and I had to face the problem that the exercises were repeated many times and there were no instructions for these exercises.Therefore, I did not meet the requirement of the exercise to complete it."(Student 10) Many participants had a positive experience when using the Elsa Speak Application; however, a minority of them felt uncomfortable and disappointed when the exercises were repeated without detailed instructions. Students' perceptions towards the disadvantages of the Multimedia Design Many students agreed that the Multimedia Design of the Elsa Speak Application is good, but there are two students who expressed their perceptions on the Multimedia Design of the Elsa Speak Application when practicing pronunciation for a long time: "I feel eye strain when I look at the screen for a long time because of the bright screens.It made me feel uncomfortable, and I lost my energy for practicing pronunciation."(Student 3) Another student also shared her opinion about the Multimedia Design: "I am interested in practicing my pronunciation on the Elsa Speak Application, so in my free time or weekends, I spend about 40 minutes to an hour practicing on the Elsa Speak Application.Nevertheless, the screen is too bright for me to focus on the lesson for a long time."(Student 5) The Multimedia Design brings many benefits that supports the students while practicing their pronunciation, but the screen of the Elsa Speak Application is too bright, which leads to the participants feeling uncomfortable. Recognition Design The Automatic Speech Recognition Design plays an important role in supporting students to improve their pronunciation, but there are two students who expressed their view towards the disadvantages of the Automatic Speech Recognition Design. One female student shared her opinion: "I cannot deny the benefits of using Elsa Speak Application in improving my pronunciation.However, I feel uncomfortable when I have to repeat the words many times because the Automatic Speech Recognition cannot recognize my voice."(Student 4) Another participant also expressed that she faced some problems with the Automatic Speech Recognition Design: "Elsa Speak Application is a good application for learning to speak and to pronounce, but sometimes it takes me a long time to repeat some words because Automatic Speech Recognition cannot recognize my voice even though I pronounce clearly and well enough".(Student 6) In conclusion, besides the advantages of using the Elsa Speak Application, students also shared the difficulties they encountered in the process of using it.Some common problems that students sometimes face is that the Automatic Speech Recognition cannot recognize their voice, exercises are repeated many times or the screen is too bright to focus for a long time. 
Students' suggestions for better application development After spending time training pronunciation through Elsa Speak Application, students self-assessed their foreign language ability slightly positively ranging from 8.0 to 7.0 in a total score of 10.This showed that they felt more confident in their ability with the assistance of Elsa Speak Application.However, they also would like to improve certain points of Elsa Speak Application as well as enhance the quality of the application. Students' suggestions for the Pedagogical Design improvement Some students suggested that Pedagogical Design should be more flexible in providing some more features of interaction as well as contributing to encouraging students to practice pronunciation. "I highly recommend that there should be an additional form of interaction with foreigners such as video calls, or some kind of online learnings, I think it is a great opportunity for learners to interact with foreigners and got used to obtaining native speakers' language skills.Besides, it is also necessary to improve the grading and speech recognition system of Elsa Speak Application."(Student 8) Another student remarked: "It is supposed to have a Vietnamese interpreter when learning new words, which is easier for learners to study conveniently and understand the meaning of words quickly."(Student 4) Students' suggestions for the Multimedia Design improvement There were some students' suggestions for better application development and user needs.The interview results showed that most of the students wish to improve the Multimedia Design of Elsa Speak Application.To be more specific, they suggested that the application needed to be fixed in terms of the Elsa Speak Application's interface and fixed the problem of not showing the exercise.There were three students sharing that: "In my opinion, the application interface should be improved, because the background color is too bright for us to see the content, causing eye strain for learners if they have focused on the screen for a long time, and the application needed to be improved the error of repeated exercises."(Student 6) It was also recorded that seven students were willing to recommend Elsa Speak Application to their friends and their acquaintances for enhancing pronunciation skills in communication. The results showed that Elsa Speak Application has a positive effect on learners.It not only enhances their pronunciation skills but also assists in holding conversations through its precise pronunciation feature.These findings emphasize the importance of incorporating Elsa Speak Application into English learning, as it aids in improving pronunciation and designing adaptable E-learning resources and pronunciation exercises that cater to a wide range of abilities. 
To answer the research questions, interviews were conducted, showing that students have a positive attitude towards learning pronunciation on the Elsa Speak Application.However, the results of the interviews showed that students still have difficulties using the Elsa Speak Application to learn English pronunciation.It is recorded that students encountered certain problems related to Internet connection, especially Elsa Speak Application consumed a lot of Internet data to download documents and required mobile devices to always be connected to the Internet to start the application.At the same time, they also found that the application still did not process the sound clearly; there was still noise during the recording process.It also showed that the Elsa Speak Application did not process the video image showing motion when speaking.The above results were statistically tested through descriptive tests, and the outcome showed that students lack motivation to study and maintain their pronunciation training.The findings of the interviews with students who expressed difficulty in learning English show the common problems that students often face when using the Elsa Speak Application to learn English pronunciation. The results are consistent with those revealed in previous studies by Darsih et al. (2021), who conducted a study that revealed a limitation of Elsa Speak Application's audio response feature.The findings indicated that the application was not sufficiently effective in filtering out external noise, requiring users to repeat their voices for accurate assessment.Similarly, Meri-Yilan et al. (2019) found that Elsa Speak Application primarily focused on pronunciation and stress usage but lacked video capabilities and feedback on assessment scores.Furthermore, Hafizhah et al. (2023) reported that some students faced challenges with interruptions during their use of Elsa Speak Application, making it occasionally difficult to utilize effectively. In conclusion, while Elsa Speak Application offers potential benefits for improving pronunciation skills through its technological features, there are also limitations to consider.Continuous improvements should be made based on user feedback to enhance user experience and maximize the application's potential as an effective tool for pronunciation training. Recommendations The findings of the study propose several recommendations to improve the effectiveness of incorporating Elsa Speak Application into English pronunciation learning.Firstly, taking advantage of Elsa Speak Application's capability to monitor individual student progress, educators can utilize this feature to evaluate and supervise students' advancements in pronunciation skills.This facilitates a thorough comprehension of each student's strengths and weaknesses, enabling the implementation of personalized and efficient teaching approaches. Furthermore, educators and the School of Foreign Languages can encourage activities associated with Elsa Speak Application, such as competitions or the organization of exchange sessions.These initiatives offer students opportunities to apply the skills acquired through Elsa Speak Application, fostering practical application and fortifying their pronunciation capabilities. 
To nurture a positive learning atmosphere, instructors or the School of Foreign Languages can actively promote the exchange of experiences and feedback among students. Elsa Speak Application can function as a platform for communication and sharing progress, contributing not only to the enhancement of pronunciation skills but also to the cultivation of confidence and a collaborative spirit in the English learning journey. Building on the revelations and discussions emanating from the research, it is evident that student perspectives on the application of Elsa Speak Application in pronunciation training are positive. A robust endorsement is provided for the incorporation of Elsa Speak Application into pronunciation education. Future research initiatives within this realm are urged to probe into the viewpoints of students with differing proficiency levels regarding the utilization of Elsa Speak Application. Specifically, an in-depth examination of the experiences and attitudes of learners at introductory, intermediate, and advanced stages could serve as a focal point for such investigations. This exploration aims to elucidate the suitability of Elsa Speak Application for learners at distinct junctures in their language development trajectory.
Conclusion
The research delves into students' viewpoints regarding the utilization of Elsa Speak Application in pronunciation classes, concentrating on Content, Pedagogical, Flexibility, Multimedia, and Automatic Speech Recognition dimensions. Employing a mixed-methods approach, the study utilizes a 43-item questionnaire and semi-structured interviews. Findings reveal favorable perceptions of Elsa Speak Application's Content Design, lauded for its varied exercises encompassing vowels, stress, and diphthongs. Pedagogical Design earns acclaim for precise instructions aiding vowel and consonant distinctions, rhythm, stress, and intonation exercises. Multimedia Design, encompassing installation, animation, images, and sounds, garners positive feedback, fostering student motivation in pronunciation practice. The Automatic Speech Recognition System is deemed advantageous for error correction with detailed sound guidance. Despite an overall positive outlook, challenges surfaced. Some students expressed discomfort at having to repeat words many times because of Automatic Speech Recognition issues, and a minority reported eye strain from a bright screen. To conclude, the majority of non-English major students endorse Elsa Speak Application for enhancing pronunciation, highlighting its positive effects on vocabulary, feedback efficacy, and user-friendly interface. The study recommends integrating Elsa Speak Application into pronunciation practice, with students expressing a willingness to recommend it to others.
2024-01-07T16:52:50.380Z
2023-12-28T00:00:00.000
{ "year": 2023, "sha1": "2c3f4aa32d4fb62ed935221e4ab4637cdc9aeb02", "oa_license": "CCBY", "oa_url": "https://oapub.org/edu/index.php/ejae/article/download/5149/7781", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "60fafcc34cf71341312308784b3886a2a36aa7ef", "s2fieldsofstudy": [ "Education", "Linguistics", "Computer Science" ], "extfieldsofstudy": [] }
60440625
pes2o/s2orc
v3-fos-license
Consistency models in distributed systems: A survey on definitions, disciplines, challenges and applications The replication mechanism resolves some challenges with big data such as data durability, data access, and fault tolerance. Yet, replication itself gives birth to another challenge known as the consistency in distributed systems. Scalability and availability are the challenging criteria on which the replication is based upon in distributed systems which themselves require the consistency. Consistency in distributed computing systems has been employed in three different applicable fields, such as system architecture, distributed database, and distributed systems. Consistency models based on their applicability could be sorted from strong to weak. Our goal is to propose a novel viewpoint to different consistency models utilized in the distributed systems. This research proposes two different categories of consistency models. Initially, consistency models are categorized into three groups of data-centric, client-centric and hybrid models. Each of which is then grouped into three subcategories of traditional, extended, and novel consistency models. Consequently, the concepts and procedures are expressed in mathematical terms, which are introduced in order to present our models' behavior without implementation. Moreover, we have surveyed different aspects of challenges with respect to the consistency i.e., availability, scalability, security, fault tolerance, latency, violation, and staleness, out of which the two latter i.e. violation and staleness, play the most pivotal roles in terms of consistency and trade-off balancing. Finally, the contribution extent of each of the consistency models and the growing need for them in distributed systems are investigated. Introduction Nowadays we are faced with an enormous amount of data which give birth to concepts like big data. Big data, is a sort of too big, massive and extensive data [1]. These floods of digital data are generated from variant sources such as sensors, digitizers, scanners, cell phones, the Internet, emails, and social networks. The diversity of these data covers the pictures, films, sounds and a combination of each of them [2]. The evolution of technology and human knowledge about data is formed by analyzing its development from the static traditional to the rapid form with respect to some of the characteristics of big data. The common concept between the researchers in that the big data infers to the set of data with characteristic of volume, variety, and velocity [3]. Amongst, some the researchers and specialists refer to some other properties like the value [2,4,5], veracity [2,5], variability [1,5], and complexity [1]. The challenges of these data is the problem that the data and database technicians are come fronted for many years. What is learned through these years, is the way to face the challenges of the big data in small scales. Challenges like the data durability, data access, data availability, and fault tolerance [1] which are generally solvable with the replication mechanism. Replication is a crucial challenge in the big data. The guarantee of the consistency is the challenge that the replication mechanism brings about. The replication and caching process are used as techniques to make the scalability achievable. Data replication [6] is meant to generate a number of indistinguishable copies of the original i.e., the replicas. 
One of the major problems is keeping the replicas consistent, since interaction between the replicas is inevitable. In other words, replication in distributed systems demands a consistency guarantee, and this is one of the major problems in large-scale storage systems [7]. From the researchers' viewpoint, consistency concerns multiple processes accessing common data: each process knows how the other processes access the resource (whether they read or write) and knows what to expect. The main reason for replication is concurrent access to the replicas [6]. Consistency is the part of system behavior that makes concurrent execution or system failure predictable [7]. The trade-off between performance and consistency has led researchers to look for consistency policies such as the consistency level and the technology policy. The consistency policy is based on the principle of what should be written or read. The placement policy determines, through demand caching, prefetching, push caching, or full replication, which data each node stores and from which local copy. The technology policy defines whether the client-server structure is hierarchical or ad hoc, and along which directories the communicated data is streamed [8]. A variety of consistency models have been introduced for transaction-less distributed systems [9]; however, the behavior of a consistency model depends on the application in which it is employed. Different criteria and services have been considered for data sharing in distributed systems, of which the five most important are [10]: • Concurrency: the degree to which conflicting read/write accesses are tolerated. • Consistency: preservation of update dependencies and tolerance of stale read data. • Availability: how a replica is accessed in the absence of the others. • Visibility: when a global view should be provided once local changes have been applied to the replicated data. • Isolation: when remote updates must be observed locally. These criteria reflect different aspects of an application's consistency requirements; different combinations of them give rise to a variety of consistency semantics for reading or updating shared data [10]. In general, data consistency can be grouped into two categories: data-centric and client-centric. Data-centric consistency establishes a contract between the data store and the processes: if the processes agree to obey certain rules, the store commits to working correctly. Client-centric consistency ensures that a single client does not encounter inconsistencies while accessing a resource; however, if multiple clients access the same resource simultaneously, consistency is not guaranteed [6]. Consistency models in distributed systems are implemented with different methods on different machines, and accordingly they follow different approaches, i.e., data-centric, client-centric, or a combination of the two [11,12]. For example, a data consistency model can be applied to distributed shared memory [13].
[14,15,16,17,18,19,20], or it may be applied to a shared data center or a distributed database [21]; the consistency model may govern the read and write operations of clients or the data stored in cache memory [22]; consistency can even be defined for a session and its relation to other sessions [23]; and message sending and receiving [24] also require consistency to be enforced. In spite of these diverse applications of consistency in distributed systems, researchers still seek to discriminate between the behaviors of the different types of consistency, and little work has been devoted to reviewing and organizing them. With respect to the concepts of consistency, previous surveys have analyzed the data-centric and client-centric models from two perspectives: the models are examined according to when they were introduced, ordered from strong to weak consistency, and according to their capabilities, and the definitions are finally mapped onto specific systems [7]. By analyzing the data-centric and client-centric models, or combinations of both, according to their usage, they have also been divided into three categories: architecture-based, distributed database, and distributed systems [25]. The consistency models introduced in that line of work are the traditional models that emerged together with distributed systems. Contribution In this section, in order to show the models' behavior without the need for an implementation, we present the mechanism of these models in mathematical terms. We then introduce novel consistency models that respond to new requirements in distributed computing systems. As mentioned before, consistency models are divided into three categories according to their application. Our main focus in this survey is to define the different consistency models used in distributed systems. The traditional consistency models are summarized in Fig.2. In this new categorization, given the focus of this survey on distributed systems, our goal is to introduce the various types of consistency models, namely data-centric, client-centric, hybrid, novel, and extended, together with their implementation and evaluation environments between 1979 and 2018. This categorization also shows how crucial consistency is in distributed systems. As Fig.2 illustrates, the traditional consistency models were proposed between 1979 and the early 2000s. With the turn of the century and the expansion and improvement of distributed systems, consistency models became mandatory. From 2000 to 2006, researchers proposed data-centric models such as linearizability [26], extended consistencies such as timed causal consistency [27], and a combined session and causal consistency [28] as a hybrid model. Researchers have also proposed novel consistency models for specific needs; one of the newly emerged models is fork consistency [29]. Over time, and as requirements changed, distributed systems leaned more toward hybrid consistency models, while others proposed novel models such as view consistency (VC) [30] or RedBlue consistency [11].
Among these, consistencies such as fork consistency [31] and causal+ consistency (CC+) [32] were also introduced as extended consistency models. At this stage, however, the need of large distributed systems for eventual consistency, in order to reach adequate availability, became apparent. From 2012 through 2018 researchers proposed a variety of further consistency models: data-centric models such as causal consistency [24,33,34], linearizability [7,35], weak consistency, and the eventual consistency model [36], hybrid consistencies [37], and novel consistencies such as VFC3 [38]. What Fig.2 shows is the growing demand of distributed systems for eventual consistency from 2006 to the present. As can also be seen, distributed storage systems have been proposed to facilitate development and to allow more precise evaluation of the consistency models, and these storage systems have grown considerably in recent years. Between 2006 and 2012, environments such as Sinfonia [39], COPS [32], G-Store [40], Pahoehoe [41], Gemini [11], and ALPS [32] were proposed, and from 2012 onwards, environments such as Eiger [42], Orbe [43], FaRM [44], Pileus [45], RAMCloud [7], and H-Store [7] are the novel storage systems proposed by researchers. In this survey, the different types of consistency, from traditional to novel models, are analyzed. We describe the behavior of the traditional consistency models in mathematical terms; the behavior of the different models, according to their application in distributed systems environments, is covered in the section on traditional consistency models, after which the novel consistency models proposed in the distributed systems discipline are defined in the section on novel consistency models. What is expected from this survey is a review of the challenges and issues of consistency in distributed systems. Traditional consistency model As discussed before, the consistency model is one of the most important issues in the design of large-scale storage systems [7]. In general, data consistency can be divided into two categories, data-centric and client-centric; today, however, researchers seek ways to present hybrids of the consistency models according to the requirements of the applications. One of the main goals of this survey is to introduce consistency models which, in contrast to the hybrid models, are not only capable of ensuring consistency in distributed storage systems but are also able to cover those application needs that are usually answered by the hybrid models, while having fewer weak points than the other models. The traditional consistency models form a specific class: the behavior of the processes with respect to the data items in the shared memory is conveyed by each consistency model, depending on its type, in mathematical terms. Before we analyze and introduce the different types of consistency, let us give a brief but complete introduction to the notation used in this paper: • R(x) – the read operation on replica x • A – the value of the data written to or read from the replica • B – the value of the data written to or read from the replica • O – any read or write operation • S – the server that contains replica x • C – the client that executes the operation on the replica • OW – the primary set of operations on the shared data • OS – the set of operations carried out by the server on the replica • OC – the set of operations carried out by the client on the replica • P – the process that executes the read or write operations on the replica • T_P – the absolute global clock, i.e., the physical clock on the servers • T_S – the logical clock [15], based on the order of occurrence at the process, server, or client • ε – the elapsed time immediately after the preceding operation (ε < δ < γ) • δ – the elapsed time after the preceding operation (ε < δ < γ) • γ – a long elapsed time after the preceding operation (ε < δ < γ) • LS – the synchronization variable; the process that possesses it can perform read or write operations on the shared memory • L – the lock; the process that possesses it can perform read or write operations on replica x • Acquire(x, l) – acquisition of the lock L for executing an operation on replica x • Release(x, l) – release of the lock L held on replica x after the operation on that replica terminates • L_x – the lock for executing an operation on replica x • L_y – the lock for executing an operation on replica y • e1 →_{S_i} e2 – operation e1 is performed on server S_i before operation e2 is performed on the same server • e1 →_{P_i} e2 – process P_i executes operation e1 before it executes operation e2 • a ⇒ b – if the set of operations a occurs, then the set of operations b occurs • a → b – if any operation a occurs, then operation b must occur. Strict consistency model Strict consistency is the strongest consistency model and requires permanent global synchronization, achieved through an absolute global time. Establishing such synchronization among servers by means of physical time is practically impossible [6]. In other words, the replicas must be synchronized globally and constantly. The model is therefore very costly, while the system does not actually need to be globally synchronized at all times; in distributed systems this straightforwardness imposes a great deal of expense. With respect to the behavior of this consistency model on the replicas, the behavior of the system can be expressed using Rule 1. According to Rule 1, if a write of the value a, performed at time T_p on server S_i, is completed by a process, then a read at time T_p + ε, immediately after that write, must return a, whether it is issued by the same process or by another. If a new value b is written by a process on server S_i at time δ after the write of a, and a read performed at time γ after the write of b still returns a, the store exhibits a violation of strict consistency. To become more familiar with how the processes of this model interact with the data items in the shared memory, the strict consistency model is shown in Fig.??. When process P_1 writes the value b to the servers at time δ (some time after every read or write of the value a in the shared memory), but at time t_8, by accessing server S_1, or at time t_9, by accessing either of the servers S_1 and S_2, the read operation of process P_3 still returns the stale value a in place of b, the deficiency of the shared memory is revealed in terms of strict consistency.
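To make the behavior described by Rule 1 concrete, the following is a minimal Python sketch (not taken from the paper) of a strict-consistency check over a history of operations stamped with the absolute global time T_p: every read must return the value of the latest write that precedes it on that clock. The Op record and the example history are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Op:
    kind: str      # "W" (write) or "R" (read)
    item: str      # data item / replica name, e.g. "x"
    value: object  # value written, or value returned by the read
    t: float       # absolute global (physical) time T_p

def is_strictly_consistent(history: List[Op]) -> bool:
    """A read of item x at time t must return the value of the most recent
    write to x with write time < t (None if nothing was written yet)."""
    for op in history:
        if op.kind != "R":
            continue
        latest: Optional[Op] = None
        for w in history:
            if w.kind == "W" and w.item == op.item and w.t < op.t:
                if latest is None or w.t > latest.t:
                    latest = w
        expected = latest.value if latest else None
        if op.value != expected:
            return False
    return True

# W(x)a at t=1, W(x)b at t=2, then a read at t=3 that still returns the
# stale value a -- exactly the violation described by Rule 1.
history = [Op("W", "x", "a", 1.0), Op("W", "x", "b", 2.0), Op("R", "x", "a", 3.0)]
print(is_strictly_consistent(history))  # False: stale read of a after b was written
```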
Sequential consistency model Another consistency model is sequential consistency, first defined by Lamport in 1979 [15] in the context of shared memory for multiprocessor systems. It is a weaker, simpler variant of the strict consistency model: it requires neither the absolute global time nor the permanent synchronization of the replicas that strict consistency demands. In this model the write operations are seen by all processes in the same order, whereas the read operations performed by other processes are not mutually observable [6]. Concurrent writes without a causal relation, issued by different processes, may be observed in different orders [46]; still, the model constrains which values are read by the clients [27,47]. A malicious server is a server in the distributed environment that does not preserve the order of the operations. In systems where a malicious server may exist, execution of an operation by a healthy service provider is acceptable [37,29,12,48,49,31,50,51] as long as the relative order is preserved for each client, but the correctness and complete execution of the whole set of client operations cannot be guaranteed [52]. With respect to the CAP theorem [53], the restrictions of distributed systems (consistency, availability, partition tolerance) are described in such a way that sequential consistency can still be provided by the system [8]. Sequential consistency has shown better behavior than conflict serializability on histories of operations [54]. Finally, verifying sequential consistency [6], like verifying atomicity, is an NP-complete problem [55]. Based on the behavior of the operations on the replicas under sequential consistency, eq. 1 can be stated. According to eq. 1, the time parameter does not affect the behavior of a process, but the order of the operations on the server has a considerable effect on the shared memory. We express the behavior of the processes on the data items in the shared memory under sequential consistency in Rule 2. Suppose, according to Rule 2, that the values a and then b are written on server S_i, and that reads accessing S_i return a and then b, respectively. If all processes do not read a and then b in that order, they do not share the same view of the operations and of the values stored in the shared database; the observation orders differ and the inconsistency is revealed. The interaction of the processes with the replicas in a shared database under the sequential consistency model is shown in Fig.??. A process such as P_5 causes an inconsistency, since it has a different view when reading from the shared memory: in contrast to the other processes (e.g., P_3 and P_4), it first reads the stale value a and only then reads the value b. The execution of this read sequence by process P_5 reveals an inconsistency in the shared database under sequential consistency.
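Since, as noted above, verifying sequential consistency is NP-complete, a practical checker can only brute-force small histories. The sketch below is illustrative and not the paper's algorithm: it searches for a single interleaving that respects every process's program order and in which each read returns the most recently written value.

```python
from itertools import permutations

# An operation is a tuple: (process, kind, item, value), e.g. ("P1", "W", "x", "a")
def legal(serial):
    """Reads must return the latest value written before them in this serial order."""
    state = {}
    for (_, kind, item, value) in serial:
        if kind == "W":
            state[item] = value
        elif state.get(item) != value:
            return False
    return True

def respects_program_order(serial, per_process):
    """Each process's own operations must appear in their original order."""
    for p, ops in per_process.items():
        if [op for op in serial if op[0] == p] != ops:
            return False
    return True

def is_sequentially_consistent(per_process):
    """per_process maps each process id to its ordered list of operations."""
    all_ops = [op for ops in per_process.values() for op in ops]
    return any(respects_program_order(s, per_process) and legal(s)
               for s in permutations(all_ops))

# W(x)a by P1 and W(x)b by P2; P3 reads a then b, but P4 reads b then a:
# no single interleaving can satisfy both, so the history is not sequentially consistent.
h = {"P1": [("P1", "W", "x", "a")],
     "P2": [("P2", "W", "x", "b")],
     "P3": [("P3", "R", "x", "a"), ("P3", "R", "x", "b")],
     "P4": [("P4", "R", "x", "b"), ("P4", "R", "x", "a")]}
print(is_sequentially_consistent(h))  # False
```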
Linearizability model Another data-centric consistency model is linearizability, also known as the strong consistency model. It is considerably stronger than sequential consistency and was introduced by Maurice Herlihy and Jeannette Wing in 1990 [56]. This model also needs a globally synchronized clock; since such a clock is not trustworthy, it is replaced in this model by a logical clock [15], which acts as a global logical time. The behavior of this model resembles sequential consistency, but the order of the operations is agreed upon by all processes according to the time at which the operations occur. By adding this restriction on event time to the sequential consistency model we obtain linearizability, which has a significant effect on the write operations on the shared memory: all processes hold a single, solid view of the operations, as if they were executed serially and sequentially by one server [57,11,58,59]. To express sequential operation and avoid conflicting operations, the linearizability model is used instead of sequential consistency [60]. Consistent services require linearizability in order to ensure the complete execution of operations on untrustworthy Replicated State Machine (RSM) systems [61]; in such untrustworthy settings, this model relies on the wait-freedom approach [62]. Linearizability is a criterion for analyzing the correctness of operations and the degree of trust that clients can place in distributed storage systems when they face an untrustworthy server [58,62]. To guarantee linearizability, secure network protocols, automatic retry of failed operations, and two-phase commit protocols are used [7]. Even with fair-loss links, linearizability in a system with network partitions remains an acceptable algorithm, since it may wait for the partition to heal, which may never happen; the system model with unbounded partitions is therefore acceptable [35], although when a partition occurs the system offers only limited availability. This consistency model requires each read or write operation to take effect at some point in the interval between the request for the operation and its response [36]. Finally, the verification of linearizability is an NP-complete problem [55]. What we have stated about the behavior of the processes in the linearizability model is captured by Rule 3, through which the overall behavior and manner of the processes is considered thoroughly (Figure 3 depicts the behavior of the processes on the data items under the linearizability model, including a failure case). According to Rule 3, the order and priority of execution is the same for all processes and is based on the logical time of the events. When the write of b is executed at time ε after the write of a on the shared memory in server S_i, it is expected that every process that subsequently accesses the data store reads the value a at time δ and then the value b at time γ from the server. If, during the update of the data store, the new value b is written at time ε and a read executed at time δ already returns that new value b, then a contradiction with linearizability emerges. To gain a deeper understanding of this consistency model, its operation is illustrated in Fig.3. Based on what is depicted in Fig.3, if the process P_s, contrary to the priority of the writes performed by processes P_1 and P_2, first returns the value b when executing a read at time δ, then the data store has violated the linearizability model.
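As an illustration of the real-time restriction that distinguishes linearizability from sequential consistency, the sketch below (illustrative only; the paper notes that the full verification problem is NP-complete) brute-forces a legal total order that also respects the invocation/response intervals of the operations.

```python
from itertools import permutations

# Each operation: (kind, item, value, invoke_time, response_time)
def real_time_respected(serial):
    """If op a completes before op b is invoked, a must precede b in the order."""
    for i, a in enumerate(serial):
        for b in serial[i + 1:]:
            if b[4] < a[3]:          # b responded before a was even invoked
                return False
    return True

def legal(serial):
    state = {}
    for (kind, item, value, _, _) in serial:
        if kind == "W":
            state[item] = value
        elif state.get(item) != value:
            return False
    return True

def is_linearizable(ops):
    return any(real_time_respected(s) and legal(s) for s in permutations(ops))

# W(x)a finishes before W(x)b starts, yet a later read still returns a:
# explainable under sequential consistency, but not linearizable,
# because the real-time order of the two writes is violated.
ops = [("W", "x", "a", 0.0, 1.0),
       ("W", "x", "b", 2.0, 3.0),
       ("R", "x", "a", 4.0, 5.0)]
print(is_linearizable(ops))  # False
```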
Causal consistency model This model was first proposed for distributed shared systems [20]; it is weaker than sequential consistency and was introduced in 1987 on the basis of Lamport's happened-before relation for distributed systems [13]. Causal consistency discriminates between events that have a cause-and-effect relationship and those that do not. If a read operation is the result of several write operations on the replica, the read does not execute until those writes have terminated [57]. The model guarantees that a client never reads two related writes in the wrong order and that, once the client has read the latest value, it does not read a stale one [50]. If one process updates a replica and a subsequent process returns the updated value, this relationship between the two processes is exactly what causal consistency provides [63]. The model can be defined as a combination of causal arbitration and causal visibility [64], and it satisfies availability even under network partitions [64,53]. To implement this consistency model we need logical time to record each event [15]. Since global time is not considered, causal consistency alone cannot provide convergence; to solve this problem, a time factor has been added to the model, yielding timed causal consistency [65]. In all of these models the Complete Replication and Propagation protocol is used [66]; in particular, full replication simplifies causal consistency over data items, operations, and replicas. When the conflicting operations are additionally made to converge, causal consistency becomes Causal+ Consistency (CC+). In geo-replicated systems and online applications such as social networks, where operations must be executed completely and with low latency, CC+ assures clients that they see cause-and-effect relations correctly, without conflicts, and with steady progress in the storage system [32]. The relations between the clients' operations, the data or keys stored in the shared memory, the sessions the clients use, and the log files kept in the replicas are all causal. The operations of this model are expressed in eq. 2, which shows the model's focus on conflicting operations; the goal is to express the type of relation between those operations. The model is determined by the type of cause-and-effect relation and the priority of that relation. If the operations are dependent on each other according to eq. 3, then all processes, taking into account the event times and the dependencies of the operations [24,67], execute the operations with a unified perception: according to eq. 3, every process first observes the first operation and then the second. If an operation o occurs between o1 and o2 and plays the role of an intermediary between them, then eq. 4 holds [28], in which o is the intervening operation that establishes the cause-and-effect relation between o1 and o2. The concurrent case can then be expressed as in eq. 5.
However, if the operations are independent of each other, each process may observe them in its own order. If two processes simultaneously and autonomously write to different data items, or read a shared data item, there is no cause-and-effect relation between them and the operations are called concurrent [68]. If each process P_i may execute operation o1 and then o2, or vice versa, this behavior implies that there is no causal relation between the operations, and they may therefore run concurrently. Thus, the causal consistency model can be expressed by Rule 4, which captures the behavior of this model on the shared memory: if the execution of operation o1 is the cause of the execution of operation o2 on the replica in server S_i, then every other process must first observe o1 and only then o2 on its own server [28]. An example of the interaction of processes with the data items stored in the shared memory under causal consistency is shown in Fig.4 (the behavior of the processes on the data items under the causal consistency model). The figure illustrates two different situations with respect to cause-and-effect relations. First, the values b and c are written simultaneously by two different processes at time t_6, with no causal relation between them; these two operations may therefore be perceived in different orders by the other processes. The crucial case, however, is the relation between the read of the value a by process P_2 and the subsequent write of the value b on server S_i. All processes must read b from a server on which a has already been written and b has been written after it; otherwise, the data store violates causal consistency.
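One common way to realize the ordering required by Rule 4 is to ship each write together with a vector clock and delay its application until all causally preceding writes have been applied. The sketch below is a minimal illustration under that assumption; the class and message layout are hypothetical, not the paper's protocol.

```python
class CausalReplica:
    """Applies a remote write only after all writes it causally depends on
    (captured by its vector clock) have already been applied locally."""

    def __init__(self, site_id, n_sites):
        self.site = site_id
        self.vc = [0] * n_sites          # writes applied so far, per origin site
        self.store = {}                  # item -> value
        self.pending = []                # writes whose dependencies are not yet met

    def local_write(self, item, value):
        self.vc[self.site] += 1
        self.store[item] = value
        return (item, value, self.site, list(self.vc))   # message sent to other replicas

    def _deliverable(self, origin, vc):
        if vc[origin] != self.vc[origin] + 1:            # next write from that site?
            return False
        return all(vc[k] <= self.vc[k] for k in range(len(vc)) if k != origin)

    def remote_write(self, msg):
        self.pending.append(msg)
        progress = True
        while progress:                                  # drain everything now deliverable
            progress = False
            for m in list(self.pending):
                item, value, origin, vc = m
                if self._deliverable(origin, vc):
                    self.store[item] = value
                    self.vc[origin] += 1
                    self.pending.remove(m)
                    progress = True

# Site 0 writes a; site 1 sees a and then writes b, so b causally depends on a.
# Site 2 receives b first but buffers it until a arrives: no client of site 2
# can ever observe b without a, which is exactly the guarantee of Rule 4.
s0, s1, s2 = CausalReplica(0, 3), CausalReplica(1, 3), CausalReplica(2, 3)
wa = s0.local_write("x", "a")
s1.remote_write(wa)
wb = s1.local_write("x", "b")
s2.remote_write(wb)
print(s2.store)          # {}            -- b is held back, a not yet applied
s2.remote_write(wa)
print(s2.store)          # {'x': 'b'}    -- a, then b, applied in causal order
```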
The first-in, first-out consistency model The first-in, first-out (FIFO) model, also known as pipelined random access memory (PRAM) consistency, is another data-centric consistency model. It was proposed by Lipton and Sandberg in 1988 [14]. In this model, when several processes write to the data store, the write operations of a single process must be seen by the other processes in the order in which they were issued, while the operations of different processes may be seen in different orders. One of the applications of this model is to grant privileges to pipelined write operations [14]. Healthy servers send messages to the clients, and this message delivery follows the order prescribed by the FIFO consistency model; each client communicating with the server can receive the messages over an asynchronous secure channel according to FIFO consistency [69,37,62]. The client therefore receives the messages by relying on the server; a malicious server, however, may reorder the messages, which delays their delivery and can result in their deletion, and this delay and deletion testify to the existence of a malicious server [33]. Following eq. 2, Rule 5 describes the behavior of the FIFO consistency model: if the operations o1 and o2 are executed by one process, with o1 performed before o2, then on the side of server S_i the operation o1 is observed before o2. The interaction of the processes with the data items in the shared memory under the FIFO consistency model is depicted in Fig.5. An important property of this model is its independence of time when prioritizing the operations. As illustrated in Fig.5, process P_2 writes the value b first and then the value c on server S_i, and the goal is for all processes to observe this sequence. If, as for process P_4, the observed sequence differs from that of the other processes, the data store violates FIFO consistency.
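The per-writer ordering of Rule 5 can be enforced with nothing more than a sequence number per originating process, as in the illustrative sketch below (the class and message format are assumptions made for this example, not part of the original text).

```python
from collections import defaultdict

class PramReplica:
    """FIFO (PRAM) consistency: writes from one process are applied in the
    order they were issued; writes from different processes may interleave."""

    def __init__(self):
        self.applied = defaultdict(int)   # origin process -> next expected sequence number
        self.buffer = defaultdict(dict)   # origin -> {seq: (item, value)}
        self.store = {}

    def receive(self, origin, seq, item, value):
        self.buffer[origin][seq] = (item, value)
        # apply as many in-order writes from this origin as possible
        while self.applied[origin] in self.buffer[origin]:
            it, val = self.buffer[origin].pop(self.applied[origin])
            self.store[it] = val
            self.applied[origin] += 1

# P2 issues W(x)b (seq 0) then W(x)c (seq 1); even if c arrives first,
# the replica holds it until b has been applied, so no process observes c before b.
r = PramReplica()
r.receive("P2", 1, "x", "c")
print(r.store)            # {} -- c buffered, b not yet applied
r.receive("P2", 0, "x", "b")
print(r.store)            # {'x': 'c'} -- b applied first, then c, in issue order
```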
Weak consistency This consistency model is another data-centric variant, proposed in 1986 by Michel Dubois [17]; its name reflects the weakness of its guarantees. The model avoids global synchronization by defining a synchronization variable that plays the role of a token: the process that possesses the token may perform read or write operations on the shared resource, and access to the resource through the synchronization variable itself follows sequential consistency. A process that does not possess the token is not allowed to read or write the shared resource [14,17]. The model is established under the following conditions [17,1,46]: • Accesses to the synchronization variable by the processes (i.e., nodes or processors) need not follow a common order. • Other accesses may be observed differently by different processes. • The set of read and write operations between two synchronization operations is consistent within each process. In this model, if several processes merely want to read from the shared memory, all of them may hold the synchronization variable at the same time; if a process wants to write to the shared memory, however, no other process can obtain the synchronization variable until that write has terminated. The behavior of a process being served by the synchronization variable is expressed by eq. 6, whose characteristic element is the synchronization variable LS: to execute any operation O, whether a read or a write, on the replica, possession of the synchronization variable is necessary, and the variable becomes available only after the current write has terminated and the variable is free. In this model the local replica updates the other replicas only after the final changes are complete; what sets this model apart from the other types of consistency is that it tolerates consistency violations in the intervals between two successive updates. Considering how the processes obtain the synchronization variable, the behavior of this consistency model can be expressed by Rule 6: for any operation on the memory, a violation occurs in the data store if the process has not received the synchronization variable. When a process asks server S_i to read the value b without holding the synchronization variable, the returned value may not be valid. Fig.6 depicts the behavior of this consistency model (the behavior of the processes on the data items under weak consistency). The red points in Fig.6 indicate the release of the synchronization variable after an operation on the replica in server S_i terminates. The figure emphasizes that the synchronization variable can be shared by multiple processes simultaneously during read operations, whereas while a process holds the variable for a write, no other process can access it. If a process such as P_5 releases the variable after a read and then wants to read a again without re-acquiring it, the value of a it reads is not valid and the shared memory exhibits a violation of weak consistency.
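A minimal sketch of the synchronization-variable idea is given below, assuming a single master copy and per-process working copies; the class names are hypothetical. Between synchronizations a process may read stale data, which is exactly the tolerated inconsistency window described above.

```python
class SharedMemory:
    def __init__(self):
        self.master = {}              # authoritative copy of all data items

class Process:
    """Under weak consistency, a process only sees other processes' writes
    after it synchronizes on the synchronization variable LS."""

    def __init__(self, shared):
        self.shared = shared
        self.local = {}               # possibly stale working copy
        self.dirty = {}               # local writes not yet made visible

    def write(self, item, value):
        self.local[item] = value
        self.dirty[item] = value

    def read(self, item):
        return self.local.get(item)   # may be stale between synchronizations

    def synchronize(self):
        """Models acquiring LS: publish own writes, then refresh the local copy."""
        self.shared.master.update(self.dirty)
        self.dirty.clear()
        self.local = dict(self.shared.master)

mem = SharedMemory()
p1, p2 = Process(mem), Process(mem)
p1.write("x", "b"); p1.synchronize()   # b is now in the master copy
print(p2.read("x"))                    # None -- p2 has not synchronized; stale view tolerated
p2.synchronize()
print(p2.read("x"))                    # 'b'  -- consistent again after touching LS
```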
Release consistency Release consistency is one of the weak data-centric models; it was proposed in 1990 by K. Gharachorloo [18]. This model again uses a lock, i.e., a synchronization variable, and access to it proceeds in two steps. In the first step the process requests the lock in order to access the memory and waits until it can access the replica stored there; after receiving the lock, denoted L in Table 1, the lock is set on all the replicas in the memory, and if the lock is not held by the process during execution, the obtained results are not valid. In the second step, the lock received by the requesting process is released, and the updated values of the replica are sent to the replicas on the other servers so that their values are brought up to date [14]. The problem with weak consistency is that, at the time the synchronization variable is accessed, the distributed shared memory has no idea whether the intended operation on the replica is a read or a write; release consistency solves this by distinguishing the type of operation at lock acquisition and at lock release. The model is not guaranteed for geo-distributed systems with high availability [70]. One application of this model is coarse-grained communication and coherence in virtual shared-memory systems, with error handling for extensive shared communication [71]. Figure 7 shows the behavior of the processes on the data items under the release consistency model, including a failure case. Rule 7 states the behavior of the processes in the shared memory under release consistency: if a process has obtained the synchronization variable, its operation is valid; by receiving the synchronization variable, the process fetches the replica it requires and updates it. A read operation, like a write, must obtain the synchronization variable, and only by holding it does the read return a valid value; without the synchronization variable, the written or read value is not valid. The conditions under which release consistency is executed are as follows: • The process must successfully acquire the lock on the shared memory before performing read or write operations. • Before releasing the lock on the memory, the process holding the lock must have terminated its read or write operations. • Access to the synchronization variable must follow the first-in, first-out consistency model. The behavior of the processes in the shared memory is sketched in Fig.7. In this model, the identity of the process that has received the lock is recorded, and after the operation terminates it releases the lock and its bookkeeping is removed. If a process such as P_4 does not obtain the synchronization variable, the memory faces a violation of release consistency. Lazy release consistency The challenge in release consistency is that, after a write operation terminates and the synchronization variable is released, all changes made to the data in the replica must be propagated to the other replicas in the memory. If the update is applied to all replicas, some of them may not actually need it; propagating the update then increases the overhead and the performance of the model becomes inefficient. Lazy release consistency is an extension of release consistency proposed in 1992 by P. Keleher [16] to improve its efficiency: the update reaches another replica only when that replica really needs it, and if necessary a message is sent to the replicas whose data have already been modified. Timestamps are a great help in determining whether the data is obsolete or corresponds to the latest update.
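The contrast between eager and lazy propagation can be sketched as follows; this is a simplified, single-node illustration under assumed class names, not a real distributed shared-memory protocol, and it only models how writes are published at release or acquire time.

```python
import threading

class ReleaseConsistentStore:
    """Eager release consistency: updates made inside an acquire/release
    section are pushed to every replica when the lock is released."""

    def __init__(self, replicas):
        self.lock = threading.Lock()   # the synchronization variable L
        self.replicas = replicas       # list of dicts, one per server
        self.pending = {}

    def acquire(self):
        self.lock.acquire()

    def write(self, item, value):
        assert self.lock.locked(), "writes without the lock are invalid"
        self.pending[item] = value

    def release(self):
        for replica in self.replicas:  # eager propagation to all replicas
            replica.update(self.pending)
        self.pending.clear()
        self.lock.release()

class LazyReleaseStore(ReleaseConsistentStore):
    """Lazy release consistency: release() only records the updates; they are
    shipped only to the replica where the lock is next acquired."""

    def release(self):
        self.deferred = dict(self.pending)
        self.pending.clear()
        self.lock.release()

    def acquire(self, replica_index=0):
        super().acquire()
        if getattr(self, "deferred", None):
            self.replicas[replica_index].update(self.deferred)   # update only where needed
            self.deferred = {}

r1, r2 = {}, {}
s = ReleaseConsistentStore([r1, r2])
s.acquire(); s.write("x", "b"); s.release()
print(r1, r2)   # {'x': 'b'} {'x': 'b'} -- both replicas updated at release time
```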
Entry consistency The entry consistency model, another variant of the weak consistency models, was proposed in 1991 by Bershad [19]. Like release consistency it uses a lock, with the difference that it associates a lock with each individual data item; one of the difficulties in implementing entry consistency is deciding which data belongs to which synchronization variable. By receiving the lock, access to the data item stored in the replica is granted and any type of operation (read or write) can be applied to that data item. After a write operation terminates, the lock is released as in release consistency; after a read, however, the lock need not be released, since the process may still need it to execute further operations on that data item. Performing a read from the memory without holding the lock yields an invalid value. This is the weakest data-centric consistency model, because while the lock is released no consistency is enforced on the other replicas; only the process holding the synchronization lock applies consistency to the replica in the shared memory [68]. Rule 8 states the behavior of the entry consistency model: a write operation first acquires the lock associated with the particular data item in the replica of the shared memory and releases it after the write terminates. If a process acquires the lock of the data item corresponding to that replica before executing a read, the read returns a valid value; when a process does not hold the lock corresponding to the desired data item, the memory faces a violation of entry consistency. To better understand how this model works, Fig.8 illustrates its behavior in interaction with the shared memory. As shown in Fig.8, when a process has access to the shared memory on another server and requests a read from the replica, the replica is first brought up to date; once the lock has been received by the process, the read operation is carried out. In short, for any operation, even a read, the process must receive the lock in order to be entitled to read valid values, and the important point is that a lock acquired for a read does not have to be released when the read terminates.
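A minimal sketch of the per-data-item locking idea follows; it is an illustrative simplification (the real model distinguishes exclusive and non-exclusive acquisition, and a read lock need not be released), and all names are assumptions made for this example.

```python
class EntryConsistentStore:
    """Entry consistency: every data item has its own lock; acquiring the lock
    for item x pulls the latest value of x only, and any read or write of x
    without holding that lock yields an invalid (possibly stale) result."""

    def __init__(self, master):
        self.master = master        # authoritative copies, one entry per item
        self.local = {}             # this process's cached values
        self.held = set()           # item locks currently held by this process

    def acquire(self, item):
        self.held.add(item)
        if item in self.master:     # synchronize only the guarded item, not the whole store
            self.local[item] = self.master[item]

    def release(self, item):
        if item in self.local:
            self.master[item] = self.local[item]   # publish the guarded item
        self.held.discard(item)

    def write(self, item, value):
        if item not in self.held:
            raise RuntimeError("entry consistency violated: lock for %r not held" % item)
        self.local[item] = value

    def read(self, item):
        if item not in self.held:
            raise RuntimeError("entry consistency violated: lock for %r not held" % item)
        return self.local.get(item)

master = {"x": "a"}
p = EntryConsistentStore(master)
p.acquire("x"); p.write("x", "b"); p.release("x")
print(master["x"])   # 'b' -- only x was synchronized; other items are untouched
```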
Eventual consistency This consistency model takes a novel view of certain categories of distributed systems in which, thanks to the near absence of concurrent updates, the update process is rather easy. A great number of inconsistencies can be neglected by this model in a relatively inexpensive way. The level of consistency among the processes, and the confidence placed in it, varies in this model. Most processes seldom update the replicas, so only a small number of processes perform updates; on this basis, the only case that must be analyzed more carefully is the read/write conflict, in which one process wants to update a data item while another process wants to read the same data item simultaneously [68]. In this case the update is performed in a lazy fashion, as in the lazy release consistency model [16], and the system tolerates a high level of inconsistency. If no new updates arrive for a long period of time, the system or the replicas gradually become consistent [57,62]. Storage systems such as Cassandra [72], Google Bigtable [73], and Amazon Dynamo provide such guarantees for large-scale services; Amazon Dynamo, for example, chooses eventual consistency [74], in which each data item in the replicas is synchronized gradually, which decreases the synchronization overhead. Under the eventual consistency model [63], in the absence of new updates for specific data, clients receive the data from the nearest place where it is reachable and read the latest available update; a read across multiple objects, however, may return a combination of old and new values [32]. Most cloud storage systems rely on some form of eventual convergence [75]. Thanks to its catch-all phase [11], the eventual consistency model exhibits convergence for some of the replicas [32], and by using a lazy approach it ensures that all replicas converge in the same way over time. Using asynchronous, lazy replication, this model shows better performance and faster access to the data [1]; however, it does not guarantee the same order of updates [58]. Epidemic replication is often applied to implement this consistency model [76]. Write operations performed through the server are recorded in a file on the storage system, the overall order of operations is acknowledged with a stamp, and the identifiers and servers are then publicized [77]. In a system with eventual consistency, the server can use any admission and verification method for write operations [23]. Finally, monotonic read and read-your-writes consistency are the two desirable client-centric models in eventual consistency, although they are not always required by it [63]. Monotonic read consistency This consistency model is an example of a client-centric consistency model. When a replica requires data consistency, the latest changes to the data item are sent to it by the replica that holds them. The model ensures that if a process observes a value at a certain time, it will never see an earlier value afterwards (Figure 9 shows the behavior of the processes on the data items under monotonic read consistency). A typical example is a distributed e-mail database in which each user has a mailbox replicated on multiple machines: e-mails can be added to the mailbox anywhere, and only when a replica requires data consistency are those data sent to the demanding replica [68]. Each session determines the domain over which consistency is ensured, which may be monotonic read consistency; one of the criteria for ensuring it is that the client keeps the state of its previous session [78,58,47,79,24,80,1]. Sequential read operations reflect a large set of write operations [23], and if sequential reads are performed by a client, updated content is returned [57]. We express the behavior of this model in Rule 9 [28]: if a client has read a value from the memory, it will never subsequently see values written before that one. In other words, when the client C_i performs a read, a valid value is read from a replica on which all prior changes have already taken effect. Consequently, with respect to Rule 9, the read of b is valid when the client C_i, accessing the replica on server S_i, finds all prior changes applied to the object replica; otherwise, the shared memory violates monotonic read consistency. To better understand this model, its behavior is depicted in Fig.9: the client C_i can read the valid value b when, by accessing either of the replicas on servers S_1 and S_2, all prior operations have already been executed on the replica.
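The session-tracking intuition behind Rule 9 can be sketched as a client that remembers the freshest version it has observed and refuses any replica that would roll it back; the version scheme and class below are illustrative assumptions, tied to the mailbox example above.

```python
class MonotonicReadClient:
    """Monotonic reads: once this client has seen version v of an item, any
    later read must be served by a replica that has applied at least version v."""

    def __init__(self, replicas):
        self.replicas = replicas           # each replica: {item: (version, value)}
        self.seen = {}                     # item -> highest version read so far

    def read(self, item):
        for replica in self.replicas:
            version, value = replica.get(item, (0, None))
            if version >= self.seen.get(item, 0):
                self.seen[item] = version  # remember the freshest state observed
                return value
        raise RuntimeError("no reachable replica is fresh enough for a monotonic read")

# The mailbox example: replica s1 already holds the new mail (version 2),
# replica s2 still has the old state (version 1).
s1 = {"inbox": (2, ["old mail", "new mail"])}
s2 = {"inbox": (1, ["old mail"])}
client = MonotonicReadClient([s1, s2])
print(client.read("inbox"))               # served by s1; the session remembers version 2
client.replicas = [s2]                    # now only the stale replica is reachable
try:
    client.read("inbox")                  # would roll back to version 1 -> refused
except RuntimeError as e:
    print(e)
```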
Monotonic write consistency The monotonic write consistency model is another client-centric consistency model. In this model a write operation is propagated to all replicas of the memory in the correct order: a write by a process on data item x is acceptable only when the previous write by the same process has already been executed on that replica. A software library that must replace one or more functions in order to be updated is a typical example. The important point is that in monotonic write consistency the write operations are applied in the same order in which they were issued [68]. A write by a client on a replica is accepted once all previous writes by the same client have been recorded on that replica [57]. This consistency model propagates write operations according to the priorities between them [23] and ultimately requires that each write operation become visible in the order of its presentation; any ordering within transactions, such as Read Uncommitted, follows the priority specified by the global observer [58,47,79,80,1,81]. The behavior of monotonic write consistency according to Rule 10 is as follows: the write of b executes correctly when the write of a has been performed before it; in other words, the client C_i writes the valid value b on the replica of server S_i only if the write of a on the same replica has been done beforehand. As introduced in Fig.10, the client C_i performs a valid write of the value b on data item x if all previous write operations on that item have already been executed on server S_i. Read your write consistency In this model, a write performed by the client on data item x is always visible in the succeeding read operations of the same client on data item x; in other words, write operations are completed before the next read by the same client is performed. This consistency model can be characterized as follows: • It addresses situations where data access is prolonged over time (such as a password change). • It resembles monotonic read consistency, except that the consistency of the last read is determined by the latest write of the same client. In this model, the newly written value, rather than the previously written one, is read by the client [58,47,79,24,81]. The model requires each write to become visible in the order of execution; any ordering within transactions such as Read Uncommitted follows the priority specified by the global observer [58,47,79,80,1,81], and a read reflects the client's previous writes [23]. Web page updating is one of the applications of this model: if the editor and the browser are integrated in the same program, the contents of the cache become invalid when the page is updated, and the updated file is downloaded and displayed [68]; read-your-writes consistency can thus ensure correct editor and browser behavior. We describe the behavior of this model in Rule 11 [28]: the client C_i can read a valid value b when the write of a has previously been executed on the replica in server S_j, where b is the value validly written by the client C_i. To better understand this model, Fig.11 illustrates how it works: the client can read the new value b as a valid result only if, before the write of b by client C_1 on the data item in server S_i, all write operations on that particular data item have been applied to the version held by the server; otherwise, it encounters an inconsistency in the shared data repository.
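Read-your-writes (and, with ordered write ids, monotonic writes) can be enforced on the client side by remembering the ids of the session's own writes and accepting only replicas that have applied all of them. The sketch below is an illustrative assumption of how such a session guarantee could look, using the password-change example from above.

```python
class SessionClient:
    """Read-your-writes as a session guarantee: the client remembers the ids of
    its own writes and only reads from a replica whose applied set contains them.
    Issuing the ids in order also gives the monotonic-writes guarantee."""

    def __init__(self, name):
        self.name = name
        self.write_set = set()             # ids of writes issued in this session

    def write(self, replica, item, value):
        wid = (self.name, len(self.write_set))   # writes are issued in order
        replica["applied"].add(wid)
        replica["data"][item] = value
        self.write_set.add(wid)

    def read(self, replica, item):
        if not self.write_set <= replica["applied"]:
            raise RuntimeError("read-your-writes violated: replica misses this session's writes")
        return replica["data"].get(item)

r_fresh = {"applied": set(), "data": {}}
r_stale = {"applied": set(), "data": {}}
c = SessionClient("C1")
c.write(r_fresh, "password", "new-secret")   # e.g. a password change
print(c.read(r_fresh, "password"))           # 'new-secret' -- the client's own write is visible
try:
    c.read(r_stale, "password")              # this replica never saw the write -> refused
except RuntimeError as e:
    print(e)
```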
Write follows read consistency In write follows read consistency, also known as causal session consistency [28], the propagation of updates is based on the latest read operation: the client can write to data item x only when the latest value of data item x has been read by the same client. For instance, on Twitter a client can post a retweet only of a post that he or she has already seen. Through the read operation, this model verifies the writes previously observed by the client and thereby validates the new write [57]. The model is related to the happened-before relation [13] proposed by Lamport [47,79,80,1,81]. Rule 12 shows the behavior of this model. As can be seen in Fig.12, if the client C_i reads the value a of data item x from server S_j, on which that value was written, and then writes the new value b to data item x on server S_j, then write follows read consistency operates correctly. Novel consistency models Over the years, the requirements of distributed systems, and of cloud environments in particular, have changed. In the presence of potentially malicious cloud systems, necessities such as security and reliability, or the need for greater convergence of operations, have become apparent. With this in mind, researchers have introduced new consistency models to address these needs and challenges; they are presented in the remainder of this section. Fork consistency This is a client-centric consistency model that expresses the strongest notion of data integrity achievable without online trusted servers and mutually trusting clients. If the service prevents a client from viewing another client's update, then the two clients will never again see each other's updates. The model was originally introduced for file systems that conceal clients' operations from each other: if the clients are divided into two groups, neither group can see the operations of the other [53]. The decision whether to use accessibility or peer-to-peer communication as the partitioning criterion has a great impact on the consistency of the model [29]. Fork consistency increases the concurrency of operations on shared data in distributed systems and was introduced to protect client information from malicious systems; it targets untrusted storage systems whose clients are not in direct contact with each other. Under fork consistency, when clients observe an operation, they share the same view of the previously executed operations, and when a client reads a value written by another client, the writer's write consistency is guaranteed for the reader. SUNDR is a secure network file system protocol for untrusted storage whose purpose is to ensure fork consistency [69,82]. This consistency model is guaranteed when the following three properties are implemented by the data store [83] (a small sketch of the resulting guarantee follows this list). • Function verification: each healthy client sends the server a list of its correct operations, stamped with the issue time. • Self-consistency: every healthy client sees all of its own previous operations; for example, in a file system the client always sees its own writes. • No-join property: each list is the result of the correct operations of a healthy client, and the other clients share the same view of that healthy client's operations.
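The practical consequence of these properties can be illustrated with a small, hypothetical detector: every client records the sequence of operation ids the server has shown it, and two honest clients' views must be prefix-related; once they are not, the clients have been forked and, by the no-join property, can never be shown each other's later updates again. This is only an intuition-level sketch, not the SUNDR protocol.

```python
class ForkDetector:
    """Fork consistency intuition: compare the operation histories shown to two clients.
    If neither view is a prefix of the other, the (possibly malicious) server has
    forked the clients into separate histories."""

    @staticmethod
    def is_prefix(a, b):
        return len(a) <= len(b) and b[:len(a)] == a

    @classmethod
    def forked(cls, view1, view2):
        return not (cls.is_prefix(view1, view2) or cls.is_prefix(view2, view1))

# The server shows client 1 operations [op1, op2, op3] but hides op3 from
# client 2 and instead appends op4 only to client 2's view: the views diverge.
v1 = ["op1", "op2", "op3"]
v2 = ["op1", "op2", "op4"]
print(ForkDetector.forked(v1, v2))               # True  -- the two clients have been forked
print(ForkDetector.forked(v1, ["op1", "op2"]))   # False -- one view is merely behind the other
```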
Fork consistency is thus defined by three properties; if the third property is tightened, in order to increase the precision of system inspection, into the join-at-most-once property, the model becomes fork* consistency (FC*). As stated before, the advantage of this variant is the increased precision of system inspection, and it is likewise used for storage systems with untrusted servers [83]. Among the applications of this model, SPORC is presented as a framework that combines the benefits of fork* consistency with operational transformation: it operates without locks and automatically resolves the conflicts arising from concurrent operations [31]. Another variant of the fork consistency model is fork sequential consistency (FSC) [84], a hybrid consistency model. When an operation is viewed, directly or indirectly, by multiple clients, all of them see the entire set of previous operations according to their event history [62]; for example, when a client reads a value written by another client, the reader can be sure of the consistency of the value written by the writer. In this model the clients fully agree on the order in which the operations are performed; however, FSC does not guarantee that the operations of all clients are atomic. When, in addition to the order of the operations performed by healthy clients on a healthy server, the atomicity of the operations must also be guaranteed, another hybrid consistency, called fork linearizable consistency (FLC) [82], is used. When the server is determined to be correct, this model guarantees the atomicity of all client operations on the server [69], which means that all clients view the other clients' operations in a consistent and uniform manner. If the server is correct and reliable, the service must ensure linearizability, and if it is not trustworthy, it ensures fork consistency; only when timestamps are used is this protocol applied correctly to concurrent operations. If the server is correct, the protocol executes all operations serially, in lock-step, and does not allow them to proceed concurrently. The verification of integrity and consistency for object storage system (VICOS) is a lever for ensuring fork-linearizable consistency for application-oriented object storage; in this case the storage system should be transparent [85]. A cloud storage system can be well adapted to the linearizability model, but it does not tolerate any malicious or faulty behavior from the client or the server. Depot is a storage system with minimal trust for which the fork-join-causal consistency model has been presented [12]; it ensures the two previously introduced properties in the presence of malicious nodes and keeps the updates consistent with respect to durability, readability, and recoverability. Fork-join-causal consistency is also called view-fork-join-causal consistency [86]; this model minimizes the number of forks that must be admitted and offers the strongest availability and semantic convergence of the data in the presence of malicious nodes. View consistency In systems with a distributed proof structure, the information-propagation policy would require thorough knowledge of the complete structure of each proof tree, and such knowledge cannot be expected for every proof tree.
This model ensures consistency for data that are certified by certificate authorities, and with it, consistency constraints can be enforced in proof-tree systems. In such systems it is not feasible to expose the complete details of a proof when issuing privileges and evidence; this kind of distributed proof system is similar to those used in pervasive computing and sensor-network environments. View consistency is one of the data-centric consistency models [30,87]. Before looking at how the model operates, a precise definition of a view must be given. A view is a set of observations of real-world entities containing at most one pair for each identity, i.e., pairs of the form <id, e>; the observation of an entity e is defined in the system as a set of local observations. Since each observation is only local, a detailed snapshot of the whole system state is unlikely, and the consistency level of these observations is therefore of great importance. A system with view consistency is consistent if and only if it has a valid view of the state of the stored data over specified intervals. Three levels of view consistency, related to distributed proof protocols, are described for use in the system [30], and a fourth level has been added to them [87]. • Incremental consistency: the most elementary definition of view consistency, and the one most frequently used. Under incremental view consistency, every fact used during the construction of a proof tree must be valid at some point in the interval between the construction of the proof tree and the receipt of the request submitted by the applicant; in that case incremental consistency holds. According to Theorem 1, the Minami-Kotz distributed proof protocol always provides incremental consistency when authorization policies are evaluated; in fact, the existing distributed proof-construction protocols use incremental view consistency when making decisions about privileges. This can lead to various safety violations, because incremental consistency does not ensure that the validity intervals recorded in the system overlap. • Query consistency: the next level of view consistency, in which all the facts used to create the distributed proof must be valid simultaneously during the query that triggers the proof creation. If an authorization policy is satisfied under query view consistency, then it is also satisfied in an environment that constructs the distributed proof using a centralized proof framework with support for transactional evaluation. • Interval consistency: another level of view consistency, in which the model is completely accurate if and only if every fact entering the proof is certified as valid throughout the interval between two precise points in time. Nevertheless, some researchers have introduced a fourth level in order to execute this model more accurately [87]. • Sliding window consistency: although interval consistency has in some cases been defined as the strong form, there are situations where the observations repeatedly alternate between consistent and inconsistent states, leading to repeated interruptions of service.
Multi-dimensional consistency

Multi-dimensional consistency [88] is a data-centric consistency that introduces a unit called the "conit" as the unit of consistency, together with a three-dimensional vector. The model bounds deviations from the linearizable model, such as order (sequential) errors and numerical errors. The numerical error, however, is often difficult to enforce and, by definition, overlaps with the staleness and order errors [47]; still, none of these errors can be disregarded in this consistency [68]. Multi-dimensional consistency is a model for distributed services. TACT is a middleware layer that enforces continuous consistency among the replicas, based on the conit unit and the three features mentioned above [88].

VFC3 consistency

Nowadays, dependence on the data stored in cloud data centers has surged dramatically all over the world. Different replication protocols are applied in order to achieve high availability and performance and to guarantee consistency between the replicas. With the traditional models, performance may decay as consistency is enforced; therefore, most large-scale data centers accept a weakening of consistency in exchange for lower delays for the end users. Generally, the level of consistency is reduced by cloud-based systems that allow stale data, chosen at random, to stay in memory for fixed periods of time; moreover, the behaviour of such systems ignores the data semantics. Because of this behaviour, the need to combine strong and weak levels of consistency is felt strongly. In order to reconcile consistency with availability, the VFC3 consistency model was proposed [38]. This model is used to replicate data in data centers, packaged as a library framework, in order to raise the consistency level; it is based on a three-dimensional vector of time, sequence, and value associated with the data objects. Each dimension is a scalar that bounds the maximum divergence allowed by the consistency constraints; a small sketch after these bullets illustrates how such bounds can be checked. The model provides consistency by considering the following dimensions:

• Time dimension: denotes the maximum time for which a replica may remain without the latest value.

• Sequence dimension: denotes the maximum number of updates that an object may receive before the replicas have to be brought up to date.

• Value dimension: represents the maximum proportional difference between the data content of a replica and a reference value.
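Both the conit-based and the VFC3 approaches come down to comparing a per-object divergence vector with per-dimension bounds and forcing synchronization as soon as any bound is exceeded. The sketch below illustrates that idea only; the field names and the thresholds are assumptions made for the example and do not reproduce the actual TACT or VFC3 interfaces.

    from dataclasses import dataclass

    @dataclass
    class Divergence:
        staleness: float      # seconds since this replica last saw the newest value
        missed_updates: int   # updates applied elsewhere but not yet seen here
        value_error: float    # relative difference between replica and newest value

    @dataclass
    class Bounds:
        max_staleness: float
        max_missed_updates: int
        max_value_error: float

    def must_synchronize(d: Divergence, b: Bounds) -> bool:
        # A replica must synchronize as soon as any dimension exceeds its bound.
        return (d.staleness > b.max_staleness
                or d.missed_updates > b.max_missed_updates
                or d.value_error > b.max_value_error)

    # Example: tolerate 5 s of staleness, 3 missed updates, and 1% value deviation.
    bounds = Bounds(max_staleness=5.0, max_missed_updates=3, max_value_error=0.01)
    print(must_synchronize(Divergence(2.0, 1, 0.002), bounds))  # False: within bounds
    print(must_synchronize(Divergence(2.0, 4, 0.002), bounds))  # True: too many missed updates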
Timed consistency

Sequence and time are two different dimensions of consistency for shared objects in distributed, and especially cloud-based, systems. One way to avoid inconsistency between operations is an effective and fast mechanism for making the effects of an operation visible in the system. The sequential and causal consistency models do not take the actual time of the executed operations into consideration. Timed consistency [76] is a data-centric consistency model: if an operation is executed at time t, its effects must be visible to the other nodes by time t + Δ. In some cases this model is also referred to as delta consistency. The model combines ordering constraints with a bound on staleness; in other words, the time interval expires and the system reaches a steady state, meaning that the minimum guarantee of the chosen consistency model is delivered within a fixed period Δt. If a replica fails to synchronize during that period, the desired item is not accessible until the replica becomes consistent again. This model is frequently used to support service level agreements and to make the trade-off between consistency and availability more transparent [47]. The sequential and causal consistencies mentioned earlier are the two models for which timed hybrids have been presented. In the timed sequential consistency model, time takes part in determining the order in which operations are executed: if operation e1 is executed at time t and operation e2 at time t + δ, then all nodes or processes observe the operations in that execution-time order [89]. As mentioned before, timed causal consistency is another hybrid model obtained by applying timing to causal consistency: using the time of the operation events, causal consistency is enforced with more validity, so that whenever operation e1 is executed at time t and causes operation e2 at time t + δ, the causal relation e1 → e2 is respected [27].
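As a small illustration of the timed guarantee, the sketch below checks, for a log of writes and of the times at which each replica observed them, whether every write became visible everywhere within Δ. The data layout is an assumption made for the example, not a format used by the cited work.

    def delta_consistent(writes, observations, delta):
        # writes: {op_id: write_time}
        # observations: {op_id: {replica: time_seen}}
        # Every observed write must be seen no later than write_time + delta.
        for op_id, t_write in writes.items():
            for replica, t_seen in observations.get(op_id, {}).items():
                if t_seen > t_write + delta:
                    return False
        return True

    writes = {"w1": 10.0, "w2": 12.0}
    observations = {"w1": {"r1": 10.5, "r2": 11.0},
                    "w2": {"r1": 12.2, "r2": 15.0}}
    print(delta_consistent(writes, observations, delta=2.0))  # False: r2 saw w2 too late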
Coherence model

This model is one of the data-centric consistency models and guarantees an ordering of operations [47,68]: under causal consistency, for instance, two updates on two different data items issued by one client have to be executed in the correct serial order. A storage system reaches a steady state only when all replicas agree on all data items, and reaching such a state is not feasible for large data stores, since coordinating updates on a great number of servers within a short period of time is extremely hard. For the sake of scalability, the consistency model is therefore usually enforced per key of the data store. The resulting models are called eventual coherence, causal coherence, and sequential coherence.

Adaptable Consistency

Adaptable, or rationing, consistency [90] is applied once the data items have been clustered according to their importance; for example, online stores handle credit cards with this model by dividing the data items into types A, B, and C. While the A and C data items respectively use the linearizable and eventual consistency models, the consistency of the B data items is adjusted continuously on the basis of an inconsistency cost function: whenever the cost of inconsistency exceeds the cost of unavailability or high latency, linearizable consistency is applied to data items of type B [47]. Cloud-based rationing consistency has been implemented, for instance, through the GARF library [91]. A self-adaptive variant of this model [92] selects the consistency level in the cloud environment based on the consistency cost [93] and allows the client to specify the maximum stale-read rate or the consistency cost in the Service Level Agreement (SLA) [47]. Depending on the type of cost and of data item, the model thus provides different levels of consistency. RedBlue consistency, like adaptable consistency, provides two distinct levels of consistency, but based on the type of operation [47,11,94].

RedBlue Consistency

RedBlue consistency [11,94] was presented to increase replication speed in distributed systems; the gain in speed means that whenever a client sends a request to the server, it receives its response within a short period of time. Eventual consistency [63] processes local operations quickly by reducing synchronization among the nodes or sites; in contrast, linearizable and sequential consistency involve a high rate of communication during synchronization, which limits how fast operations can be processed. RedBlue consistency divides the operations into two sets, red and blue, according to how they must be executed: the execution order of blue operations may differ from one site to another, whereas red operations must be executed in the same order at all sites. The model consists of two parts: (1) a RedBlue order, which specifies the ordering of the operations, and (2) a set of site-local serializations of causally related operations. The causal relation records the dependencies of the operations at the originating site and guarantees that they are respected when the operations are replayed at the other sites. By definition, the operations of a given site are processed locally: every operation labelled red is executed in a serializable order [95], while every operation labelled blue is handled as under eventual consistency [12,32,96]. The labels thus specify the category of each operation, and executing the model in this way ensures that the application invariants are not violated and that all replicas eventually converge. Since reordering blue operations must not change the outcome, a method was introduced to enlarge the space of operations that can be executed as blue by splitting each operation into two phases, a generator operation and a shadow operation. The generator operation merely identifies the changes made by the original operation, and in this phase the blue and red parts are recognized; the shadow operations then apply the identified changes and are replicated at all sites, each shadow operation being exclusively blue or red.
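A minimal sketch of the red/blue idea is given below. It is illustrative only: the bank-account example and the function names are assumptions, not code from the cited systems. Commutative shadow operations can be labelled blue and applied in any per-site order, while the balance-checking withdrawal must stay red.

    # Shadow operations produced by the generator phase: each records only the
    # change to apply, so that blue shadows commute across sites.
    def deposit_shadow(amount):
        return ("blue", lambda balance: balance + amount)

    def accrue_interest_shadow(balance_at_generation, rate):
        # The interest amount is fixed by the generator, so the shadow is a plain
        # addition, which commutes and can therefore be labelled blue.
        return ("blue", lambda balance: balance + balance_at_generation * rate)

    def withdraw_shadow(amount, balance_at_generation):
        if balance_at_generation < amount:
            return ("red", lambda balance: balance)        # rejected withdrawal
        return ("red", lambda balance: balance - amount)   # can break the non-negative
                                                           # balance invariant if
                                                           # reordered, hence red

    def apply_site(balance, shadows):
        # Red shadows must arrive in the same global order at every site;
        # blue shadows may be interleaved differently from site to site.
        for _, op in shadows:
            balance = op(balance)
        return balance

    site_1 = [deposit_shadow(50), withdraw_shadow(30, 100), accrue_interest_shadow(100, 0.1)]
    site_2 = [accrue_interest_shadow(100, 0.1), deposit_shadow(50), withdraw_shadow(30, 100)]
    print(apply_site(100, site_1), apply_site(100, site_2))  # both sites converge to 130.0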
Challenges and issues

Consistency models such as linearizable, fork-linearizable, fork-join-causal, causal, and causal+ consistency have been proposed by researchers to cope with different challenges. As Fig. 13 shows, a number of consistency-related challenges in distributed systems can be identified. In particular, linearizability is used to increase fault tolerance and reliability by applying protocols such as wait-freedom, two-phase commit, quorum-based replication, and Hybris. The trade-off between consistency and availability is the most important of these challenges, and a great number of studies have addressed it in recent years; after that, reliability, cost, and security in distributed systems have received the most attention. In our research we propose a new classification that discusses the challenges covered by the consistency models. Data consistency increases security and user trust in the system; placing a replica at the point nearest to the client is expected to reduce latency and response time and therefore improves access to the data. Data-centric consistencies that use partial replication protocols not only increase scalability and availability but also reduce the energy consumed. There is also a trade-off between consistency, violations, and staleness: raising the consistency level reduces the other two. When the distributed system is cloud-based, Quality of Service (QoS) and the Service Level Agreement (SLA) become challenges as well, and a decrease in violations and staleness translates into an increase in QoS in the cloud.

Reliability and fault tolerance

In order to increase reliability, data are replicated in the distributed system. Fault tolerance can take several forms, such as using other replicas when the local replica fails, keeping several replicas as protection against corrupted data [68], tolerating read aborts and data deletion by a malicious client [26], and tolerating failures or losses of messages sent from the server to the client or vice versa [97]. GARF is an object-oriented system that proposes the Slow, Parm, Causal, and weak consistency classes in order to increase the reliability of the system [91]. Message-passing systems reach agreement on two thirds of the replica values in order to increase fault tolerance [97]. Researchers have, however, also proposed systems with stronger consistency, e.g., linearizability. Linearizable consistency built on the two-phase-commit, wait-freedom, quorum-based, and Hybris protocols behaves as follows:

• With two-phase commit, access to the erasure-coded data by a malicious client, and its deletion, are prevented [26].

• With wait-freedom, read or write operations on the erasure-coded data are prevented in heterogeneous systems with malicious servers or clients [98,99].

• With quorum-based replication, a consensus must be reached on the data held by the replicas in order to increase fault tolerance [49] (this rule is illustrated in the sketch below).

• With Hybris, linearizability is provided even though the public cloud itself only offers eventual consistency [100]. Hybris also tolerates loss of connectivity to the cloud, and it coherently replicates the metadata of the public cloud in a private cloud.

However, the performance of these protocols under Byzantine Fault Tolerance (BFT) with more than f faulty replicas is not acceptable. To address this deficiency, BFT2F, an extension of BFT, was proposed in order to retain linearizable behaviour within BFT; this algorithm behaves more gracefully even when more than f (up to 2f) replicas are faulty, prevents malicious servers from responding to client requests at will, and exposes the inconsistencies in the system [101,102].
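The quorum rule mentioned in the list above can be shown in a few lines. The sketch below uses invented, in-memory "replicas" rather than the protocol of any cited system: it writes to and reads from a majority of N replicas, so that every read quorum intersects every write quorum and therefore returns the latest committed value.

    import random

    N = 5                      # number of replicas
    MAJORITY = N // 2 + 1      # any two majorities share at least one replica

    replicas = [{"version": 0, "value": None} for _ in range(N)]

    def quorum_write(version, value):
        # In a real protocol the write fails if fewer than MAJORITY replicas answer.
        for r in random.sample(replicas, MAJORITY):
            r["version"], r["value"] = version, value

    def quorum_read():
        # Reading a majority guarantees that at least one answering replica
        # has seen the most recent majority write.
        answers = random.sample(replicas, MAJORITY)
        newest = max(answers, key=lambda r: r["version"])
        return newest["version"], newest["value"]

    quorum_write(1, "x=1")
    quorum_write(2, "x=2")
    print(quorum_read())   # always (2, 'x=2'), whichever majority happens to answer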
The Shared Cloud-backed File System (SCFS) provides strong consistency together with POSIX semantics on top of clouds that themselves use eventual consistency, and it is able to distinguish between malicious and benign Cloud Service Providers (CSPs) [103]. A framework based on Fork* has been provided to communicate with faulty servers and to tolerate slow or lost network connections, resolving the resulting conflicts by means of Fork* [31]. In contrast, Eiger is a geographically distributed storage system for which causal consistency is provided in order to tolerate network partitions between the data centers [42]. In recent years, DARE, a high-performance state-machine replication system, has been used to withstand the complete loss of network connections [61].

Performance and availability

Placing a copy of the data near the process or client that uses it improves performance in distributed systems [68,33]. GARF, by proposing the three classes Atomic, Sequential, and CAtomic, provides strong consistency criteria that also satisfy availability [91]. The trade-off between consistency and availability means that a system can increase data availability during certain time intervals by tolerating inconsistency; with respect to this trade-off, weak consistency over structured log-file replicas results in increased availability [104]. Timed consistency is likewise a weak consistency that defines a threshold for the access time, the access time being the reasonable time within which data should be reachable [89]. The behaviour of a system differs according to the types of consistency applied in it. TACT is a middleware for distributed systems in which the numerical error, staleness, and order deviations are the criteria, chosen per application, of the conit-based continuous consistency used to improve system performance [88]. In the composable consistency model, by contrast, five criteria (concurrency, consistency, availability, visibility, and isolation) are presented, whose combination yields the desired consistency and improves the behaviour of the system [10]. Arbitrary, per-application consistency with partial replication, a mechanism that stores or replicates the data on selected storage nodes, likewise improves the performance of the system [8]. High availability can also be achieved by sacrificing consistency: ZENO, a BFT state-machine replication protocol, replaces linearizability with eventual consistency in order to achieve high availability; as long as network connectivity holds, it serves client requests after reaching agreement among the replicas on the latest update [105]. The View-Fork-Join-Causal (VFJC) consistency was presented, with respect to the Consistency-Availability-Convergence (CAC) theorem, in order to obtain both convergence and availability of the replicas [86]. Eiger, using causal consistency, gives clients easy access to the data [42]. Compared with causal and VFJC consistency, however, the causal+ consistency of the ChainReaction storage system achieves high availability under network partitions and increases convergence among the replicas [106]. Causal consistency, in terms of the Consistency-Availability-Partition tolerance (CAP) theorem, withstands network partitioning while preserving availability [53,107,108]. The PACELC formulation [109], derived from the CAP theorem, makes the trade-off explicit: in case of network partitioning the system must trade consistency against availability, and otherwise latency against consistency [110,70]. In recent years the RedBlue consistency described earlier has been applied here as well, dividing the operations into local (blue) and global (red) subcategories; eventual consistency with lazy replication is enforced for the blue operations so that clients have faster access to the data in their local operations [11]. The daily growth in client demand for highly available data has led researchers to propose eventual consistency with high availability [53,107,108,63], and, more recently, an eventual consistency with convergence and liveness properties has been proposed for the JSON data model and for storage systems with high availability [111,112].
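Causal and causal+ systems of the kind discussed above typically attach dependency metadata to each write and delay applying a remote write until its dependencies are locally visible. The sketch below shows that idea in its simplest form; the data structures are assumptions made for the example and do not reproduce the metadata formats of Eiger or ChainReaction.

    class CausalReplica:
        def __init__(self):
            self.store = {}      # key -> (value, version)
            self.pending = []    # remote writes whose dependencies are not yet visible

        def local_write(self, key, value, version, deps):
            # deps: list of (key, version) pairs this write causally depends on.
            self.store[key] = (value, version)
            return {"key": key, "value": value, "version": version, "deps": deps}

        def deps_satisfied(self, deps):
            return all(k in self.store and self.store[k][1] >= v for k, v in deps)

        def apply_remote(self, write):
            # Apply a replicated write only once everything it depends on is visible.
            self.pending.append(write)
            progress = True
            while progress:
                progress = False
                for w in list(self.pending):
                    if self.deps_satisfied(w["deps"]):
                        self.store[w["key"]] = (w["value"], w["version"])
                        self.pending.remove(w)
                        progress = True

    a, b = CausalReplica(), CausalReplica()
    w1 = a.local_write("photo", "beach.jpg", 1, deps=[])
    w2 = a.local_write("comment", "nice!", 1, deps=[("photo", 1)])
    b.apply_remote(w2)               # arrives first, but is buffered
    print("comment" in b.store)      # False: its dependency is not visible yet
    b.apply_remote(w1)
    print("comment" in b.store)      # True: both writes applied in causal order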
Scalability

Scalability is one of the most important concerns for researchers in distributed systems. It can be viewed from different angles, such as growth in the number of servers, data centers, clients and their requests, or expansion of the geographical area covered [68]. Growth in size occurs as soon as a great number of processes need access to the data stored on a server. In recent years researchers have analysed the trade-off between scalability and consistency in geographically distributed systems, so that inconsistency is avoided as the system scales out. Cassandra is a scalable distributed storage system that offers multiple consistency levels [72]. Other storage systems, such as COPS and ChainReaction, are scalable in terms of the number of servers per cluster but provide only causal+ consistency [106,32]; besides being scalable, COPS also provides data availability and convergence among the replicas. In cloud computing, scalability appears as growth in the number of clients, servers, and CSPs, which prevents bottlenecks in the system [32,90,69,113,114]. Recently, a service named SATURN was proposed that provides partial replication and causal consistency and is highly scalable regardless of the number of clients and servers, the data partitioning, and locality [115].

Cost/Monetary Cost

There is a trade-off between consistency and cost. Data are replicated in distributed systems in order to increase performance, but update propagation is costly [68]. The cost parameter itself includes the bandwidth for data transmission among the replicas, data overhead, storage, and, for cloud services, monetary cost [63,90]. BISMAR considers the cost efficiency of the different consistency levels in Cassandra [93]. In recent years some researchers have used causal [116] as well as linearizable [103] consistency to reduce the storage space required in distributed systems. Fork-linearizable consistency [50], fork and serializable consistency in CloudProof [113], VFC3 [38], causal [42,114], causal+ [106], and linearizability with the Hybris protocol [100] are consistencies adopted to reduce bandwidth and network overhead. Explicit consistency is an extended eventual consistency presented to reduce the synchronization cost of conflicting operations and to optimize storage in geo-distributed systems [117].
Security

With the emergence of different distributed systems such as grid computing [118], cloud computing [119], edge computing, fog computing, and mobile and mobile-cloud computing [2,120], access to data by malicious servers and clients has become a vital challenge in distributed systems. Weak consistency, through suitable protocols and system design, has been used to increase the security of data access. Weak consistency has been proposed for (a) structured log files of the replicas, (b) correct servers that store the client operations together with their digital signatures, and (c) audit servers that inspect malicious servers [104]. Along with structured log files, Bayou's anti-entropy protocols for data transmission between replicas and its eventual consistency, supported by digital certificates and trusted delegates, were designed to ensure security over insecure wireless links or the Internet [77]. Over time, fork consistency and the variety of consistencies derived from it were introduced to raise the level of data security when dealing with malicious servers and clients. SUNDR is a distributed file system that, using fork consistency and digital signatures, automatically detects any misbehaviour on the server side [29] and ensures that clients can detect integrity and consistency failures as long as they see the file changes made by the other clients [82]. Other protocols, such as key-distribution and file-storage protocols, were developed to generalize fork consistency to encrypted distributed file systems and protect user data on untrusted servers [84]. In terms of performance, fork consistency has proven effective against malicious servers, but it hampers the system when the server is in fact correct. Researchers therefore looked for combinations of fork consistency with other types of consistency. Fork-linearizable and fork-sequential consistency are hybrids in which linearizable or sequential consistency is enforced when the server is correct, to protect the clients' data, while fork consistency applies when the server is malicious. The main disadvantage of these two hybrid models is that, even when the server is correct, they cannot execute the clients' operations in a wait-free manner [62]. Venus was introduced as a security service for interacting with untrusted servers and combines three consistency models in its service: it uses the fork-linearizable hybrid consistency against malicious servers and eventual consistency for high data availability [50]. Furthermore, one of the biggest advantages of fork-linearizable consistency is that when clients do not interact with each other and want to ensure the integrity and integration of the data they share on unreliable servers, it guarantees the operations of all clients on those servers [69]. Moreover, the fork-join-causal consistency of the Depot distributed system not only guarantees safety and liveness properties in the presence of faulty nodes, but also provides better data stability, durability, scalability, and availability by tolerating network partitions [12].

Staleness and violations

Applications determine in different ways which inconsistencies can be tolerated. By defining three independent dimensions, a general method for quantifying inconsistency was presented [88]: the deviation in numerical values between replicas and the deviation in the order of the update operations indicate the severity of violations, while the deviation observed by reads across replicas indicates stale reads. The distributed Depot system, using fork-join-causal consistency, not only guarantees the stability and durability of the data but also reduces stale reads through causal consistency while tolerating network partitions [12]. Harmony is a consistency approach proposed for Cassandra, which supports several consistency levels [92]; it belongs to the family of application-driven, self-adaptive consistency. The eventual consistency devised in Harmony has shown better performance in reducing stale-read time, and eventual consistency in general has shown better performance in reducing read staleness than some data-centric consistencies [121]. With indirect monitoring of consistency it is possible to calculate the optimal cost of enforcing consistency and to examine stale reads with respect to the consistency behaviour [80]. Local and global replica audits can likewise reduce the severity of violations and the stale-read rate, with local audits checking monotonic reads and read-your-writes and global audits checking causal consistency [24,122,123]. The stale-read rate and the severity of violations are two challenges that interact with cost, quality of service, and the service level agreement in distributed environments such as the cloud: reducing the staleness rate and the severity of violations reduces the cost of compensating for non-compliance [90,93] and can also reduce bandwidth consumption and data overhead in the network [114,55]. The service level agreement and quality of service are challenges introduced in recent years by offering consistency as a service; as stated earlier, the staleness and violation-severity axes used to quantify inconsistency indicate how well the promised service quality or service level agreement is met [24,122,123,41].

Quality of Service (QoS)

VFC3, exploiting the behaviour of cache memory, has improved QoS: replicating data and storing them in the cache of the node nearest to the client reduces the response time of client requests and the bandwidth consumed [38]. Furthermore, there is a trade-off between QoS, violations, and staleness, where a decrease in the latter two results in an increase in QoS [24,123,41].

Service Level Agreement (SLA)

With the emergence of cloud computing and the introduction of consistency as a cloud service by the CSPs, researchers were confronted with a new challenge, the SLA. Recently this challenge has been addressed on the basis of minimum response time [45]; in addition, an improvement in the QoS of consistency in the cloud yields the consistency level promised in the SLA [24,123,41].

Latency

In geographically distributed systems, several big data centers are concentrated at different spots all over the world. These large-scale data centers provide enough resources to serve many users, but the concentration of resources creates a gap between users and resources, which in turn causes network latency and jitter [120].
Researchers use replication to reduce data-access latency by keeping a local replica of the data at the location nearest to the client; this latency includes the duration of writes and reads, the transmission of messages between client and server, and ultimately the data-access time. Although eventual consistency is one of the weakest consistencies, it has lowered data-access, write, and read latencies. In addition, the Indigo middleware layer provides an extended eventual consistency, named explicit consistency, on top of the storage system [124]. Similar to this middleware layer, QUELEA was presented for distributed systems with eventual consistency; QUELEA is, however, a programming model able to capture consistency requirements at the granularity of the application, which results in reduced latency [125]. In distributed systems, full replication is applied to reduce the response time of client requests; yet by introducing causal consistency and partial replication, researchers have reduced not only the storage overhead and replica transfer but also the response latency of client requests [116]. In recent years the novel RedBlue consistency, besides its other benefits, plays the role of eventual consistency for the local (blue) operations of clients, reducing the time needed to access the data and to respond to client demand [117]. Finally, by disseminating data with intelligent metadata techniques and providing causal consistency, SATURN ensures increased throughput and reduced latency for reads of updated data [115].

Energy consumption

In recent years, owing to the growing power hunger of servers, Internet-based service providers such as Google, Microsoft, and Amazon have faced extremely high energy costs. The trade-off between energy consumption and consistency is thus a new challenge that has attracted a lot of interest among researchers: to reduce energy consumption, consistency has to be sacrificed. For instance, Caelus, designed for battery-powered devices, uses both causal and eventual consistency and thereby extends battery life [114]; and with the hot-and-cold replication technique combined with causal consistency, power consumption has been minimized in distributed storage systems [126].

Conclusion

With the emergence of distributed computing systems and of replication within them, alongside the concepts of scalability and availability, consistency models have been proposed. Our purpose has been to concentrate on the consistency models proposed for distributed systems. Researchers divide these models into the two groups of data-centric and client-centric models. In this paper we first gave an introduction to the main categories of consistency models, divided into data-centric, client-centric, and hybrid models, where the hybrid models are combinations of the different consistency models presented in this paper. We also showed the contributions of the different consistency levels in distributed systems. Subsequently, the traditional, extended, and novel consistencies were presented on the basis of the previously mentioned subcategories of consistency models, and the behaviour of the proposed consistency models in each of these subgroups was explained in detail. To give full insight into the detailed behaviour of the conventional data-centric and client-centric models, we also described them in mathematical terms. By providing a new categorization of the consistency models and of the novel distributed storage systems over time, we found that modern distributed systems still require consistency. We analysed challenges in distributed systems such as reliability, availability, latency, and others, as well as the trade-offs between these challenges and consistency. We conclude from this study that, given the variety of consistencies offered in distributed systems and their urgent need for consistency, consistency can be introduced as a service in these systems.
272046039
pes2o/s2orc
v3-fos-license
Mapping women's role in small scale fisheries value chain in India for fisheries sustainability

Sustainability in small scale fisheries is receiving wider acceptance worldwide as the system faces different kinds of exploitation. Gender can play a significant role in achieving sustainability, as women and men are the primary beneficiaries in small scale fisheries. Exploring their level of participation in resource use can provide a database that functions as a key determinant for sustainability. This article looks for empirical evidence on the role of men and women in small scale fisheries through gender structure analysis. The indigenous communities (n = 154) in Vazhachal Forest Division, Kerala, a southern state in India, are considered for the study. Methods adopted include a household survey using a semi-structured questionnaire, transect walks, focus groups and direct observations. Results reveal that although men form the higher percentage (66.20%), women's role is substantial (33.80%) in the fisheries value chain, including the pre-harvest, harvest and post-harvest sectors. Their presence had a significant relation in supporting men in fisheries activities such as collection of baits (χ2 = 6.189, p = 0.013), accompanying men in fishing (χ2 = 4.153, p = 0.042), sorting of fishes (χ2 = 3.566, p = 0.059), processing of fishes (χ2 = 9.776, p = 0.002) and mending of nets (χ2 = 4.40, p = 0.042). Results further reveal that men and women have unique and overlapping roles in small scale fisheries. The key findings of the study provide quantitative evidence to develop strategies for small scale fisheries sustainability.

INTRODUCTION

Small scale fisheries offer multiple benefits to their resource users, such as poverty alleviation, food and nutritional security and livelihood (FAO, 2017). They are closely associated with the lives of the community and their tradition. Studies indicate that in rural communities both men and women perform fishing, particularly in inland fisheries, on a part-time or seasonal basis, thereby making an actual contribution towards subsistence or economic benefit (FAO, 2013; Matthews et al. 2017). Their knowledge of the ecosystem forms an important element of sustainability (Berkes, 2012), since small scale fish producers have direct access to resources. However, there is a lack of scientific evidence detailing the relation between resource use and gender (Leach, 2007; Jonson, 2014). It is important to consider the contribution of men and women in natural resource management (De la Torre-Castro et al. 2017), as management strategies may not be successful if they fail to appreciate the association between gender and environment, and also if there is a lack of equal participation of resource users in resolution mechanisms (Leach, 2007). Analysing the interaction between men and women in a social setting facilitates gendered differentiation and equity. It reflects social cohesion (Jackson, 1994) and the managerial role played by gender (Sprague, 2005). In small scale fisheries, men and women are differently placed in the value chain: men are usually entrusted with the task of fishing, while women are entrusted with processing and marketing (FAO, 2013). This differentiation leads to unequal labour distribution and differential labour efficiency (World Fish Center, 2010).
The contribution of women in fisheries is undervalued in official statistics, as there is gender bias in policies (World Bank, FAO and World Fish Center, 2010; FAO, 2013). Traditional laws and customs further limit women's access to fisheries resources and to decision-making processes. This results in their vulnerability and marginalization (Porter, 2006; FAO, 2007; Okali and Holvoet, 2007). These scenarios urge the need for gender appraisal in the fisheries value chain. With this background, the article looks into the empirical evidence related to the participation of indigenous men and women in small scale fisheries in India, where forest and river resources form an important source of subsistence for them. Indigenous communities in India are officially known as Scheduled Tribes (ST's); 635 ST's are identified all over India. Among these, some groups have low development indices and experience population decline. They are classified as Particularly Vulnerable Tribal Groups (PVTG's) and need special attention at the management level. Economically and socially, ST's are the least advanced; hence they experience deprivation and vulnerability (Mehta and Shah, 2003; Kasi, 2011). Considering their fragile nature and the chances of exploitation, the Government of India has laid down rules and regulations for the protection of ST's. Tribal development boards are in force at state and regional level for coordinating and implementing development programmes and monitoring the welfare of the tribes. In line with this, it is important that their dependence on resources is analysed, as this provides data for resource management and sustainability.

MATERIALS AND METHODS

To address the gap in recording the role of indigenous men and women in small scale fisheries, we adopted an exploratory study among ST's in Vazhachal forest division in Kerala, a southern state in India. They are distributed on the bank of the Chalakudy river that streams within the forest division. This is the 5th largest river in the state and is known for the presence of diverse endemic fishes; many of these fishes have ornamental value and high demand in the aquarium trade. The river lies between 10°5' and 10°35' N latitude and 76°15' and 76°55' E longitude. The catchment area of the river is 1704 sq. km, with an annual runoff of 3121 million m3 (Bachan, 2003). This topography, with favourable vegetation and climate, supports a rich biodiversity of flora and fauna; hence the tribes rely greatly on natural resource-based activities. The indigenous communities involved in the study are the Kadars and the Malayars. They reside in clusters of 10-50 houses known as colonies or settlements. The study was carried out in nine settlements, in which the Kadars are distributed in seven settlements (Malakkapara, Sholayar, Aanakkayam, Vachumaram, Perungalkuthu, Pokalapara and Vazhachal) and the Malayars in two settlements (Thavalakuzhipara and Vachumaram). Kadars are PVTG's, while Malayars are non-PVTG's. During the study the communities contained 300 households. These households engage in diverse livelihood activities, most of which are subsistence based; major livelihood activities include non-timber forest collection, crop farming, tourism and fishing. The tribes shift their livelihood activities depending on the seasonal availability of resources, and forest resources form the core source of livelihood. Fishing is related to weather conditions: it is mainly practised at the onset of the monsoon season (June), as the river dries up during summer and resources become scarce (Bachan, 2003). Since small scale fisheries is the focus, the study concentrated on tribes who pursue single or multiple livelihood activities but engage in fishing as well. Fisheries in this region are multi-species, non-targeted and multi-geared. A mixed-method approach was followed in this study, with multiple visits to the settlements between April 2019 and August 2019. It involved quantitative as well as qualitative methods, including surveys, interviews, focus group discussions and direct observations. The Harvard Analytical framework is used in this study, as it highlights the gendered division of labour at the household or community level (Okali, 2012); it assists in identifying the activities of men and women in fisheries and their dependency on similar assets (Harrison, 2000). In the following sub-sections, we describe the legal clearance for the study, the data collection method and the data analysis.
Legal clearance for conducting the study: Considering the vulnerable nature of the tribes, the Government of India has taken protective measures to safeguard them. To visit tribal settlements or to perform activities in tribal areas, special consent from the concerned authorities is mandatory. As a part of this study, the lead author applied for permission to the Directorate of the Scheduled Tribe Development Board, Trivandrum, Kerala, the central administration of ST's in the state. The sanction order received from the Directorate was then directed to the regional Tribal Office at Chalakudy, Thrissur district, Kerala, as the study area is under its purview. A Memorandum of Understanding (MoU) was signed between the lead author and the Tribal Development Officer, Chalakudy, on the agreement that the study would not disrupt the lives of the tribes or influence their culture and habitual life, and would follow government protocols. Similarly, permission from the forest department was also mandatory, since the study area lies within a forest area; this permission was received from the Divisional Forest Officer, Vazhachal Forest Division, Thrissur. The protocol refrains outsiders from staying at tribal settlements, and the visiting time to tribal areas was restricted to 10 am to 5 pm. From each settlement a member of the community was authorised as the facilitator, and the time of our entry into and exit from the tribal areas had to be recorded in the log book at the range office.

Data collection: The details regarding the tribal settlements were collected from the Chalakudy Divisional Forest Office, Kerala. Field trips were conducted to familiarise the researchers with the study area. A pre-tested semi-structured questionnaire was formulated based on a pilot study, and a household survey of 154 tribal people selected through random sampling was performed. Questions in the survey focussed on fishing frequency, gears operated, mode of fishing and fish resource utilisation; these data gave insights into the similarities and differences in fishing activities between men and women. This was followed by focus group discussions with 5-7 members. Different participatory rural appraisal (PRA) tools were used to describe their fisheries, such as transect walks, which are observatory walks to understand the nature and composition of the settlement and identify the resources, and resource maps to describe the nature of the fishing grounds. The participants also demonstrated the mechanism of gear operation and their indigenous methods of fishing with plates and cloths; this ensured the active participation of the respondents. Verbal communication was promoted in the discussions instead of written methods, since the tribal communities in this study have a low literacy level. The regional language (Malayalam) was used to communicate with the respondents, and the facilitator helped to translate the queries into the tribal dialect. Verbal responses were recorded in a structured recording sheet by the lead author.
Data Analysis: The collected data were analysed using the Statistical Package for the Social Sciences (SPSS), version 23. The fishing nature, fishing mode and fishing frequency of men and women were summarised with simple descriptive statistics. The chi-square test of independence (Pearson's chi-square test) was carried out to statistically analyse the interdependence of the variables, followed by a strength statistic, Cramer's V; this is one of the most useful statistics applicable to nominal data, and p < 0.05 was considered the level of significance. The categorical variables analysed with the chi-square test include the fishing pattern of men and women, and the participation of women of the two communities in bait collection, accompanying men for fishing, sorting of fishes, fish processing and net mending. Garrett ranking analysis was done to identify the preferences in catch utilisation and the most preferred gear of men and women. In this method the respondents are asked to rank the variables, and the assigned rank is converted to a score value by the following formula:

Percent position = 100 (Rij - 0.5) / Nj

where:
Rij = rank assigned to the i-th variable by the j-th respondent
Nj = number of variables ranked by the j-th respondent

The percent position calculated was then converted into scores by comparison with Garrett's table. For each factor, the scores of the respondents are added and the total score and average score are estimated; the variable with the highest average score signifies the most preferred factor.
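The Garrett ranking calculation described above can be summarised in a few lines of code. The sketch below is illustrative only: the ranks are invented and Garrett's published conversion table is approximated by a small lookup rather than reproduced.

    def percent_position(rank, n_ranked):
        # Garrett's percent position for one rank given by one respondent.
        return 100.0 * (rank - 0.5) / n_ranked

    def garrett_score(pp):
        # Stand-in for Garrett's conversion table (a real analysis reads the table).
        table = {10.0: 75, 30.0: 60, 50.0: 50, 70.0: 40, 90.0: 25}
        return table[min(table, key=lambda k: abs(k - pp))]

    # Invented ranks given by four respondents to five gears (1 = most preferred).
    ranks = {"gill net": [1, 1, 2, 1], "hooks and lines": [2, 2, 1, 2],
             "cast net": [3, 4, 3, 3], "traps": [4, 3, 5, 4], "others": [5, 5, 4, 5]}

    averages = {gear: sum(garrett_score(percent_position(r, 5)) for r in rs) / len(rs)
                for gear, rs in ranks.items()}
    print(max(averages, key=averages.get))  # the gear with the highest mean score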
Among the respondents, women in the age group of 26-35 years (34.6%) and men above 45 years (33.3%) predominated. Educational status shows that the majority of women were illiterate (63.5%), while 44.1% of the men were literate with primary education.

Fishing pattern of tribal fishers: To know the fishing pattern of men and women, it was categorised as casual, full time, part time and seasonal (Table 2). Descriptive statistics show that a greater number of women preferred fishing as a casual activity (75%), while most men preferred it as a part-time (52%) or seasonal (36.3%) activity. Significant variation was recorded in the fishing pattern between men and women (χ2 = 90.40, df = 3, p = 0.00).

Gears used by tribal fishers: Multiple gears were operated by the tribal people for fishing. Depending on its operation, the fishing method was categorised as active or passive. Active mode represents the gears that can be moved or dragged, like hooks and lines, while passive gears represent those that are placed stable for a fixed period before retrieval, like gill nets. The majority of women (25.32%) opted for the active mode of fishing, while the majority of men (29.22%) preferred both active and passive modes; in total, 35.06% of the tribal fishers preferred both categories. Garrett ranking analysis was done to identify the most preferred gear among these individuals. Of the various gears operated (Table 3), gill net was the most preferred gear among tribal fishers (highest mean score of 71.94), and the second most preferred gear was lines and hooks (mean score 62.87). Gears categorised as "others" indicate miscellaneous activities, including fishing by indigenous methods. A commonly practised indigenous method among the communities was fishing using a plate and cloth: a cloth with a small hole at the centre is wrapped over a plate and placed in shallow water with baits (small balls of a rice and coconut oil mixture) in it. Small fishes enter the plate through the hole for feeding; once the fishes aggregate in the plate, it is lifted to collect the fish. The fishes caught by this means were either used as fish bait or for household consumption. It is interesting to note that catch utilisation is gendered, as the fishing activity of men strongly supported income generation, while the catch of women favoured the food security of the household (mean score 74.84). Factors such as the place of fishing, the gears used and the time of fishing have an influence on the catch size. Women usually preferred fishing in places adjacent to their houses, or in shallow waters in the morning hours, with hooks and lines; once they obtain a catch of sufficient size for consumption, the activity comes to a halt. On the contrary, men explore distant places for fishing. Since the major gear used by them is the gill net, which needs to be kept undisturbed for a fixed period before retrieval, it is usually operated in the evening. Nets are hauled early in the morning, and the fishes are separated from the nets and sold at cooperative societies, where the weight of fish in kilograms and the rate at which the fishes are sold are recorded. Fig. 1 shows the channel of supply of fishes caught by the tribes. In addition to household consumption and sales, fishes were also utilised for miscellaneous purposes such as medicinal and customary uses, which scored the third and fourth ranks respectively in the Garrett ranking. The study noted that selective fish species were used for medicinal purposes; Anguilla bicolor is one of the preferred species with medicinal value and is particularly valued for treating asthma. The study also recorded an important finding: despite the river being known for the presence of indigenous ornamental fishes reported from the state, none of the respondents used the fishes for aquarium purposes or supplied the catch to exporters or suppliers of ornamental fishes.

Participation of indigenous women in small scale fisheries: Determining the ways women support men in fishing gives a clear understanding of their level of participation in small scale fisheries. Five variables were listed to compare women's activities at different stages of the value chain (pre-harvest, harvest and post-harvest) in the communities and their active involvement with men (Table 6). The first variable tested was women accompanying men in fishing. 68% of the respondents in the Malayar community and 45.73% of the respondents in the Kadar community were of the opinion that women accompanied men in fishing. As the majority of men go to places far off from their settlements, they use canoes for fishing expeditions; since the river streams intensely, especially during the monsoon season, balancing the canoes is a tough task, and in such instances women's participation has supported the men in paddling and balancing the canoes. Women also helped in operating the gears: since most men preferred the gill net, its efficient operation requires assistance, and women performed this function by knotting the gear in suitable places and dragging the nets. Chi-square results show women's participation in this activity to be significant (χ2 = 4.153, p = 0.042).
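The chi-square statistics and significance levels reported here and in the following paragraphs can be reproduced with standard statistical software. The short sketch below uses an invented 2x4 contingency table (sex by fishing pattern), not the study's raw data, and also computes Cramer's V as described in the data analysis section.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Invented counts: rows = men, women; columns = casual, full time, part time, seasonal.
    observed = np.array([[5, 3, 53, 37],
                         [39, 1, 8, 4]])

    chi2, p, dof, expected = chi2_contingency(observed)

    n = observed.sum()
    cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}, Cramer's V = {cramers_v:.2f}")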
The second factor considered was the involvement of women in bait collection. A major proportion of women (64%) from the Malayar community and 37.20% of women from the Kadar community showed their presence in bait collection. Fishing in shallow waters by scooping or by indigenous methods is a common practice observed among women in the settlements. Since bait collection is an important pre-harvesting activity in fishing, the effort of women in this activity has reduced the time spent by men on bait collection. Chi-square analysis confirms this view, as a significant effect (χ2 = 6.189, p = 0.013) was observed for the presence of women supporting men in bait collection in the two communities. The third factor considered was women's participation in fish sorting. 43.41% of women in the Kadar community and 64% of women in the Malayar community were involved in this activity. No significant relation was present for the role of women in sorting the fishes between the communities (χ2 = 3.566, p = 0.059). This could be because, once the gears are knotted at selected fishing sites, they are hauled after a specific time interval, mostly at dawn; women do not usually accompany the men for this task. Fishes are sorted from the nets by the men and taken for marketing, so the role of women in this process is insignificant. The fourth variable considered was women's role in fish processing, a post-harvesting activity in fisheries. Fish processing was not commonly observed among the tribes; however, among the participants who practised fish processing, a higher proportion was observed in the Malayar community (44%) than in the Kadar community. A significant association was found for the presence of women in fish processing between the communities (χ2 = 9.776, p = 0.002). An open-ended query followed to understand the ways in which fishes were processed. Salt drying was the common method noted among the interviewees, in which fishes are cleaned and sun-dried on rocks after applying salt; another method followed was drying with turmeric powder and salt, and apart from this, smoking was also practised. Dried fishes fetched a higher amount than the fresh products; however, least preference was given to commercializing processed fish, owing to the time and effort required to perform this task. It was usually used for family consumption and seldom sold in markets. The fifth factor was the role of women in net mending. 41.86% of the Kadar interviewees and 64% of the Malayar interviewees indicated that women participated in mending nets, and their presence in this activity was significant (χ2 = 4.140, p = 0.042). This helped the communities to lower the expenditure on buying and repairing nets.
DISCUSSION

Livelihood diversification is a common phenomenon among indigenous people, as it supports and sustains their life (Kalafatic, 2004). The indigenous communities (ST's) in this study show a similar pattern, being involved in multiple livelihoods including small scale fisheries. The geographic distribution and the ecosystem values lead them to depend on natural resource-based activities, and the river flowing near the settlements is advantageous for the communities to practise fishing. The results show that both men and women in the indigenous communities participate in small scale fisheries. The difference observed in their fishing pattern can be attributed to the diverse livelihood opportunities available to them and to the physical characteristics of the river. Fish resource availability depends on seasonal variation: during summer the water level in the river may decline or the river may dry up (Bachan, 2003), so fish resources become scarce and the communities divert to other livelihood strategies. This signifies how the dependence on ecosystem services varies with seasonal change (Geheb and Binns, 2001). Various other studies have recorded diversification adding to livelihood security, especially in rural communities (Ellis, 2000; Kalafatic, 2004), and non-timber products playing a substantial role in the livelihood of indigenous groups, especially for women (Neumann and Hirsch, 2000; Shillington, 2002). The excess catch of the fishers is marketed through a cooperative society that functions within the forest division. It is graded as local self-government, where the duties are entrusted to the tribal people. It guarantees a platform for marketing their products and saves the time and capital needed to market the products in far-away places. The financial returns to the fishers are assured through this marketing, which also guarantees employment opportunities and food security to the communities (Dubey et al. 2009; Sahoo et al. 2020). The difference in the resource use pattern observed between men and women signifies that it is gendered. This may be due to the variation in catch per effort between them, which in turn is influenced by various other factors such as the gears used for fishing and the place of fishing. Among the multiple gears operated by tribal fishers, the preference of men for the gill net and of women for hooks and lines can be attributed to the diversity of the space used by them and to the nature of the river. Men usually go to distant places for fishing, while women fish in shallow waters near their settlements. The river has a strong bedrock surface with deep crevices and pools and many rapids and falls, the best known being the Athirappilly falls (Biju et al. 2000), which hinder the operation of other gears. Women, for their part, diverge from men because of their household responsibilities. They engage in fisheries either to support the household or as a leisure activity with their children or community members; they move either individually or in groups to spend their time in the evening near their settlements. As they consider it recreational, the active mode of fishing using hooks and lines is preferred. They also consider it an opportunity for social bonding, which is one of the important factors that determine the value of life (Camfield et al. 2009). This validates the view that subsistence fishing is a common practice among rural inhabitants who have access to water bodies (Hoggarth et al. 1999; Hossain, 2001).
At certain circumstances women play the role beyond subsistence.It is evident from the supply of their excess catch to cooperative society thereby meeting the need of additional income in their household. It is interesting to note that among the fishes caught, selective fish species are used for medicinal purposes.Anguilla bicolour is one among the preferred species with medicinal value.It is particularly valuable to treat asthma and these communities have traditional way of preparing medicine.This may be due to the presence of vitamins such as A, C, D, B1, B2 and B6, fatty acids like Eicosapentaenoic acid and Docosahexaenoic acid and other minerals that this species is considered as medicinal value (Roos et al. 2007;Hakim et al. 2016).It is also important to note that fishes caught are not used for aquarium purpose. However, literature shows that this river is having ornamental potential fishes with wide acceptance in export consignments (Raghavan et al. 2008). There is a general concept that due to the prevailing household responsibilities and the differential influencing factors, women's role is concentrated mainly on the pre and post-harvest sector (World Fish Center, 2010;FAO, 2013;Szymkowiak and Rhodes-Reese, Sustainability, Agri, Food and Environmental Research, (ISSN: 0719-3726), 12(X), 2023: http://dx.doi.org/10.7770/safer-V12N1-art27892020).In most countries there is a lapse in tracking their actual contribution or categorising them as assistant to men.Due to these women are still marginalised, their contributions are undervalued in fisheries and quoted as invisible (Zhao et al. 2013;Pettersen, 2018;Salmi and Sonck-Rautio, 2018).Close examination of the functionalities of this sector reveals indigenous women's role in fisheries is substantial than generally hypothesized. They have direct as well as indirect contribution in fisheries that is revealed by the sex disaggregated data in this study.Women in the communities show direct participation in fishing expeditions with men.Cultural norms that restrict women in exploring the space for fishing (De la Torre-Castro et al. 2017;Frangoudes and Gerrard, 2019) are not observed among these groups; rather their participation is highly appreciated.The presence of women has contributed in paddling the canoes, knotting the gears and other assistance to men.A similar finding is seen in Lao PDR's Nam Ngum Reservoir where women accompany men for fishing in mechanized boats.While men concentrate on diving, women take part in controlling the boats, dragging nets and collecting fishes.(Viravongsa, 2000). The study also confirms women's participation in pre harvesting activities such as bait collection and net mending.The tribal fishers preferably use small fishes as fish bait. They collect these fishes either by scooping or by indigenous methods (plates and rice bowls), especially by women.This resembles the gleaning activities of women in coastal areas (World Fish Centre, 2010).Gleaning, the process of collecting marine organisms in coastal areas by rural people is an important livelihood activity and moreover a group activity among women (Grantham et al. 2020).There are economic as well as social perspectives in women participating in gleaning.In the case of women, they may have limited economic source to own the gears for fishing or limited access to space which draws them in gleaning in near shore (Kleiber et al. 2013) or else due to the importance given to the social values (Grantham et al. 
2020). Similarly, the women in the tribal communities prefer to follow indigenous methods of fishing. This is advantageous for them, as they do not need to invest much in buying gears. Also, this method requires a waiting period for fish aggregation, during which the plates are kept idle in shallow waters. This time can be used efficiently to perform household activities such as cooking or caring for their children or elders. This reflects the social commitments of indigenous women in the tribal setting. A major share of this collection is used as fish bait by the men. This effort by women has contributed significantly to reducing the time men spend on bait collection, which in turn lets them spend more time fishing. The indigenous women are also prominent in net mending. They perform this activity efficiently as required, repairing the damaged nets. Through this activity the need to procure new nets is reduced and reusing the product becomes possible. Women's participation in this sector is reported in other studies as well (Mercier, 2001; Lukanga, 2018; Szymkowiak, 2020). These activities need to be valued, as they support the spouse and the family's wellbeing. Women working in the processing sector is a common practice in developing countries (Ahmed et al. 1999; Kolawole et al. 2010). Indigenous women also show their presence in the post-harvest sector. However, contrary to the view that processing is a women-dominated activity while men handle the commercialisation of catches, this study shows that men also participate in fish processing. They consider it a family activity and perform it irrespective of gender, which indicates the social nature of the indigenous communities. An array of processing methods is used, such as salt drying, sun drying, smoking, and treating with turmeric powder. Many other factors also influence the variation in processing methods, such as consumer preference, labour, and the tolerance required for the process (Medard et al. 2002). Since an advantage of processed food is that it can be preserved and used at a later stage, this activity ensures the nutritional security of the communities. Processed fishes are seldom sold in markets, although processed fishes fetch higher prices on marketing than fresh fishes; tribal people refrain from selling them because of the time and effort required for this task.
As conclusion and implications, the findings highlight that indigenous women are as efficient as men in the tribal social setting, including in household management, food and nutritional security, and even economic security. Participation of women in fishing has contributed to household subsistence and to supporting men at various levels of the fishing value chain. This indicates that women play unique as well as complementary roles in fishing activities. Considering their central role in fisheries, the opinions of women need to be valued and adopted as variables in the fisheries management decision-making process. Policy makers can consider this evidence to develop gender action plans and strategies for sustainability in fisheries. The study also highlights that the indigenous communities are unaware of the ornamental value of the fishes they catch and of the demand for them in the aquarium trade. This is a research gap that can be filled by creating awareness of the importance of ornamental fishes in the aquarium industry and by capacity building on the sustainable collection of live fishes.

… topography with favourable vegetation and climate supports a rich biodiversity of flora and fauna. Hence the tribes rely greatly on natural resource-based activities. The indigenous communities involved in the study are the Kadars and Malayars. They reside in clusters of 10-50 houses known as colonies or settlements. The study was carried out in nine settlements, in which Kadars are distributed across seven settlements (Malakkapara, Sholayar, Aanakkayam, Vachumaram, Perungalkuthu, Pokalapara and Vazhachal) and Malayars across two settlements (Thavalakuzhipara and Vachumaram). Kadars are a PVTG (Particularly Vulnerable Tribal Group) while Malayars are not. During the study the communities comprised 300 households. These households engage in diverse livelihood activities, most of which are subsistence based. Major livelihood activities include non-timber forest product collection, crop farming, tourism and fishing. The tribes shift their livelihood activities depending on the seasonal availability of resources. Forest resources form the core source of livelihood.

Fig. 1. Channel of distribution of fish caught by tribal fishers.

Table 1. Characteristics of the settlements.

… (29.22%) preferred both active and passive modes of fishing. In total, 35.06% of tribal fishers preferred both categories. Garrett ranking analysis was done to identify the most preferred gear among these individuals. Of the various gears operated (Table 3), gill net was the most preferred gear among tribal fishers (highest mean score of 71.94). The second most preferred gear was lines and hooks (mean score 62.87). Gears categorised as "others" indicate miscellaneous activities, including fishing by indigenous methods. A commonly practised indigenous …

Table 3. Garrett ranking analysis on the preference of gear.

… to note that this aspect is gendered, as the fishing activity of men strongly supported income generation while the catch of women favoured the food security of the household (mean score 74.84). Factors such as place of fishing, gears used and time of fishing …

Table 4. Garrett ranking analysis on fish resource utilisation by tribal women.

Table 5. Garrett ranking analysis on fish resource utilisation by tribal men.

Cooperative societies mentioned in the study are institutions functioning with the participation of tribal people, who manage a business whose benefits are distributed based on use and ownership. In the study settlements, fishers sold their catch to the society at a minimum fixed price (150 INR/kg). The societies support the tribal people by providing fish-marketing services as well as social security. Outsiders, mainly hotel owners, participate in the auction, and the secretary in charge of the society maintains a log book of sales on a daily basis. The particulars recorded include the date of sale, the name of the customer, …

Table 6. Chi-square test on participation of women in the small-scale fishing value chain.
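The Garrett ranking and chi-square procedures referred to above are straightforward to script. Below is a minimal Python sketch using hypothetical rank and count data (none of the numbers come from the study): percent positions are computed with Garrett's formula, converted to scores through a placeholder lookup standing in for the published Garrett conversion table, and averaged per gear; a chi-square test of association is then run with scipy.

```python
import numpy as np
from scipy.stats import chi2_contingency

gears = ["gill net", "hooks and lines", "cast net", "others"]

# Hypothetical rankings: each row is one respondent, 1 = most preferred gear.
ranks = np.array([
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 4, 3],
    [1, 2, 4, 3],
    [2, 1, 3, 4],
])
n_items = ranks.shape[1]

# Garrett's percent position for rank j among N items: 100 * (j - 0.5) / N.
percent_position = 100.0 * (np.arange(1, n_items + 1) - 0.5) / n_items
print("percent positions:", percent_position)  # in practice, looked up in Garrett's table

# Placeholder Garrett scores for ranks 1..4 (real values come from the published table).
score_table = np.array([0, 75, 60, 40, 25])
scores = score_table[ranks]

# Mean Garrett score per gear; the highest mean identifies the most preferred gear.
mean_scores = scores.mean(axis=0)
for idx in np.argsort(-mean_scores):
    print(f"{gears[idx]:16s} mean score = {mean_scores[idx]:.2f}")

# Chi-square test of association, e.g. gender vs. participation in one
# value-chain activity (counts are again purely illustrative).
table = np.array([[30, 12],    # men:   participate / do not participate
                  [25, 10]])   # women: participate / do not participate
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```

The ordering of the printed mean scores reproduces the logic behind the gear-preference tables: higher mean Garrett score means stronger preference.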
2024-08-29T16:36:18.815Z
2023-01-25T00:00:00.000
{ "year": 2023, "sha1": "06d59279834cb57cbec5e58bddeaa71c31507513", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.7770/safer-v12n1-art2789", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6395c18526a895967b2cc64d1e8e6f5b60e03cc3", "s2fieldsofstudy": [ "Environmental Science", "Sociology" ], "extfieldsofstudy": [] }
239547174
pes2o/s2orc
v3-fos-license
Tropical land use drives endemic versus exotic ant communities in a global biodiversity hotspot Understanding how land-use change affects biodiversity is a fundamental step to develop effective conservation strategies in human-modified tropical landscapes. Here, we analyzed how land-use change through tropical small-scale agriculture affects endemic, exotic, and non-endemic native ant communities, focusing on vanilla landscapes in north-eastern Madagascar, a global biodiversity hotspot. First, we compared ant species richness and species composition across seven land-use types: old-growth forest, forest fragment, forest-derived vanilla agroforest, fallow-derived vanilla agroforest, woody fallow, herbaceous fallow, and rice paddy. Second, we assessed how environmental factors drive ant species richness in the agricultural matrix to identify management options that promote endemic and non-endemic native while controlling exotic ant species. We found that old-growth forest, forest fragment, and forest-derived vanilla agroforest supported the highest endemic ant species richness. Exotic ant species richness, by contrast, was lowest in old-growth forest but highest in herbaceous fallows, woody fallows, and rice paddy. Rice paddy had the lowest non-endemic native ant species richness. Ant species composition differed among land-use types, highlighting the uniqueness of old-growth forest in harboring endemic ant species which are more sensitive to disturbance. In the agricultural matrix, higher canopy closure and landscape forest cover were associated with an increase of endemic ant species richness but a decrease of exotic ant species richness. We conclude that preserving remnant forest fragments and promoting vanilla agroforests with a greater canopy closure in the agricultural matrix are important management strategies to complement the role of old-growth forests for endemic ant conservation in north-eastern Madagascar. Introduction Tropical forests harbor the highest proportion of biodiversity worldwide (Myers et al. 2000). However, they are decimated and fragmented due to human land use, often resulting in a mosaic landscape consisting of forest patches and various agricultural land-use systems (Grass et al. 2020). Although forest conversion and degradation constitute the major drivers of tropical biodiversity decline (Gibson et al. 2011), many species remain in the agricultural landscape (Belshaw and Bolton 1993). Assessing the implications of land-use change for biodiversity is, therefore, a fundamental step to develop effective conservation strategies for tropical agricultural landscapes and to identify sustainable land-use options that prevent further biodiversity loss. Ants are among the most abundant insect groups in terrestrial ecosystems (Passera and Aron 2005). Their diversity and endemism rate peak in tropical regions (Lach et al. 2010). Ants are involved in diverse ecosystem functions such as soil turnover (Folgarait 1998), seed dispersal (Retana et al. 2004), predation (Cerdá and Dejean 2011), and food provision for other animal groups like birds (Dean and Milton 2018). However, ants, in particular native ants, are sensitive to human land use, such as logging and forest conversion, which often reduces species richness and changes the species composition of ant communities (Dunn 2004;Delabie et al. 2007;Ottonetti et al. 2010). 
On the contrary, habitat disturbance generally promotes invasion of exotic ant species, which can be problematic for the native ant community, because of their competitive behavior on resources (Holway et al. 2002). In addition, the structure of ant community may also be driven by environmental factors such as canopy openness, vegetation structure, deadwood volume, and landscape forest cover (Luke et al. 2014;Solar et al. 2016). Analyzing how these factors influence species richness can thus help identify land-use management options that result in few exotic species to efficiently preserve the native (endemic) ant community. Madagascar, a global biodiversity hotspot, hosts more than 1200 ant species, with 93% of described species are endemic to the island, and 41 species are exotic (Myers et al. 2000;Fisher and Christian 2019). Forests are indispensable for the majority of Madagascar's endemic biodiversity (Irwin et al. 2010). However, deforestation through agricultural practices has led to a loss of 44% of Madagascar's forest cover between 1953 and 2014 (Clark 2012;Vieilledent et al. 2018). Today, protected areas constitute the main conservation strategy to protect Madagascar's remaining biodiversity (Gardner et al. 2018). However, little is known about conservation opportunities in the agricultural landscape outside protected areas. Exotic ants could be problematic for Madagascar's native ant community, as demonstrated in a previous study that found aggressive competition of exotic ant species towards native ant species (Dejean et al. 2010). Thus, conservation strategies need to include the management of exotic species to protect native ant species. North-eastern Madagascar holds proportionally more forest than other regions of the island. Nonetheless, these are also subject to transformation, mainly through slashand-burn agriculture for hill rice cultivation by smallholders (Zaehringer et al. 2015). Slash-and-burn agriculture consists of cutting down the forest or woody fallow land, burning the plant biomass, and cultivating rice. This is followed by a fallow period, after which an additional cycle of slash-and-burn cultivation can commence. Northeastern Madagascar also supplies the biggest share of vanilla to the global market (FAO 2020). The increase of global demand and the rise of vanilla prices between 2012 and 2019 has triggered an expansion of vanilla cultivation in the region, where forest or fallow land is transformed into vanilla agroforests (Llopis et al. 2019). Here, 1 3 the biodiversity value of vanilla agroforests may depend on land-use history, meaning whether the vanilla agroforest was established at the expense of forest (forest-derived), driving forest transformation, or whether it was planted on formerly fallow land (fallow-derived), rehabilitating formerly burned land (Martin et al. 2020a). However, this important distinction is hardly made in land-use research (Martin et al. 2020a), but research on plants (Osen et al. 2021;Raveloaritiana et al. 2021) and birds indicates differences in species richness and composition depending on land-use history, particularly for endemics. Overall, the landscape of north-eastern Madagascar is characterized by a mosaic of forest fragments, rice cultivation (irrigated and rainfed), vanilla agroforests, and fallow lands (Zaehringer et al. 2015). Vanilla agroforests are characterized by a combination of shade trees and Vanilla planifolia plants, leading to a structurally diverse habitat with medium to high canopy closure (Osen et al. 
2021). Woody and herbaceous fallows and rice paddy have significantly lower or no canopy closure and structurally and taxonomically simplified vegetation (Osen et al. 2021). Recent studies assessed the impact of land-use change on biodiversity in north-eastern Madagascar, highlighting the value of vanilla agroforests to complement forests for the conservation of vertebrates including birds and lemurs, plants, and ecosystem functions (Hending et al. 2018(Hending et al. , 2020Osen et al. 2021;Schwab et al. 2021;Raveloaritiana et al. 2021). Invertebrates, in particular ants, have only been studied in forest habitats rather than in the agricultural landscape. In this paper, we report how land-use change through smallholder agriculture affects endemic, exotic, and non-endemic native ant communities in north-eastern Madagascar. First, we compared ant species richness and species composition across seven land-use types: old-growth forest, forest fragment, forest-derived vanilla agroforest, fallow-derived vanilla agroforest, woody fallow, herbaceous fallow, and rice paddy. Second, we assessed how environmental factors drive ant species richness in the agricultural matrix to identify management options that promote endemic and nonendemic native ant species while controlling exotic ant species. Study region and sampling design We carried out our study in the central part of the SAVA region, in north-eastern Madagascar. The climate is warm and humid (UPDR 2003), with 2223 mm annual precipitation and 24.0 °C average temperature (average across 80 plots, data retrieved from CHELSA climatologies; Karger et al. 2017). The climax vegetation type is tropical rainforest (Baena et al. 2007). We studied seven land-use types: old-growth forest (10 replicates), forest fragment (10), forest-derived vanilla agroforest (10), fallow-derived vanilla agroforest (20), woody fallow (10), herbaceous fallow (10), and rice paddy (10). The description of each land-use type is found in Table 1. Overall, we had 80 plots distributed in ten villages and two old-growth forest sites in Marojejy National Park (Fig. 1). The minimum distance to neighboring plots was 719 m ± 438 m. The average plot elevation above sea level was 192 m ± 207 m. We standardized our plot size to a 25 m radius circle. Ant sampling and identification We sampled ants using bait and pitfall traps on all plots except the old-growth forest between October and December 2017, and in the old-growth forest between August and December 2018. In each plot, we established five sampling stations: one at the plot center, and four 16 m away from the plot center in each cardinal direction. At each sampling station, we set bait and pitfall traps 10 m apart. For bait traps, we put sardine and sugar as bait on two different white flat plastic supports (diameter of 13 cm) about 5 cm apart (illustration in Supplementary Material S1). We set the baits for 30 min and thereafter collected the specimens present on the white plastic support for 30 s. For pitfall traps, we buried a plastic cup (9 cm top diameter, 11 cm deep, 6 cm bottom diameter) in the soil with the opening at the same level as the ground surface. We then filled one-third of the plastic cup with 70% ethanol and few drops of soapy water. We left the pitfall traps active for 48 h. We preserved ant specimens in 70% ethanol for further identification. To identify ants to the genera level, we used the identification key from Fisher et al. (2016). We then identified ants to species level using identification keys (e.g. 
Fisher and Smith 2008;Bolton and Fisher 2014;Rakotonirina and Fisher 2014;Salata and Fisher 2020). If identification was unclear, we identified species as morphospecies. We stored voucher specimens at the Biodiversity Centre in Antananarivo, Madagascar, for reference and further taxonomical study. We assigned our ant species into different origin categories (AntWeb 2020): endemic, exotic, non-endemic native, and unknown ants. In total, we recorded 128 ant species/morphospecies (57 endemic species, 19 exotic species, 18 non-endemic native, and 34 species of unknown origin) distributed in 38 genera and seven subfamilies. In the present study, we excluded the unknown ants but included the results on this category in the supplementary materials (Supplementary Material S2-S4).

Fig. 1 a Location of the SAVA region within Madagascar, b SAVA region with the four districts Andapa, Antalaha, Sambava, Vohemar and the study area therein, c map of the study area with forest cover and triangles depicting the 10 study villages and 2 old-growth forest sites, and d our seven land-use types studied, summarizing the typical land-use change dynamic through slash-and-burn agriculture for hill rice cultivation in north-eastern Madagascar. Dashed arrows indicate possible, but prohibited, transformation of old-growth forest. Full arrows represent common transformation trajectories from one land use to another. Rice paddy is not part of the slash-and-burn cycle and represents the most intensive land use in the region.

Environmental parameters In each plot, we recorded canopy closure, tree species richness, stem density, understory vegetation cover, and lying deadwood volume as local environmental parameters. We counted and identified all trees (woody perennial plants) and tree-like plants (arborescent palms and ferns) with a diameter at breast height ≥ 8 cm, from which we extracted data on tree species richness and stem density (number of living stems per ha). To assess canopy closure, we set a Nikon D5100 camera with a fisheye lens and 180° field of view on a tripod at 2.4 m height. Then, we took series of hemispherical photographs. We processed the photographs with the "ImageJ" program (Rasband 2014) to create a binary image representing sky and vegetation and applied an automated thresholding technique following the protocol of Beckschäfer (2015). We derived gap fraction values and converted them to canopy closure values, to finally calculate the mean canopy closure percentage per plot (Osen et al. 2021). To estimate the understory vegetation cover of each plot, we adapted the protocol of Van Der Maarel (1966), taking into account the understory such as shrubs, saplings, and non-woody plants. We took photographs with a view from zero to three meters above the ground in each cardinal direction from the plot center. We divided each photograph into six 0.5 m layers and estimated the understory vegetation cover in % for each layer. Then, we averaged the understory vegetation cover value from all layers on all photographs to represent a single understory vegetation cover value (in %) for each plot (Schwab et al. 2021). We measured the diameter and length of all lying deadwood with a midpoint diameter ≥ 10 cm and length ≥ 100 cm. We excluded deadwood that was already decomposed into powder. We used the equation deadwood volume = (π × diameter² / 4) × length to calculate the volume of each lying deadwood piece (Rondeux et al. 2012).
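The volume calculation just described, together with the per-hectare upscaling mentioned at the start of the next paragraph, takes only a few lines of code. This is a minimal Python sketch with made-up field measurements; it assumes the "sample area" equals the full 25 m radius plot, which the paper does not state explicitly.

```python
import math

PLOT_RADIUS_M = 25.0                                   # 25 m radius circular plot
PLOT_AREA_HA = math.pi * PLOT_RADIUS_M ** 2 / 10_000   # m^2 converted to hectares

def piece_volume_m3(midpoint_diameter_m: float, length_m: float) -> float:
    """Volume of one lying deadwood piece: (pi * d^2 / 4) * length."""
    return (math.pi * midpoint_diameter_m ** 2 / 4.0) * length_m

# Hypothetical pieces on one plot (midpoint diameter >= 0.10 m, length >= 1.0 m).
pieces = [(0.12, 3.4), (0.25, 1.8), (0.10, 2.1)]

plot_volume = sum(piece_volume_m3(d, l) for d, l in pieces)
volume_per_ha = plot_volume / PLOT_AREA_HA

print(f"plot deadwood volume: {plot_volume:.3f} m^3")
print(f"upscaled:             {volume_per_ha:.2f} m^3 per ha")
```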
We summed up the volume of all lying deadwood pieces per sample area, then upscaled the volume of lying deadwood per hectare. As landscape parameters, we calculated the landscape forest cover percentage within a 250 m radius buffer surrounding our plots using 2017 landscape forest cover data (Vieilledent et al. 2018). Data analysis We combined data from bait and pitfall traps into a plot/species presence/absence matrix and used the R version 3.6.3 program for all analysis (R Core Team 2020). Comparison of ant species richness and composition To compare ant species richness between land-use types, we ran generalized linear mixed models (GLMMs) with ant species richness as the response variable, land-use types as the explanatory variable, and Poisson distribution as family, using the glmmTMB function of the glmmTMB package (Brooks et al. 2017). We set village and old-growth forest sites as a random factor to account for potential spatial autocorrelation. We used Tukey's honestly significant difference (Tukey HSD) with Bonferroni correction for pairwise comparison between land-use types. We performed a permutational multivariate analysis of variance (PERMANOVA, 999 permutations) using the adonis function of the package Vegan (Oksanen et al. 2018) and the pairwise.adonis function with False discovery rate correction of the package pair-wiseAdonis (Arbizu 2017) to assess differences in species composition among and between land-use types. Then, we performed a permutational multivariate analysis of dispersion (PERMDISP, 999 permutations) using the betadisp function of the package Vegan to test the homogeneity of dispersion among land-use types (Oksanen et al. 2018). A homogenous dispersion implies that the difference in species composition among land-use types from PERMANOVA is explained by the difference in location of the centroids. We found that heterogeneous dispersion significantly contributes to the differences in endemic 1 3 (PERMDISP, Df = 6, F = 5.07, p = 0.002) and exotic (PERMDISP, Df = 6, F = 21.65, p = 0.001) but not non-endemic (PERMDISP, Df = 6, F = 1.45, p = 0.204) ant species composition between our land-use types. We visualized the community structure of each landuse type using non-metric multidimensional scaling (NMDS). We used the dimcheckMDS function of the package goeveg to select any number of dimensions with a good stress value (stress < 0.2) for the NMDS (Goral and Schellenberg 2021). We used Jaccard dissimilarity distance for PERMANOVA, PERMDISP, and NMDS. Additionally, we used the upset function of the package UpSetR to visualize the number of unique and shared species between land-use types (Gehlenborg 2019). Environmental parameters as drivers of ant species richness Since we aimed to provide applied land management options for ant species conservation in the agricultural matrix, we excluded old-growth forest sites in our models when assessing the environmental parameters driving ant species richness. Additionally, we excluded rice paddy and herbaceous fallow in our model since they do not contain trees, therefore no canopy closure and deadwood, which substantially can affect our model results by driving too much the effect on species richness. Prior to building our models, we performed a correlation test between all explanatory variables to assess the possible effects of multicollinearity. In case the explanatory variables were strongly correlated (Spearman |r|> 0.7 (Dormann et al. 2013)), we kept only one of them. 
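The community-composition and multicollinearity steps described in this section were run in R (vegan, glmmTMB, UpSetR). Purely to illustrate the workflow, the sketch below re-creates two of them in Python with random placeholder data: a Jaccard dissimilarity matrix on presence/absence data followed by non-metric MDS (a stand-in for vegan's NMDS, not the authors' exact routine), and a Spearman correlation screen using the |r| > 0.7 threshold mentioned above.

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(42)

# Hypothetical plot x species presence/absence matrix (80 plots, 128 species).
presence = rng.integers(0, 2, size=(80, 128)).astype(bool)

# Jaccard dissimilarities between plots, as used for PERMANOVA, PERMDISP and NMDS.
dissim = squareform(pdist(presence, metric="jaccard"))

# Two-dimensional non-metric MDS on the precomputed dissimilarity matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(dissim)
print("NMDS coordinates:", coords.shape, "stress:", round(float(nmds.stress_), 3))

# Spearman correlation screen of environmental predictors (|r| > 0.7 -> keep only one).
env = pd.DataFrame({
    "canopy_closure": rng.uniform(0, 100, 75),
    "stem_density": rng.uniform(0, 1500, 75),
    "deadwood": rng.uniform(0, 50, 75),
    "forest_cover": rng.uniform(0, 100, 75),
})
corr = env.corr(method="spearman").abs()
high = [(a, b) for a in corr.columns for b in corr.columns
        if a < b and corr.loc[a, b] > 0.7]
print("strongly correlated pairs:", high)
```

With the random data above no pair exceeds the threshold; with real field data, strongly correlated predictors would be reduced to one, as the text describes.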
We found that canopy closure, stem density, and tree species richness were highly correlated (Supplementary Material S5). We retained canopy closure in our models because stem density and tree richness data were missing in two plots. With GLMMs, we set a full model with ant species richness as the response variable; canopy closure, deadwood, understory vegetation, and forest cover as explanatory variables; and village as a random factor. Since our response variable was count data, we first used the Poisson family for our models. We performed a stepwise regression using the step function to identify the best-fit models. We considered the models with the lowest AIC score to explain and discuss the response of ant species richness to environmental parameters. We tested for any misspecification problems in the models, such as over- or underdispersion, zero inflation, and quantile deviations, using the DHARMa package (Hartig 2020). We found underdispersion in our models of exotic ant species richness and corrected it by using the Conway-Maxwell-Poisson (COMPOIS) distribution (Huang 2017).

Discussion Our study showed that smallholder land use differentially affected endemic and exotic ant communities in north-eastern Madagascar, a global biodiversity hotspot. Endemic and non-endemic native ant species richness was not significantly affected by forest conversion to forest-derived vanilla agroforests, but endemic ant richness markedly decreased when forests were transformed via slash-and-burn practices to hill rice cultivation. On the contrary, forest transformation promoted exotic ants, with higher exotic species richness in land uses in the agricultural matrix compared to old-growth forest sites. In addition, we found that endemic ant species composition was different in the old-growth forest compared to all other land-use types, highlighting the uniqueness of old-growth forests for endemic ant species that are less resilient to disturbance. The exotic ant community in the old-growth forest was only a small subset of the exotic community in the agricultural matrix. With regard to vanilla, endemic, exotic, and non-endemic native ant species composition differed between forest- and fallow-derived vanilla agroforests. Finally, higher canopy closure and landscape forest cover were associated with higher endemic ant species richness but lower exotic ant species richness, suggesting an opportunity for management measures that conserve endemic ants as well as mitigate the invasion of exotic ants.

Figure caption (UpSet plot of unique and shared species): The simple dots without connecting lines represent the unique species found in the land-use type. As an example, the old-growth forest has 9 unique endemic species. Two or more dots connected by a line indicate a species shared between two or more land-use types, corresponding to the number on the bar in the upper part of the graph. For example, old-growth forest and forest fragment share: 1st connecting line = 4, 2nd = 2, 3rd = 5, 4th = 1, 5th = 2, 6th = 1, 7th = 1, 8th = 1, making up in total 17 shared species.

Ant species richness response to land-use change Although the transformation of forests to agroforests usually leads to habitat degradation and species loss (Martin et al. 2020a), our study showed that endemic ant species richness was not significantly affected by forest conversion to forest-derived vanilla agroforests.
This could be explained by the complex structure of forest-derived vanilla agroforests, which still retain a considerable proportion of the original forest structure (Osen et al. 2021), possibly providing diverse food resources and microhabitats for many endemic ant species. Our finding corroborates research showing that complex agroforests can harbor the same or even higher ant species richness compared to forests (Schroth et al. 2004;Philpott et al. 2008). In contrast, endemic ant species richness dropped by up to 90% when forests were transformed via slash-and-burn agriculture practices (from 9.6 mean species richness in the old-growth forest to 0.9 in herbaceous fallow). Even after years of fallow succession from herbaceous to woody fallows, the endemic ant species richness still did not recover to reach the species levels found in the forests. Slash-and-burn constitutes an extreme form of land conversion because it consists of a complete removal of the vegetation through burning (Styger et al. 2007). This potentially leads to a direct loss of microhabitats and resources for many endemic ant species, thus forcing them to find another suitable habitat to survive. Rice paddy is not part of the slash-and-burn cultivation cycle, but we included this landuse type in our study as it is very common in the study area and refers to permanent and intensive land use. Rice paddy is characterized by mostly inundated areas for rice cultivation and banks covered with herbaceous plants which are frequently walked, thereby limiting the availability of microhabitats such as leaf litter and deadwood, which are important nesting sites for many ant species (Queiroz et al. 2013). This could explain our finding that rice paddy exhibited the lowest endemic and non-endemic native ant species richness compared to the other land-use types. We provide evidence that exotic ant species are positively affected by human disturbance (Folgarait 1998;Rizali et al. 2010). We found that the old-growth forest had five times lower exotic ant species richness than herbaceous fallow. This could be because exotic ant species generally have a high capacity for adaptation and competition allowing them to co-occur with or dominate native ant species in disturbed habitats (Wetterer 2007). It is important to note that the invasion of exotic ant species could harm native (endemic) ant species (Holway et al. 2002). For example, in Indonesia, the presence of invasive yellow crazy ants, Anoplolepis gracilipes, reduced native ant species in cacao agroforests (Bos et al. 2008). Also, in Australia's monsoonal tropics, Pheidole megacephala, the socalled African big-headed ant, constitutes a big threat to the native ant community because of their aggressive behavior (Hoffmann et al. 1999). However, evidence of the impact of exotic ant species on Madagascar's native ant fauna is scarce. Only Dejean et al. (2010) found that the white-footed ant, Technomyrmex albipes, exhibited aggressive competition on food resources towards the native ant community. This suggests the need for further research on competition between exotic ants and native ant species in Madagascar. 3 land-use types derived from previous slash-and-burn cultivation (fallows and fallow-derived vanilla agroforest) in the agricultural matrix. This highlights the negative effect of past burning in endemic and non-endemic native ant species composition. 
Our findings are also in line with another study in the same system, highlighting the change in endemic and native herbaceous plant species composition due to slash-and-burn cultivation (Raveloaritiana et al. 2021). Besides, our study showed a clear distinction of the exotic ant species composition between unburned and previously burned land-use types and rice paddy. Here, we highlight that oldgrowth forest and forest fragment harbor only a small subset of the exotic ant community occurring in the agricultural matrix. Only two exotic species occurred in the old-growth forest, whereas six species from the exotic ant community in the agricultural matrix occurred in forest fragments. This finding provides evidence that old-growth forest is more resistant to the arrival of certain exotic species than the forest fragment. The susceptibility of the small remnant forest fragment could be due to their proximity to surrounding agricultural lands which serve as a gateway for exotic ant species (Assis et al. 2018). Management opportunities for endemic ant conservation in the agricultural matrix Identifying management practices that maintain endemic biodiversity in agricultural landscapes presents a great opportunity for biodiversity conservation (Kremen and Merenlender 2018). In our study, we found a contrasting effect of canopy closure and landscape forest cover on endemic and exotic ant species richness. This provides clear evidence that keeping higher canopy closure and maintaining a high landscape forest cover can conserve endemic ant species while reducing exotic ant species in the agricultural matrix. For vanilla agroforests specifically, a greater endemic ant species richness could be achieved by promoting trees in fallow-derived and maintaining trees in forest-derived vanilla agroforests. This offers a win-win opportunity for vanilla farmers and the conservation of endemic ant species as a high canopy in vanilla agroforests does not conflict with high vanilla yields (Martin et al. 2020b). Although we did not include tree species richness in our models, it is important to highlight that increasing tree species richness could positively affect ant species richness as shown by other studies (Ribas et al. 2003;Vasconcelos et al. 2019). This suggests that high tree diversity should be maintained in the agricultural matrix to benefit endemic ant species. Here, in particular, the maintenance of already existing forest-derived vanilla agroforests is recommended, as their tree species richness is more similar to forest fragments and old-growth forest than to the other studied land-use types (Osen et al. 2021). On the other hand, the positive influence of landscape forest cover on endemic species richness suggests that forest fragments could be a source of endemic ant species for the different land-use types in the agricultural matrix (Solar et al. 2016). A previous study also supports that a greater ant species richness within agricultural land can be boosted by large patches of adjacent forests (Lucey et al. 2014). Similarly, in our study, the forest fragment shared all of its endemic ant species with the land-use types in the agricultural matrix. This suggests that remnant forests embedded in the agricultural landscape in north-eastern Madagascar are key to preserve the endemic ant diversity in the agricultural matrix. Conclusion Our study showed not only the uniqueness of old-growth forest in preserving unique endemic ant species, but also its resistance to the arrival of most exotic ant species. 
Despite rapidly occurring land-use change in north-eastern Madagascar, the agricultural matrix, still harbors considerable amounts of endemic and non-endemic native ant species. We conclude that preserving remnant forest fragments and promoting vanilla agroforests with a greater canopy closure in the agricultural matrix are important management strategies to complement the role of old-growth forests for endemic ant conservation in north-eastern Madagascar.
2021-10-23T15:23:42.731Z
2021-10-21T00:00:00.000
{ "year": 2021, "sha1": "3ebf4da901469fbc490a2138d963d2d361d20c37", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10531-021-02314-4.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "643b5c51fbd2d715a72cfedf3eae10b15ea26953", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
222316039
pes2o/s2orc
v3-fos-license
Self-Triggered Consensus of Vehicle Platoon System With Time-Varying Topology This paper focuses on the consensus problem of a vehicle platoon system with time-varying topology via self-triggered control. Unlike traditional control methods, a more secure event-triggered controller considering the safe distance was designed for the vehicle platoon system. Then, a Lyapunov function was designed to prove the stability of the platoon system. Furthermore, based on the new event-triggered function, a more energy efficient self-triggered control strategy was designed by using the Taylor formula. The new self-triggered control strategy can directly calculate the next trigger according to the state information of the last trigger. It avoids continuous calculation and measurement of vehicles. Finally, the effectiveness of the proposed two self-triggered control strategies were verified by numerical simulation experiments. INTRODUCTION In recent years, multi-agent systems have been widely applied in intelligent transportation (Hee Lee et al., 2013;Vilarinho et al., 2016). As an important part of the intelligent transportation system, the self-driving vehicle platoon system has a wide range of applications in improving road utilization, enhancing safety and reliability, and alleviating traffic congestion. The formation of control is an important issue for the vehicle platoon system. It refers to the control problem that a group of intelligent vehicles can interact with each other to maintain a predetermined geometric formation during the movement of a specific target or direction. In general, this mutual interaction between intelligent vehicles can be divided into fixed and timevarying topology. Most of the current research is mainly focused on a fixed topology (Peters et al., 2016;Viegas et al., 2018). However, in the actual driving process, the vehicle platoon system often has to face various complex terrain and traffic conditions. Formations do not stay the same all the time. The vehicle platoon system requires a change of formation. Therefore, it is necessary to study the time-varying topology of vehicle platoon system. At present, there are few research studies on vehicle platoon systems with time-varying topology. More research is focused on multiagent systems (Munz et al., 2011;Saboori and Khorasani, 2014). For example, we found that we can design more reasonable and effective control strategies by analyzing the derivatives of timevarying topological variables . It is thus more practical to study the time-varying topology of the vehicle platoon system than fixed topology. Recently, the formation consistency of the vehicle platoon system has been widely considered. It has been applied to deal with consistency of formation control problems (Ren, 2007;Stojković and Katić, 2017;Wang et al., 2017;Li et al., 2018). Bela Lantos and Gyorgy Max achieved the formation consistency of unmanned ground vehicles by using a two-trajectory nonlinear dynamic model (Lantos and Max, 2016). Peters et al. (2016) designed a way by which each follower tracks its immediate predecessor to achieve vehicle formation consistency. Nevertheless, in the traditional vehicle platoon system consistency study, it is assumed that the vehicle platoon system has sufficient computing resources and energy supply (Fax and Murray, 2004;Lafferriere et al., 2005). The vehicle platoon system can thus carry on a continuous information exchange and a continuous control. However, such assumption is unreasonable. 
More often than not, the power supply and communication bandwidth of a vehicle platoon system are limited. Recently, it has been found that event-triggered control can coordinate resources among intelligent vehicles. Many scholars are thus interested in event-triggered control. As an aperiodic control mode, event-triggered control can update the controller only when needed. That is, the controller of the intelligent vehicle takes an effect when the measurement error of the vehicle platoon system exceeds a certain threshold. Since event-triggered control can reduce the energy loss to a certain extent, many scholars apply it to consistency research (Wei et al., 2017). The author in Chu et al. (2019) proposed an unified event-triggered and distributed observer-based controller with globally asymptotic convergence rate. The consistency of vehicle platoon system is realized by the controller. A faulttolerant controller which considered the communication timedelay and event-triggered mechanism was designed to achieve the consistency of the vehicle platoon system (Fei et al., 2019). However, in order to obtain the next trigger moment, we need to constantly obtain the state information of surrounding vehicles and calculate whether the trigger conditions are met in the distributed event-triggered control function. It is because of continuous communication and computation that an eventtriggered control strategy cannot reduce the detection loss in essence. But the self-triggered control strategy only needs to calculate the next trigger moment based on the status information of the last trigger moment. In the self-triggered strategy, data detection is no longer required between any two triggering moments. From this perspective, the self-triggered control strategy has a better performance. Authors designed a self-triggered control strategy for the second-order multi-agent system with fixed topology to ensure the consistency of the formation system (De Persis and Frasca, 2013). As far as we know, there are few research studies made on time-varying topology under self-triggering control in vehicle platoon system, and this sparked our research. Based on the above considerations, we studied the consistency of time-varying topology for vehicle platoon system with secondorder dynamics by using distributed event-triggered control and self-triggered control strategies. The contributions of our work are three-fold: (1) A distributed event-triggered control function considering the safe distance between vehicles was designed, and this eventtriggered control is more energy efficient than the continuous control in Fax and Murray (2004) and Lafferriere et al. (2005). (2) Based on the Lyapunov stability analysis method, the distributed event-triggered control function under timevarying leader and time-varying topology was given. In comparison with the fixed topology in Du et al. (2017), the research of time-varying topology is more practical. Moreover, the research on time-varying leader is of more practical significance. (3) According to (1) and (2), two distributed self-triggered control strategies were designed. In Zhang et al. (2016), Dolk et al. (2017), Wei et al. (2017), Wen et al. (2018), Chu et al. (2019), , an event-triggered control strategy was designed. Compared with these, the self-triggered control strategy further reduces the continuous detection of adjacent vehicles. Additionally, the distributed self-triggered controller is more general and practical than some existing control methods. 
The rest of this paper is organized as follows. Preliminaries and the problem formulation are given in section 2. The event-triggered control and self-triggered control of the vehicle platoon system with time-varying topology are studied in section 3. Two numerical simulation experiments are presented in section 4. Lastly, conclusions are drawn in section 5. Graph Theory Consider the consensus issue of multi-agent systems with time-varying topology; a communication graph is used to describe the communication topology of these agents. An undirected graph G = (V, E, A) consists of a finite node set V = {1, 2, · · · , N}, an edge set E ⊆ V × V, and an adjacency matrix A = [a_ij] ∈ R^{N×N}. If (j, i) ∈ E, a_ij = 1, and a_ij = 0 otherwise. The neighbor set of vehicle i is defined as N_i = {j ∈ V | (j, i) ∈ E, j ≠ i}. The Laplacian matrix of the graph G is defined as L = [l_ij] ∈ R^{N×N}, where l_ii = Σ_{j≠i} a_ij and l_ij = −a_ij for i ≠ j. Moreover, we assume that there are no self-cycles, that is, a_ii = 0 for any i ∈ N. The degree matrix D = diag(d_1, · · · , d_N) is a diagonal matrix whose diagonal elements are given by d_i = Σ_{j=1}^{N} a_ij, so that the Laplacian matrix associated with G can equivalently be written as L = D − A. The set of all neighbors of node i is denoted by N_i = {j ∈ V : (j, i) ∈ E}. The matrix B = diag(b_1, b_2, · · · , b_N), where b_i is the adjacency coefficient between the following vehicle i and the head vehicle: if the following vehicle i is adjacent to the head vehicle, b_i = 1, otherwise b_i = 0. In this paper, we define the time interval constant h_ij > 0 to control the safe distance between vehicles i and j. At the same time, we define h_i > 0 to control the safe distance between vehicle i and the leader vehicle. Definitions and Lemmas ASSUMPTION 2.1. It is assumed that no topology changes happen during a trigger interval. ASSUMPTION 2.2. It is assumed that the communication between vehicles is good, that is, there is no communication delay or other uncertain factors. ASSUMPTION 2.3. Suppose that at least one spanning tree exists in G and that the node corresponding to the leader is the root of the tree. The existence of the spanning tree ensures that each following vehicle can obtain the status information from the leader. LEMMA 2.1. 2x^T y ≤ a x^T x + (1/a) y^T y, where a > 0 and the vectors x and y are arbitrary. LEMMA 2.2. (Satur and Kharchenko, 2020) Suppose the matrix A is an n × n real symmetric matrix, Y is an n-dimensional real vector, and λ_max(A) ≥ λ_i(A) ≥ λ_min(A) (i = 1, 2, ..., n). One has λ_min(A) Y^T Y ≤ Y^T A Y ≤ λ_max(A) Y^T Y. (1) LEMMA 2.3. (Li W. et al., 2019) Assuming that the function f satisfies a Lipschitz condition, there is a non-negative constant l ≥ 0 that satisfies ‖f(t, x_1, v_1) − f(t, x_2, v_2)‖ ≤ l (‖x_1 − x_2‖ + ‖v_1 − v_2‖). (2) Problem Formulation An auto-driving vehicle formation system consisting of n smart cars (see Figure 1) is considered in this paper. Between the vehicles, status information can be transmitted according to certain regulations. The term h_i v_0 in Figure 1 is the distance between the i-th vehicle and the leader. In this paper, the dynamics of the leader vehicle are described as ẋ_0(t) = v_0(t), v̇_0(t) = f(t, x_0(t), v_0(t)), (3) where x_0(t), v_0(t) ∈ R^m are the displacement vector and the velocity vector of the leader vehicle, and f(t, x_0, v_0) is the control input of the leader vehicle. When f = 0, the velocity of the leader vehicle is constant; when f ≠ 0, the velocity of the leader vehicle is changing.
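To make the graph quantities above concrete, here is a minimal numerical sketch. The adjacency matrix and pinning vector are made up for illustration; the only point being shown is the construction of L = D − A and H = L + B, the matrix that reappears in the trigger conditions later.

```python
import numpy as np

# Hypothetical undirected follower graph (4 followers) and leader pinning gains.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, 1.0])   # b_i = 1 if follower i is adjacent to the leader

D = np.diag(A.sum(axis=1))           # degree matrix, d_i = sum_j a_ij
L = D - A                            # graph Laplacian, L = D - A
B = np.diag(b)
H = L + B                            # H(t) = L(t) + B(t) at one time instant

# With a spanning tree rooted at the leader, H is positive definite (lambda_min > 0).
print("eigenvalues of H:", np.round(np.linalg.eigvalsh(H), 4))
```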
The dynamic equation of the i th follower intelligent vehicle is described as where x i (t), v i (t) ∈ R m express the displacement vector and the speed vector of the follower intelligent vehicle i respectively, and is the control input of the i th follower intelligent vehicle. is an acceleration function known to all vehicles, and it satisfies LEMMA 2.3. Distributed Event-Triggered Control of Vehicle Platoon System With Time-Varying Topology In order to reduce the sensor data acquisition and the energy consumption of frequent communication between vehicles, a distributed event-triggered controller is designed in this section. In the distributed event-triggered controller, each of the following vehicle has different trigger function, and its controller update is asynchronous. When the trigger condition is satisfied, the controller of the i th follower vehicle is Since the topology is time-varying, the graph G can be treated as G(t). Accordingly, A, L, D, and B become A(t), L(t), D(t), and B(t). In this case, Assumption 1 and Assumption 2 still hold. The Leader Vehicle Speed Is Constant In this section, the leader-follower consistency problem in the case of time-varying topology, which is based on the fact that the leader vehicle speed is constant, is studied, i.e., To make the system consistent, we set the controller of i th follower vehicle: where k and r are control gains, and N i (t k i ) represents the set of neighbors of the i th follower vehicle at t k i . In order to describe the displacement and speed tracking between the following vehicle i and the leader vehicle, we defined the displacement error ε i and the velocity error η i as follows: The measurement error e x i (t) and e v i (t) are designed to represent the displacement and velocity differences between the triggering and the measuring moments of the i th follower vehicle. We have then Frontiers in Neurorobotics | www.frontiersin.org So the controller of the following intelligent vehicle becomes The states and the measurement errors of intelligent vehicle are written in the vector form: According to (9), we have Theorem 3.1. Consider a fleet of N + 1 vehicles, where the dynamic equations of the head vehicle and the follower vehicle are (3) and (4). If the following conditions are met under the controller (8), then (1) The proposed event triggering function satisfies , H(t) = L(t) + B(t) and ζ will be indicated below. When this condition is met, the controller automatically updates, that is, the trigger time is reached. (2) The minimum eigenvalue of (L(t) + B(t)) ⊗ I m is greater than zero, which is greater than an arbitrarily small normal number δ. (3) The differential coefficient of (L(t) + B(t)) ⊗ I m exists, and the maximum eigenvalue of its derivative is greater than zero; for any small positive number, σ is satisfied (4) The relation between η(t) and e where ζ = max kψ 2 , 1 a − 1 kλ min (H(t)) , 0 < a < 1, ψ = λ max d (L(t) + B(t)) ⊗ I m /dt , then all the vehicle reach the same state, and the existence of the safe distance h ij v 0 avoid a collision. Hence, the problem of intelligent vehicle formation is solved, i.e., for i = 1, 2, . . . , N, we have PROOF. Based on system (10), we can construct the common Lyapunov function candidate where Firstly, we prove the positivity of V(t) Frontiers in Neurorobotics | www.frontiersin.org It can be seen that the Lyapunov function (15) selected is positively definite. 
The time derivative of (15) can be expressed as Taking out the first term, we have d dt By (18), we get Take (19) into consideration, we have where ζ = max kψ 2 , 1 a − 1 kλ min (H(t)) , ψ = λ max (d((L(t) + B(t)) ⊗ I m )/dt). According to the trigger condition (11), the derivative of the Lyapunov function is less or equal to 0, so the stability is proved. The Speed of the Leading Vehicle Is Time Varying In the actual situation, the speed of the leading vehicle cannot be fixed, most of them are time varying. Therefore, in this section, we study the consistency of leader followers in the case of time-varying topology based on the fact that the speed of the leader vehicle is time varying, i.e., f (t, . To make the system consistent, we set the i th follower vehicle's controller as Similar to (6)-(9), we can format the system (4) as follows . Theorem 3.2. Consider a fleet of N + 1 vehicles, where the dynamic equations of the head vehicle and the follower vehicle are (3) and (4) respectively. If the following conditions are met under the controller (21), then (1) The designed event triggering function satisfies the following conditions. where H(t) = L(t) + B(t) and ζ will be indicated below. When this condition is met, the controller automatically updates, that is, the trigger time is reached. (3) The differential coefficient of (L(t)+B(t))⊗I m exists, and the maximum eigenvalue of its derivative is greater than zero, so there exists a small positive number σ satisfying (4) The relation between η(t) and e x (t), e v (t) is PROOF. Based on system (22), we can construct the Lyapunov function candidate where H(t) = L(t) + B(t). It can be seen that the (28) selected is positively definite. The time derivative of (28) can be expressed as Taking out of the first term, we have d dt Using Lemma 2 to enlarge the second item in (29), we yield < η(t),f (t, ε(t), η(t), e x (t), e v (t)) > Considering (30) and (31), we get By using Lemma 3, the above items are amplified, and then where a 1 > 0. Considering (34), we have According to the trigger condition (23), the derivative of the Lyapunov function (29) is less or equal to 0, and it is constant, so the stability is proved. Distributed Self-Triggered Control of Vehicle Platoon System With Time-Varying Topology As can be seen from the distributed event-triggered control (11) and (23), the control method reduces the dependence on the global state information and the real-time state of measurement error in the trigger interval. However, it will increase the energy consumption of the sensor and microprocessor in the process of continuous measurement error detection. In order to improve this problem, we apply the self-triggered control strategy to solve the problem of intelligent vehicle formation. Under this strategy, the next trigger moment t i k+1 of the i th follower vehicle can obtained according to the state of the i th vehicle at the previous trigger time. The Leader Vehicle Speed Is Constant In this part, we will transform the event-triggered control (11) into a self-triggered control strategy for the case that the vehicle speed of the leader is constant. We know that from the previous distributed event triggering where L j,· represents L j,k , and k = 0, 1, 2, · · · , N. According to the above expressions and the distributed eventtriggering control function (11), we get In order to simplify (37), we define and We can see that when σ i =0, the inequality is not true. So σ i > 0, that is to say t − t k i > 0. 
To sum up, the self-triggering control strategy of the follower vehicle at t i+1 k moment is determined by the following conditions σ i > 0 which satisfies (41), we get the next trigger time t i+1 k = σ i + t i k . In particular, if the topology of the vehicle queue changes at time t, so that t i+1 k = t. REMARK 2. The existence of σ i indicates that a Zeno behavior does not exist. At the same time, it indicates that the self-triggered control strategy can realize the leader-follower consistency of vehicle formation under the condition of time-varying topology and the leader vehicle speed being the same. The proof of stability is same to event-triggered control, so we are omitted here. The Speed of the Leading Vehicle Is Time Varying In this part, we will transform the event-triggered control (23) into a self-triggered control strategy for the case that the vehicle speed of the leader is time varying. We know that from the previous distributed event triggering Using Taylor's formula, expand ε i , η i , e x i , e v i at t i k , we have and According to the above two formulas and (23), we get In order to simplify (44), we define We obtain that σ i = 0, and the inequality is not true; σ i > 0 that is to say t − t k i > 0. To sum up, the self-triggering control strategy of the follower vehicle at t i+1 k moment is determined by the following conditions: If there is a σ i > 0 which satisfies (48), we get the next trigger time t i+1 k = σ i + t i k . In particular, if the topology of the vehicle queue changes at time t, so that t i+1 k = t. REMARK 3. The existence of σ i indicates that a Zeno behavior does not exist. At the same time, it indicates that the self-triggered control strategy can achieve the leader-follower consistency of vehicle formation under the circumstance that both the topology structure and the leader vehicle speed are time varying. The proof of stability is the same to the event-triggered control. It is thus avoided here. SIMULATION In this section, we will give two numerical experiments to verify the correctness and validity of the above theorems. Both experiments are based on a leader-follower vehicle formation system, which consist of a leader vehicle and four follower vehicles. Firstly, we verify that the speed of the leader vehicle is constant. The dynamic equation of leader and follower are shown below: where u i (t) is defined in (5), k = 3.4, r = 1.2. and the parameters satisfy the conditions in Theorem 3.1. In order to more intuitively verify the effectiveness of the selftriggering control strategy proposed in this paper, we assume that the vehicle formation system carries out three topology switches. The topology structure between vehicles at the initial moment is shown in Figure 2. Each adjacency matrix A and coefficient The initial values of the leader vehicle and the follower vehicle are defined as follows: Figure 3 shows the velocity error between the follower car and the leader car. Figure 4 express as the real-time distance between each follower car and the leader car. Figure 5 express as the changes in the controller of each follower car. Because the topology is changed, the controller changed dramatically twice. The self-trigger interval of each follower are displayed in Figure 6. Figure 7 shows the relative position of vehicles when the formation is finally stabilized. 
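To give a concrete feel for the numerical setup, the following is a heavily simplified Python sketch of the first experiment. It uses the gains k = 3.4 and r = 1.2 quoted in the simulation section, a constant leader speed, a fixed topology, Euler integration, and a generic consensus feedback u = −kHε − rHη on the spacing and velocity errors. That feedback law, the topology, the spacing constants, and the initial states are all assumptions made for illustration; the paper's controller (8) and the self-triggered scheduling of updates are not reproduced here.

```python
import numpy as np

# Hypothetical topology: follower chain 1-2-3-4 with follower 1 pinned to the leader.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, 0.0])
H = np.diag(A.sum(axis=1)) - A + np.diag(b)   # L + B

k, r = 3.4, 1.2                  # control gains quoted in the simulation section
v0 = 2.0                         # constant leader speed (f = 0 case)
h = np.array([1.0, 2.0, 3.0, 4.0])            # hypothetical spacing constants h_i

x0 = 0.0                                      # leader position
x = np.array([-3.0, -7.0, -10.0, -15.0])      # follower positions (made up)
v = np.array([0.0, 1.0, 0.5, 1.5])            # follower velocities (made up)

dt, steps = 0.01, 3000
for _ in range(steps):
    eps = x - (x0 - h * v0)      # spacing error: follower i should sit h_i*v0 behind leader
    eta = v - v0                 # velocity error
    u = -k * (H @ eps) - r * (H @ eta)        # consensus feedback (continuous updates)
    x, v = x + dt * v, v + dt * u
    x0 += dt * v0

print("final spacing errors :", np.round(x - (x0 - h * v0), 4))
print("final velocity errors:", np.round(v - v0, 4))
```

With these values the spacing and velocity errors settle near zero, mirroring the convergence reported in Figures 3 and 4; an event- or self-triggered variant would hold u constant between trigger instants instead of recomputing it every step.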
As we can see from Figures 3-5, when the topology changes, the controller of each follower vehicle adjusts the vehicle speed to keep the vehicles in formation, and the speed error tends to zero over time. This indicates that the controller can adapt to various topology-switching situations by adjusting the control intensity. At the same time, due to the change of topology, the relative positions between vehicles also change, and the follower vehicles constantly adjust their positions to the new relative positions under the action of the controller. It is worth noting that the position error of the follower vehicles does not gradually approach zero as time goes on; it reaches a fixed value greater than zero in Figure 4. This fixed value is the safe distance (h_ij v_0) between the vehicles. As shown in Figure 7, when a stable state is reached, the vehicles keep a safe distance from each other. Moreover, the self-triggering instants are displayed in Figure 6. The simulation results show that the controller and the self-triggering control strategy designed by us have a good performance: the vehicle formation system remains stable under constantly changing topology, and the vehicles keep a safe distance. Secondly, we verify the case in which the speed of the leader vehicle is time varying. In order to verify the effectiveness of the self-triggering control strategy more intuitively, we randomly selected several … When the speed of the leader changes, the dynamic equation of the follower's vehicle is defined with the acceleration function f(t, x, v) = −sin(x) − 0.25v + 1.5 cos(2.5t). The definitions of Figures 8-12 are similar to those of Figures 3-7, but Figures 8-12 show the results for a leader with time-varying velocity. From Figures 8-10, we can see that when the speed of the leader vehicle or the topology changes, the follower vehicles quickly adapt so that their speeds are consistent with the leader vehicle, and the real-time distance between each follower vehicle and the leader vehicle changes rapidly. Moreover, it can be seen from Figure 10 that after the vehicle formation system reaches stability, the controllers of the follower vehicles no longer exert control. The self-triggering instants are displayed in Figure 11. Notably, after the leader vehicle speed changes, the safety distance of the follower vehicles also changes in Figure 9. However, the vehicles ultimately keep a safe distance, as shown in Figure 12. The simulation results show that the controller and the self-triggering control strategy designed in this paper have a good performance: they make the vehicle formation system reach a stable state under changing topology and leader speed. The numbers of triggers with the distributed event-triggered control scheme in Yang et al. (2018) and the self-triggered control scheme (41) within 0-15 s are shown in Table 1.

FIGURE 9 | The real-time distance between each follower car and the leader car.

What we can obtain from Table 1 is that the self-triggered control scheme (41) needs fewer triggering events than the distributed event-triggered control scheme in Yang et al. (2018). At the same time, the mean time interval, which represents the average time between triggers, in Table 2 indicates that the self-triggered control strategy designed in this paper has a lower trigger probability and fewer execution moments.
The results in Tables 1 and 2 show that the self-triggered control strategy proposed here can effectively reduce the energy spent on data detection and calculation in the control process. CONCLUSIONS In this paper, we have studied leader-follower consistency in vehicle formation systems with time-varying topology under an event-triggering mechanism. The main difference between our work and previously published papers is that we have designed a self-triggering control strategy that avoids continuous calculation and measurement and reduces the consumption of communication resources. At the same time, we have proved the consistency of the system under the control of the trigger function. In addition, we have also studied the consistency of the vehicle formation system with time-varying topology when the leader speed is time varying. Finally, the effectiveness of the proposed controllers has been verified by numerical experiments. It should be noted that, although we proved the stability of the formation system by means of a Lyapunov function, we did not establish its string stability; this will be studied in future work. DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the article/supplementary material. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
2020-10-14T13:07:35.644Z
2020-10-14T00:00:00.000
{ "year": 2020, "sha1": "ea29b7a28297f437cdbeabf3a84439f05209aaa3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnbot.2020.00053/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ea29b7a28297f437cdbeabf3a84439f05209aaa3", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
211109122
pes2o/s2orc
v3-fos-license
Weak Insertion of a Perfectly Continuous Function A sufficient condition in terms of lower cut sets is given for the insertion of a perfectly continuous function between two comparable real-valued functions on topological spaces in which Λ−sets are open. For a subset A of a topological space (X,τ), A^Λ = ∩{O : O ⊇ A, O ∈ O(X,τ)} and A^V = ∪{F : F ⊆ A, F ∈ C(X,τ)}. In [4, 9, 13], A^Λ is called the kernel of A. Definition 2.2. Let A be a subset of a topological space (X,τ). Respectively, we define the closure, interior, clo-closure and clo-interior of a set A, denoted by Cl(A), Int(A), clo(Cl(A)) and clo(Int(A)), as follows: Cl(A) = ∩{F : F ⊇ A, F ∈ C(X,τ)}, Int(A) = ∪{O : O ⊆ A, O ∈ O(X,τ)}, clo(Cl(A)) = ∩{F : F ⊇ A, F ∈ Clo(X,τ)} and clo(Int(A)) = ∪{O : O ⊆ A, O ∈ Clo(X,τ)}. If (X,τ) is a topological space whose Λ−sets are open, then, respectively, A^V and clo(Cl(A)) are closed and clopen, and A^Λ and clo(Int(A)) are open and clopen. The following first two definitions are modifications of conditions considered in [6, 7]. Definition 2.3. If ρ is a binary relation in a set S, then ρ̄ is defined as follows: x ρ̄ y if and only if y ρ v implies x ρ v and u ρ x implies u ρ y for any u and v in S. Definition 2.4. A binary relation ρ in the power set P(X) of a topological space X is called a strong binary relation in P(X) in case ρ satisfies each of the following conditions: (i) If Ai ρ Bj for any i ∈ {1,...,m} and for any j ∈ {1,...,n}, then there exists a set C in P(X) such that Ai ρ C and C ρ Bj for any i ∈ {1,...,m} and any j ∈ {1,...,n}. (ii) If A ⊆ B, then A ρ̄ B. (iii) If A ρ B, then clo(Cl(A)) ⊆ B and A ⊆ clo(Int(B)). The concept of a lower indefinite cut set for a real-valued function was defined by Brooks [1] as follows: Definition 2.5. If f is a real-valued function defined on a space X and if {x ∈ X : f(x) < ℓ} ⊆ A(f,ℓ) ⊆ {x ∈ X : f(x) ≤ ℓ} for a real number ℓ, then A(f,ℓ) is called a lower indefinite cut set in the domain of f at the level ℓ. We now give the following main result: Theorem 2.1. Let g and f be real-valued functions on a topological space X, in which Λ−sets are open, with g ≤ f. If there exists a strong binary relation ρ on the power set of X and if there exist lower indefinite cut sets A(f,t) and A(g,t) in the domain of f and g at the level t for each rational number t such that if t1 < t2 then A(f,t1) ρ A(g,t2), then there exists a perfectly continuous function h defined on X such that g ≤ h ≤ f. Proof. Let g and f be real-valued functions defined on X such that g ≤ f. By hypothesis there exists a strong binary relation ρ on the power set of X and there exist lower indefinite cut sets A(f,t) and A(g,t) in the domain of f and g at the level t for each rational number t such that if t1 < t2 then A(f,t1) ρ A(g,t2). Define functions F and G mapping the rational numbers Q into the power set of X by F(t) = A(f,t) and G(t) = A(g,t). If t1 and t2 are any elements of Q with t1 < t2, then F(t1) ρ̄ F(t2), G(t1) ρ̄ G(t2), and F(t1) ρ G(t2). By Lemmas 1 and 2 of [7] it follows that there exists a function H mapping Q into the power set of X such that if t1 and t2 are any rational numbers with t1 < t2, then F(t1) ρ H(t2), H(t1) ρ H(t2) and H(t1) ρ G(t2). For any x in X, let h(x) = inf{t ∈ Q : x ∈ H(t)}. We first verify that g ≤ h ≤ f: If x is in H(t), then x is in G(t′) for any t′ > t; since x ∈ G(t′) = A(g,t′) implies that g(x) ≤ t′ for every t′ > t, it follows that g(x) ≤ t. Hence g ≤ h.
If x is not in H(t), then x is not in F(t′) for any t′ < t; since x not being in F(t′) = A(f,t′) implies that f(x) > t′, it follows that f(x) ≥ t. Hence h ≤ f. Also, for any rational numbers t1 and t2 with t1 < t2, we have h⁻¹(t1,t2) = clo(Int(H(t2))) \ clo(Cl(H(t1))). Hence h⁻¹(t1,t2) is a clopen subset of X, i.e., h is a perfectly continuous function on X. The above proof used the technique of the proof of Theorem 1 of [6]. 3. APPLICATIONS The abbreviations c, pc and cc are used for continuous, perfectly continuous and contra-continuous, respectively. Before stating the consequences of Theorem 2.1, we suppose that X is a topological space in which Λ−sets are open. Corollary 3.1. If for each pair of disjoint closed (resp. open) sets F1, F2 of X, there exist clopen sets G1 and G2 of X such that F1 ⊆ G1, F2 ⊆ G2 and G1 ∩ G2 = ∅, then X has the weak pc−insertion property for (c,c) (resp. (cc,cc)). Proof. Let g and f be real-valued functions defined on X, such that f and g are c (resp. cc), and g ≤ f. If a binary relation ρ is defined by A ρ B in case Cl(A) ⊆ Int(B) (resp. A ⊆ B), then by hypothesis ρ is a strong binary relation in the power set of X. If t1 and t2 are any elements of Q with t1 < t2, then A(f,t1) ⊆ {x ∈ X : f(x) ≤ t1} ⊆ {x ∈ X : g(x) < t2} ⊆ A(g,t2); since {x ∈ X : f(x) ≤ t1} is a closed (resp. open) set and since {x ∈ X : g(x) < t2} is an open (resp. closed) set, it follows that Cl(A(f,t1)) ⊆ Int(A(g,t2)) (resp. A(f,t1) ⊆ A(g,t2)). Hence t1 < t2 implies that A(f,t1) ρ A(g,t2). The proof follows from Theorem 2.1. Corollary 3.2. If for each pair of disjoint closed (resp. open) sets F1, F2, there exist clopen sets G1 and G2 such that F1 ⊆ G1, F2 ⊆ G2 and G1 ∩ G2 = ∅, then every continuous (resp. contra-continuous) function is perfectly continuous. Proof. Let f be a real-valued continuous (resp. contra-continuous) function defined on X. Setting g = f, by Corollary 3.1 there exists a perfectly continuous function h such that g = h = f. Corollary 3.3. If for each pair of disjoint closed (resp. open) sets F1, F2 of X, there exist clopen sets G1 and G2 of X such that F1 ⊆ G1, F2 ⊆ G2 and G1 ∩ G2 = ∅, then X has the pc−insertion property for (c,c) (resp. (cc,cc)). Proof. Let g and f be real-valued functions defined on X, such that f and g are c (resp. cc), and g < f. Set h = (f + g)/2; thus g < h < f, and since g and f are perfectly continuous functions by Corollary 3.2, h is a perfectly continuous function. Corollary 3.4. If for each pair of disjoint subsets F1, F2 of X, such that F1 is closed and F2 is open, there exist clopen subsets G1 and G2 of X such that F1 ⊆ G1, F2 ⊆ G2 and G1 ∩ G2 = ∅, then X has the weak pc−insertion property for (c,cc) and (cc,c). Proof. Let g and f be real-valued functions defined on X, such that g is c (resp. cc) and f is cc (resp. c), with g ≤ f. If a binary relation ρ is defined by A ρ B in case A ⊆ Int(B) (resp. Cl(A) ⊆ B), then by hypothesis ρ is a strong binary relation in the power set of X. If t1 and t2 are any elements of Q with t1 < t2, then A(f,t1) ⊆ {x ∈ X : f(x) ≤ t1} ⊆ {x ∈ X : g(x) < t2} ⊆ A(g,t2); since {x ∈ X : f(x) ≤ t1} is an open (resp. closed) set and since {x ∈ X : g(x) < t2} is an open (resp. closed) set, it follows that A(f,t1) ⊆ Int(A(g,t2)) (resp. Cl(A(f,t1)) ⊆ A(g,t2)). Hence t1 < t2 implies that A(f,t1) ρ A(g,t2). The proof follows from Theorem 2.1.
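The cut-set machinery used in the proof of Theorem 2.1 can be illustrated numerically on a trivial example. The sketch below is only a toy check of the sandwich property g ≤ h ≤ f obtained from h(x) = inf{t ∈ Q : x ∈ H(t)}: it works on a finite set with the discrete topology, where Λ−sets are trivially open and every real-valued function is perfectly continuous, and it simply takes H(t) = A(f,t) instead of constructing H from the strong binary relation as Lemmas 1 and 2 of [7] do. The set, the functions, and the rational grid are assumptions chosen for illustration.

```python
import numpy as np

# Toy check of the cut-set / infimum mechanism of Theorem 2.1 (not the full
# construction of H). X carries the discrete topology, so Lambda-sets are
# trivially open and every real-valued function on X is perfectly continuous.
X = list(range(10))
rng = np.random.default_rng(0)
f = {x: float(rng.integers(0, 8)) for x in X}
g = {x: f[x] - float(rng.integers(0, 4)) for x in X}   # guarantees g <= f

levels = [t / 4 for t in range(-20, 41)]               # a finite grid of rationals

def A(func, t):
    # the canonical lower indefinite cut set {x : func(x) <= t}
    return {x for x in X if func[x] <= t}

# take H(t) = A(f, t) and recover h by the infimum over the grid
h = {x: min(t for t in levels if x in A(f, t)) for x in X}

assert all(g[x] <= h[x] <= f[x] for x in X)
print("g <= h <= f holds on the sample:", True)
```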
ACKNOWLEDGEMENT This research was partially supported by Centre of Excellence for Mathematics (University of Isfahan). INTRODUCTION A generalized class of closed sets was considered by Maki in 1986 [10]. He investigated the sets that can be represented as unions of closed sets and called them V−sets. Complements of V−sets, i.e., sets that are intersections of open sets, are called Λ−sets [10]. Recall that a real-valued function f defined on a topological space X is called A−continuous [15] if the preimage of every open subset of R belongs to A, where A is a collection of subsets of X. Most of the definitions of functions used throughout this paper are consequences of the definition of A−continuity. However, for unknown concepts the reader may refer to [2,5]. Hence, a real-valued function f defined on a topological space X is called perfectly continuous [14] (resp. contra-continuous [3]) if the preimage of every open subset of R is a clopen (i.e., open and closed) (resp. closed) subset of X. A function is perfectly continuous if and only if it is continuous and contra-continuous. Results of Katětov [6,7] concerning binary relations and the concept of an indefinite lower cut set for a real-valued function, which is due to Brooks [1], are used in order to give necessary and sufficient conditions for the insertion of a perfectly continuous function between two comparable real-valued functions on topological spaces in which Λ−sets are open [10]. If g and f are real-valued functions defined on a space X, we write g ≤ f in case g(x) ≤ f(x) for all x in X. The following definitions are modifications of conditions considered in [8]. A property P defined relative to a real-valued function on a topological space is a pc−property provided that any constant function has property P and provided that the sum of a function with property P and any perfectly continuous function also has property P. If P1 and P2 are pc−properties, the following terminology is used: a space X has the weak pc−insertion property for (P1,P2) if and only if for any functions g and f on X such that g ≤ f, g has property P1 and f has property P2, there exists a perfectly continuous function h such that g ≤ h ≤ f. In this paper, a sufficient condition for the weak pc−insertion property is given. Also, several insertion theorems are obtained as corollaries of this result. In addition, the insertion and strong insertion of a contra-continuous function between two comparable contra-precontinuous (contra-semi-continuous) functions have also recently been considered by the author in [11,12].
2020-02-13T09:04:48.340Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "db345594410676a191bb36e8ceb1f5860b59ca74", "oa_license": null, "oa_url": "https://doi.org/10.20431/2347-3142.071200", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "db345594410676a191bb36e8ceb1f5860b59ca74", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
253236233
pes2o/s2orc
v3-fos-license
Significant Tumor Reduction With Traditional Chinese Medicine in a Patient With Advanced Prostate Cancer: A Case Report Prostate cancer (PC) is the most common malignancy of the male genitourinary system. For patients with advanced progressive PC, the treatment strategies include second-line endocrine therapy, chemotherapy, and immunotherapy. Such therapeutic techniques are either too expensive or too toxic for some patients, and traditional Chinese medicine (TCM) has become an alternative for its low cost and low toxicity. The application of Shi-pi-san and Gui-zhi-Fu-ling-wan in PC has never been reported. We report their application on a 71-year-old male patient, who was diagnosed with PC and was undergoing endocrine therapy. He originally chose chemotherapy, and experienced acute renal failure, which required hemodialysis during hospitalization. He felt weak and opted for Chinese herbal medicine treatment. After treatment with Shi-pi-san and Gui-zhi-Fu-ling-wan, the patient’s tumor and other symptoms were significantly reduced, and he reported feeling “refreshed.” This case indicates that TCM treatment has unique advantages and is more tolerable than endocrine therapy and chemotherapy. Considering that the patient was undergoing hemodialysis treatment and using low-molecular-weight heparin (LMWH) to prevent blood coagulation while taking TCM, whether LMWH has a synergistic anticancer effect remains to be explored. Patients with advanced prostate cancer (PC) often suffer from lower urinary tract obstruction. With the development of the disease, bone metastasis, bone pain, fracture, and other symptoms may occur, impairing the quality of life of patients. Visceral metastasis may also occur, which ultimately reduces patient survival. Presently, for those patients with advanced progressive PC who receive endocrine therapy, the treatment strategies include second-line endocrine therapy, chemotherapy, and immunotherapy. Those therapeutic techniques have their limitations. The second-line endocrine therapy and immunotherapy are expensive and cannot be commonly afforded by all patients, while chemotherapy inevitably has some toxic effects, which can be beyond tolerance thresholds for some patients, especially the elderly. For such patients, traditional Chinese medicine (TCM) could be a viable alternative treatment for cancer. We report a case of PC in a 71-year-old male patient who was given TCM. Case Report On June 14, 2018, the patient was diagnosed with PC at the age of 69 years (T4N0M0, Glason's score was 4 + 5 = 9) in another hospital, and he was treated with endocrine therapy for over a year. The pathological diagnosis from the biopsy indicated prostate adenocarcinoma (four on the left side of prostate biopsy), with a Glason's score of 4 + 5 = 9, which was divided into five groups and involved nerve fibers (Figure 1). The immunohistochemical results were as follows: P504S (+), CK (H) (−), p63 (−), AR (+), PSA (weak+), S-100 (nerve fiber +), and (three on the right side of prostate biopsy) prostate adenocarcinoma, a Glason's score of 4 + 5 = 9, divided into five groups. The patient presented to our hospital on September 16, 2019 with a diagnosis of PC stage IVB (adenocarcinoma, T4N1M1b, bone metastasis), which suggested that continuing the same endocrine therapy would not be effective. 
The whole abdominal computed tomography (CT) revealed the peripheral invasion of PC with involvement of the left pelvic wall, seminal vesicle, bladder, bilateral lower ureter (maximum diameter: 76 mm × 88 mm), multiple swollen lymph nodes in the pelvic cavity and retroperitoneal (maximum diameter: 25 mm) (Figures 3A, 3C, 4A, and 4C). The magnetic resonance imaging (MRI) of the whole spine revealed diffuse metastatic disease in cervicothoracic lumbosacral vertebrae and bilateral ilium. During treatment the total prostate-specific antigen (PSA) of the patient was > 100 ng/ml, and the free PSA was > 30 ng/ml. The patient was treated with zoledronic acid 4 mg IV QD once due to the diffuse bone metastasis of the whole body. After the use of zoledronic acid, the patient's urine output decreased progressively-with a 24-h urine output of 50 ml for more than 48 consecutive hours. And the serum creatinine increased progressively, up to 498.8 μmol/L, cystatin C 1.65 mg/L, urea nitrogen 14.38 mmol/L, and uric acid 556.3 μmol/L. Color Doppler ultrasound of the urinary system showed: hypoechoic mass from the bladder to the prostate area, mild hydrops in both kidneys, cysts in the right kidney, widening of the inner diameter of the bilateral ureters, and residual urine of 108 ml in the bladder. Renal failure was sudden and may be related to the use of zoledronic acid. Color Doppler ultrasound showed that the upper track was obstructed. The urologist recommended a nephrostomy, although the patient and family members rejected the suggestion. Then, a catheter was placed, and the flow of catheter remained at 20 mL. So, hemodialysis was started after urgent consultation with a nephrologist. After three hemodialysis sessions in a week, the patient's creatinine level decreased compared to that in the prior hemodialysis. He was discharged on September 30, 2019, and dialysis treatment was to be completed in the outpatient department until his creatinine level returned to normal. In addition, the LMWH sodium (Qi-zheng) was used for prophylactic anticoagulation at a dose of 3,500 IU or 5000 IU for each dialysis. After 2 weeks of dialysis, the patient's creatinine was in the normal range for two consecutive times, 89.7 and 97.2 μmol/L, respectively, thus, we conclude the catheter was successfully removed. Upon catheter removal, 500 mL of urine and blood was discharged, the urine volume gradually recovered, and the patient was provided normal drinking water. During this period, the patient did not receive any anti-tumor therapy, including chemotherapy, immune, or endocrine therapy. Patient received outpatient dialysis treatment only. On November 16, 2019, the patient requested oral TCM for anti-cancer treatment in the outpatient service. The patient's condition is as follows: the patient was feeling cold in the daytime average temperature of 26°C even with a thick jacket, experiencing spontaneous sweating, without thirst, excessive saliva, fatigue, poor appetite, loose or watery stools, edema of both lower extremities, occasional prickles in the lower abdomen pain, and his pulse indicated a slippery-like sinking lever, and his tongue was slightly red with a thick and white coating. According to the patient's symptoms, tongue coating, and pulse, the patient's symptoms belongs to mixed-type syndromes, including Yang-qi deficiency syndrome and Phlegm-Stasis syndrome (Deng et al., 2020). Based on the physical findings and TCM theory, the adjusted Shipi-san and Gui-zhi-Fu-ling-wan were prescribed for the patient. 
Table 1 displays the component of Shi-pi-san and Gui-zhi-Fu-ling-wan for specific medications. These recipes were prepared by our hospital pharmacy. All herbal medicines were produced in China. After taking the above Chinese herbal medicines, the patient visited our hospital on December 7, 2019 for reexamination and removal of the internal jugular vein catheter for hemodialysis. The Doppler ultrasound of his internal jugular vein displayed the right internal jugular vein embedded with a catheter and peripheral thrombosis (about 1.73 cm × 0.47 cm) ( Figure 2). He was admitted to the hospital for anticoagulation treatment. Meanwhile, the patient reported feeling "refreshed" in contrast to feeling "uncomfortable" prior to TCM intake. Re-examination of the whole abdominal CT indicated peripheral invasion with involvement of the left pelvic wall, seminal vesicle gland, bladder, and bilateral lower ureters, and decreased mass (maximum diameter: 60 mm × 43 mm) ( Figure 3). The pelvic cavity and retroperitoneal multiple enlarged lymph nodes were decreased (maximum diameter: approximately 11 mm) (Figure 4). According to the Response Evaluation Criteria in Solid Tumors (RECIST) Version 1.1, the efficacy of the herbal medicines in the patients was evaluated as PR (37.2% reduction). Due to the limitation of laboratory conditions, PSA cannot be diluted. The indicators of PSA both before and after treatment are: total PSA > 100 ng/ml, free PSA > 30 ng/ml, so that, the difference of PSA before and after treatment cannot be compared as the evaluation of curative effect. Discussion According to statistics from the American Cancer Society, PC is the most common cancer in men, with a total of 3,650,030 in 2019 (Miller et al., 2019). Patients with advanced PC often experience poor urination. Our patient had similar symptoms of difficulty in urination for 1 year, which was serious but not life-threatening in the long term. This was consistent with an autopsy report, which reported that > 33% of men aged 70 to 79 years who died of other causes had PC prior to their death (Jahn et al., 2015). Therefore, we focused on alleviating the patient's symptoms and controlling the disease progression. Since the patient could neither tolerate the side effects of chemotherapy nor afford the heavy economic burden of second-line endocrine therapy, TCM treatment was a worthy choice for him. The application of Shi-pi-san and Guizhi-Fu-ling-wan in PC has never been reported. In this case, the efficacy of Shi-pi-san and Gui-zhi-Fu-ling-wan in clinical practice has been confirmed by the significant tumor reduction, thus, this is unique case report. The efficacy of Shi-pi-san and Gui-zhi-Fu-ling-wan in our patient can be explained in two aspects: the TCM theory and anti-cancer mechanism. According to the theory of TCM, the patient's symptoms-feels cold, experience spontaneous sweating, mental fatigue, lack thirst, poor appetite-are manifestations of Yang-qi deficiency. And excessive saliva, loose or watery stools, and edema of both lower extremities are the manifestations of internal dampness caused by Yang-qi deficiency. Reddish tongue, thick white coating, and deep pulse are also signs of Yang-qi deficiency and internal dampness. Therefore, Shi-pi-san is used to warm Yang, activate Qi, and drain Yin water, which is the correspondence between the prescription and the syndrome. In the TCM treatment, we used Dried Ginger and Monkshood to warm the Yang. 
We used Poria cocos, Fried Atractylodes macrocephala, Papaya, Costus, and Pericarpium Arecae to activate Qi and drain the Yin water. At the same time, the patient has occasional prickles pain in the lower abdomen, slippery pulse, and PC is a pelvic mass and pelvic cavity blood stasis. As part of the treatment, Gui-zhi-Fu-ling-wan was chosen. Gui-zhi-Fu-ling-wan was created by Zhongjing Zhang and for "women's pelvic cavity blood stasis." Modern basic research has reported that Gui-zhi-Fu-lingwan can also induce apoptosis of myoma cells. (Lee et al., 2019). Therefore, Gui-zhi-Fu-ling-wan were chosen to eliminate blood stasis in the patient's pelvic cavity, and wine-treated Radix et Rhizoma rhei was added to enhance the power of promoting blood circulation and removing blood stasis. To summarize, we treat this patient from two aspects of adjusting the overall balance of Yin and Yang, strengthening the body's immunity and eliminating local pelvic tumor, that is, removing pathogenic factors, so that, we have achieved gratifying curative effects. With regard to molecular mechanisms, modern pharmacological studies have reported that pachytearic acid, which is extracted from Poria cocos induces mitochondrial dysfunction and initiates PC cell apoptosis by inducing the activity of caspase-9/-3 and increasing the ratio of Bax/Bcl-2 protein (Gapter et al., 2005). Research by Son et al. (2017) has reported that Atractylodes macrocephala extract can promote the proliferation of lymphocytes and simultaneously release a large amount of tumor necrosis factor (TNF)-alpha and interleukin (IL)-6. Among them, TNF-alpha can activate death receptors on the surface of (Knutson & Disis, 2005). In the Papaya extract, dietary isothiocyanate promotes cell apoptosis by activating the caspase-8 and -9 pathways and phosphorylation of anti-apoptotic protein Bcl-xL, among which the c-Jun N-terminal kinase (JNK) pathway is critical for cell apoptosis mediated by phosphorylation of the anti-apoptotic protein Bcl-xL (Basu & Haldar, 2008). Licorice polysaccharides, which are extracted from Glycyrrhizae Preparata can activate CD4+ and CD8+ cells, and CD8+ cells can directly recognize and eliminate cancer cells. Meanwhile, licorice polysaccharides significantly inhibit tumor growth in CT26 tumor-bearing mice. This activation of CD4+ and CD8+ increases the production of cytokines, especially IL-2, IL-6, and IL-7, and reduces TNF-alpha levels (Ayeka et al., 2017). Among them, the cytokines IL-2 and IL-6 are involved in destroying cancer cells by antigenpresenting macrophages in the process of tumor immune monitoring (Haabeth et al., 2011). Previous studies by Ayeka et al. (2016) have also indicated that licorice polysaccharides inhibited the growth of cancer cells and upregulated IL-7 in vitro. The Costus extract can inhibit the proliferation, cloning, and metastatic potential of PC-3 cells, and can cause G(0)/G(1) and G(2)/M cell cycle arrest in PC-3 cells. It induces PC-3 cells apoptosis by producing reactive oxygen species, reducing glutathione, permeabilizing mitochondrial and lysosomal membranes, inducing caspase-9/-3 activity, promoting PARP-1 cleavage, damaging DNA, and increasing the ratio of the Bax/ Bcl-2 protein (Elkady, 2019;Kim et al., 2008). 
Monkshood extract can inhibit tumor cell proliferation, while reducing the lipopolysaccharide (LPS)-induced apoptosis of mouse peritoneal macrophages by reducing nitric oxide (NO) and reactive oxygen species (ROS) production, indicating that it may exert anti-cancer effects through an anti-inflammatory mechanism (Huang et al., 2013). Dry Ginger extract, 6-Shogaol inhibits the growth of PC cells by inhibiting STAT3 and nuclear factor kappa B (NF-κB) signaling pathways (Saha et al., 2014). In addition, the use of Radix Paeoniae Rubra and Cortex Moutan significantly reduced the size of mouse bladder tumors. Radix Paeoniae Rubra reduces Gil stage cells and significantly increased sub-0 stage cells, thereby inhibiting tumor cell proliferation (M.-Y. Lin et al., 2016). Cortex Moutan can block tumor cells in G1 and S phases and cause the expression of phosphatidylserine outside the cell membrane. It induces the activation of caspase-8 and caspase-3 and the degradation of poly (adenosine diphosphate-ribose) polymerase. Cortex Moutan can also inhibit tumor cell invasion (M.-Y. Lin et al., 2013). The study of Cassiem and de Kock (2019) has suggested that Peach kernel extract can resist tumor cell proliferation, and the low ATP level caused by amygdalin can cause cell pyknosis or necrosis. Emodin, the Radix et Rhizoma Rhei extract can inhibit the viability of PC cells and promote apoptosis (Zheng et al., 2018). However, how these drugs interact still requires further research for confirmation. Recently, increasing real-world data have suggested that TCM has evident benefits in cancer treatment in terms of improving quality of life (Tang et al., 2019) and prolonging survival (Shih et al., 2021). The data include benefits on ovarian cancer (Zhu et al., 2019), advanced liver cancer (Zhao et al., 2021), and PC (P. H. Lin et al., 2019). However, the anti-tumor use of TCM is not simply to pile up anti-tumor drugs, but to use drugs under the guidance of the Chinese medicine theory, and this aspect is rarely reported. Hence, this will be the focus of our future study. After TCM intake, the patient's symptoms were evidently alleviated that he felt so "refreshed," and his being "uncomfortable" was significantly relieved. His primary tumor and lymph node metastasis were also significantly reduced after TCM use. Since the patient underwent hemodialysis, and the low-molecular-weight heparin (LMWH) was used simultaneously with the Chinese herbal medicines, it is worthy to explore the role of LMWH in the treatment. To date, no strong evidence to prove the anticancer effect of LMWH has been found. No significant difference in the survival benefit was observed among PC patients receiving standard treatment with or without LMWH (Klerk et al., 2005). Another study, which used LMWH in patients with advanced cancer (Kakkar, 2004), also revealed no significant difference in the survival benefits of patients in 1, 2, and 3 years after the addition of LMWH daily for 1 year on the basis of standard treatment. Only subgroup analyses suggested that the benefit of LMWH starts to exhibit on patients who survived for > 17 months. Altinbas et al. (2004) have reported that patients with small cell lung cancer treated with LMWH 5000 IU once daily for 18 weeks along with standard chemotherapy had a progression-free survival period of 10 months, which was significantly compared with the control group for 6 months. 
Although basic research has indicated that anticoagulants, such as unfractionated heparin or LMWH, can promote cancer cell apoptosis (Yekh, 2001), and inhibit tumor cell proliferation (Carmazzi et al., 2011) and tumor angiogenesis (Norrby & Østergaard, 2008), they do not affect the growth of locally implanted tumors (Maat & Hilgard, 1981;Milas et al., 1985). In particular, LMWH does not affect tumor cell proliferation (Sciumbata et al., 1996); thus, the inhibitory effect of LMWH on tumor cell proliferation depends on the cell type and the molecular weight of heparin (Carmazzi et al., 2011). To date, the use of anticoagulants, including LMWH, has been more focused on metastasis than on primary tumors (Bobek & Kovařík, 2004). This promotes the hypothesis that TCM plays a vital role cancer treatment, and LMWH may play a synergistic role. The synergistic anticancer effect of LMWH may not be related to its anticoagulant effect because thrombus of the internal jugular vein catheter still occurred during LMWH use in this patient. Moreover, studies have reported that removing the anticoagulant sequence from heparin could still retain its anticancer activity (Casu et al., 2009;Folkman et al., 1983;Lapierre et al., 1996;Sciumbata et al., 1996). Thus, whether LMWH has a synergistic anticancer effect remains unclear and should be further explored. In conclusion, the patient's tumor shrank significantly after using Shi-pi-san, Gui-zhi-Fu-ling-wan and LMWH, which can be explained from the perspective of TCM theory. However, this is only a case. The anticancer effects of Shi-pi-san and Gui-zhi-Fu-ling-wan in PC patients with Yang-qi deficiency syndrome and Phlegm-Stasis syndrome still need to be further confirmed by prospective randomized clinical trials. At the same time, whether it is necessary to use LMWH in combination to play a better anti-tumor effect also needs further pharmacological confirmation. Acknowledgments The author would like to thank the patient for his trust and Editage (www.editage.cn) for English language editing. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Ethical Approval Ethics approval has been obtained. Informed Consent Informed consent has been obtained.
2022-11-01T06:16:10.679Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "b0ec184181dead77cfcd49c18ce5b3eda88dcb01", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Sage", "pdf_hash": "6e8d22e7c658354d26e27ddc67115c2ce3fd25d3", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
216245382
pes2o/s2orc
v3-fos-license
Possibilities of modelling the bolts in program ANSYS In this paper, the techniques of modelling bolts in program ANSYS are investigated. Several types of the bolt models are proposed and compared. The finite element models with solid bolt with and without thread, and no-solid model. The pretension effect is applied for all types of bolts. Different types of connections and finite elements are used for modelling. In addition, the assembled mechanical structure is loaded by pretension effect and the field of stress and displacement of points around holes for bolts are investigated. Moreover, the stresses in the bolts computed by three-dimensional finite elements are investigated. Introduction The bolts are used for connection of two or more parts together. For good connection every joint have to be very tied and loaded by pretension load applied to the bolt. A lot of studies are dedicated to study the behaviour of bolts. Kim et al. [1] investigated a modelling technique of the bolted joints. They stated, that the solid bolt model provided the best accurate responses compared with the experimental results. In addition, the coupled bolt model showed the smaller computational time and memory usage. Zhang and Poirier [2] used an analytical model of bolted joints and took into account the compression deformation caused by external force. This analytical model cannot be used to bolted assemblies when the members are of different geometry or when the external forces are not symmetric. Maggi et al. [3] showed using the ANSYS how changes of geometric shapes in bolted plate could change the connection behaviour. Other application can be found in papers [4][5][6][7][8]. The bolts are assembled from parts with small edges and shapes. So, if the real bolts are modelled in the finite element software, then the simulation need a lot effort for computation of analysis. In this work, the possibilities of modelling of bolts are investigated and compared in program ANSYS. The bolts are modelled as 3D and line bodies with different definition of contacts behaviour. The finite element modelling of bolts The modelling methods of bolts are described in this chapter. The model consists of two flanges with holes for bolts and different types of modelled bolts (figure 1). In this model is taken into account the pretension effect (500 N) and a contact behaviour between bodies. All created holes have bigger diameter as diameter of considered bolts. The bottom face of the flange is fixed. Bolt model with real thread The bolt is modelled as solid body with thread (figure 2). On its body, the finite element mesh is created by three-dimensional elements. For better results, the mesh has to be very fine. The pretension is applied Bolt model with thread defined in options of contact The modelled bolt is shown in figure 3, it is simpler than the above model, because the thread is deleted. The same types of pretension effect and contacts are used. However, the frictional contact between nut and the body of bolt is modified by option "Contact correction" where the bolt thread is defined using the real parameters of thread. Bolt model without thread The modelled bolt is shown in figure 4. It is almost the same as the above model, the thread is deleted and the contact behaviour does not use option "Contact correction". The same type of pretension effect is used. The frictional contacts between bolt and flanges are used and bonded contact between nut and head of bolt is used. 
Bolt model with cylindrical head and nut The next model (figure 5) is modelled as solid body but the small faces on head and nut are removed. The bonded contact is used between nut and body of bolt. Other contacts are defined as frictional. Bolt model with cylindrical head and nut, and cylindrical joint contact The solid model is shown in figure 6 and it is the same as mentioned above. The bonded contact between nut and bolt is replaced by cylindrical joint with defined thread parameters. The thread parameters have to be defined by using the "Commands". Bolt model as line body with bonded contact and imprint faces The 3D solid body is replaced by line body (figure 7) whose diameter is the same as diameter of the bolt. The line body is meshed by beam elements BEAM188. The bonded contact is used for the connection between the line body and the flanges. One end of the line is connected with the upper imprint face on the flange and the other end is connected with the lower imprint face of the flange. However, the multi point constraint is defined in the contact formulation. The end of the link body with flange is connected by rigid links. Bolt model as line body with bonded contact The 3D solid body is replaced too by line body (figure 8) whose diameter is the same as diameter of the bolt. The line body is meshed again by beam elements BEAM188. This model is simplified because it does need the imprint faces. The bonded contact is used for the connection between the line body and the flanges. The one end of the line is connected with the upper flange and the other end is connected with the lower flange. The pinball radius must be defined in the contact options. The one should be big enough to include the cylindrical edges of real structure. However, the multi point constraint is defined in the contact formulation. The end of the link body with flange is connected by rigid links. Bolt model without solid and line body The bold is not modelled in geometry, only the beam connector is used ( figure 9). In the ANSYS Workbench Connection menu, the beam connect is created from body to body. The same procedure for the connection is used as is mentioned above. Results and discussion The model with defined contacts, pretension effects and boundary conditions is computed by ANSYS software. At first, the reaction forces for all modelled bolts are checked to ensure they are the same. The computed stresses on the whole body of the bolts can be compered for the whole 3D solid body models (figure 10). The computed stresses of the line bodies are the same for the whole cross-section. To view results on the body to body beam connector is required to use the ANSYS parametric design language commands. The displacement of the points of model are shown in the figure 11. It can be stated that the displacements are axially symmetric. For the solid bodies can be concluded that the stresses are available for all parts, the best representation or real bolt, the mesh must be fine, the high computational time. For the line body can be stated that the low computational time is required, simple geometry but no contact results. The geometry and the contact results are unavailable for the beam connector. The APDL commands are needed to know for the post-processing. The behaviour of the flanges is affected by definition of contacts. The biggest effect can be seen for body to body beam connector, because bonded contact is defined only between edges. 
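A quick analytical cross-check helps explain why the reaction forces and the nominal axial response agree across the bolt models: for a prismatic shank, both the solid model and a circular beam (BEAM188) idealization reduce to the same axial stress σ = F/A and stretch δ = FL/(EA) under the 500 N pretension, so differences between the models appear mainly in bending, contact pressure distribution and thread-local stresses. The bolt diameter, grip length and Young's modulus in the sketch below are illustrative assumptions, since they are not listed in the paper.

```python
import math

# Hand check of the axial response of a pretensioned bolt shank.
# All dimensions and material data are illustrative assumptions.
F = 500.0            # pretension force [N] (value used in the paper)
d = 8.0e-3           # nominal shank diameter [m]  -- assumed M8
L = 40.0e-3          # clamped (grip) length [m]   -- assumed
E = 210.0e9          # Young's modulus of steel [Pa] -- assumed

A = math.pi * d**2 / 4.0     # shank cross-section, identical for the solid model
                             # and for a circular BEAM188 section
sigma = F / A                # nominal axial stress
delta = F * L / (E * A)      # axial stretch under the preload
k_axial = E * A / L          # axial stiffness of the shank

print(f"shank area     : {A * 1e6:.2f} mm^2")
print(f"axial stress   : {sigma / 1e6:.2f} MPa")
print(f"axial stretch  : {delta * 1e6:.2f} um")
print(f"axial stiffness: {k_axial * 1e-6:.1f} kN/mm")
```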
Conclusion In this paper, the possibilities of modelling bolts in the ANSYS software were investigated. The solid models were compared with line models and with the beam connector without a body. The basic advantages and disadvantages were summarized. If it is necessary to know the detailed behaviour of the bolts themselves, then the solid model is the best choice, although it requires a fine mesh and a higher computational time.
2020-04-02T09:10:16.425Z
2020-04-02T00:00:00.000
{ "year": 2020, "sha1": "398193a18718a099260a2534de286a4f1ce221d7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/776/1/012021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2018373ec851e41db02e131968dbaca40f6bad98", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
249005328
pes2o/s2orc
v3-fos-license
New Diagnostic Techniques and Treatment of Ischemic Stroke: A Review Article. Doddikindi Nanditha, Bejjarapu Pallavi, V. Sai Harshitha, G. Ramya Balaprabha, Rama Rao. 1. Department of Pharm D Interns, CMR College of Pharmacy, Kandlakoya, Hyderabad, 501401, India. 2. Assistant Professor of Pharm D, CMR College of Pharmacy, Kandlakoya, Hyderabad, 501401, India. 3. Professor and Principal, CMR College of Pharmacy, Kandlakoya, Hyderabad, 501401, India. *Corresponding author's E-mail: nandithareddy886@gmail.com INTRODUCTION Stroke occurs when blood vessels in the brain rupture and bleed, or when the blood supply to the brain is blocked by plaque 1 . Because of the blockage, the brain cells cannot get oxygen and start to die within a few minutes. This can cause brain damage, long-term disability, and sometimes death 2 . Stroke is the second leading cause of death. There are two kinds of stroke: ischemic stroke and hemorrhagic stroke. About 85% of strokes are ischemic and the remaining 15% are hemorrhagic 3 . Ischemic stroke occurs when blood flow to the brain is restricted due to vascular stenosis or insufficient oxygen supply. Hemorrhagic stroke occurs when a blood vessel ruptures, which leads to accumulation of blood in the brain 4 . Ischemic stroke ranges from mild (transient ischemic attack) to severe ischemic stroke. TIA is also called a mini-stroke and differs from major stroke because blood flow to the brain is blocked for only a short time, i.e., not more than 5 minutes 5 . Dual antiplatelet therapy initiated within 24 hours of symptom onset and continued for 3 weeks minimizes the risk of recurrent ischemic stroke. In acute ischemic stroke, thrombolysis within 4.5 hours and mechanical thrombectomy within 24 hours after symptom onset improve functional outcomes 6 . Ischemic stroke can occur due to various pathophysiological changes. Ischemic stroke has highly effective evidence-based therapies such as alteplase and mechanical thrombectomy 7 . The American Stroke Association recommends remembering F.A.S.T., which stands for F-face drooping, A-arm weakness, S-speech problems, T-time to call. Other symptoms may include difficulty in walking, dizziness, confusion, and severe headache 8 . Diagnosis • Physical exam • Blood tests • Computerized tomography (CT) scan: A CT scan uses a sequence of X-rays to create a detailed image of your brain. A CT scan can show bleeding in the brain, an ischemic stroke, a tumor or other conditions. Dye is injected into the bloodstream to view blood vessels in the neck and brain in greater detail (computerized tomography angiography). • Magnetic resonance imaging (MRI): An MRI utilizes powerful radio waves and magnets to create a detailed view of the brain. An MRI can detect brain tissue damaged by an ischemic stroke and brain hemorrhage. A dye is injected into a blood vessel to view the arteries and veins and highlight blood flow (magnetic resonance angiography or magnetic resonance venography).
• Carotid ultrasound: In this test, sound waves establish detailed images of the inside of the carotid arteries in the neck. This test shows the build-up of fatty deposits (plaques) and blood flow in the carotid arteries. • Cerebral angiogram; In this uncommonly used test, a thin, flexible tube (catheter)is inserted through a small incision, usually in the groin, and guides it through major arteries and into the carotid or vertebral artery. A dye is injected into blood vessels to make them clear under X-ray imaging. This procedure gives a complete view of arteries in the brain and neck. • Echocardiogram: An echocardiogram uses sound waves to establish detailed images of the heart. An echocardiogram can find a source of clots in the heart that may have traveled from heart to brain and caused stroke 9 . • Cone beam imaging: It is a technology used for detecting hemorrhage, Occlusion site, ischemic core, and tissue that is at risk. It bypasses the CT scan and directs towards the Angio suite for imaging and appropriate care. • Cerebrotech stroke detecting visor: It is a device that detects emergent large vessel occlusion in suspected stroke patients. It is designed to detect stroke within seconds and uses low-energy radio waves to detect the blockages 10 . Emergency Response Tissue Plasminogen Activator (tPA): Tissue plasminogen activator (tPA) is the most common emergency stroke treatment medication. tPA works by dissolving arterial blood clots that obstruct nourishment from getting to the brain. This life-saving medication is delivered intravenously to ischemic stroke patients within 4.5 hours of a stroke. Mechanical Thrombectomy: If there is a medical reason to avoid the use of tPA, neurovascular surgeons perform mechanical thrombectomy through a device. These devices include stentrievers (including Solitaire) and the Penumbra system. Endovascular thrombectomy can be performed up to 8 hours after a stroke 11 . Secondary Treatment Secondary treatment mainly focuses on diagnosing and treating the condition that caused the stroke. Carotid artery stenosis is the narrowing of the two large blood vessels in the neck that supply blood to the brain. The narrowing is usually caused by the build-up of cholesterol. This condition reports for about 25 percent of ischemic strokes. carotid artery stenosis can be treated through: Carotid endarterectomy (CEA) -the surgical removal of plaque in the artery Carotid artery stenting (CAS)the minimally invasive placement of a stent in the artery using catheters to prevent the artery from narrowing 12 . Vivistim nerve stimulation treatment: A device known as vivistim is used to stimulate the vagus nerve. It works by vagus nerve stimulation with muscle movement, this stimulation leads to the strengthening of neural circuits in the brain associated with motor functioning, learning, and memory. Cerenovus aneurysm device: It is received a CE mark for its Bravo flow diverter device. It is used in patients suffering from intracranial aneurysms. This device is designed to divert blood flow from an aneurysm which reduces the risk of stroke. This device aims to improve clinician ease of use and reduce the length of procedure 10 . Intracranial atherosclerotic disease (ICAD): Is a narrowing of arteries in the brain. Approximately 10 percent of strokes happen due to ICAD. When necessary, balloon angioplasty (widening of the arteries) or intracranial stenting is performed. 
CONCLUSION ✓ Appropriate treatment of ischemic stroke is essential to reduce mortality and morbidity. ✓ Thrombolysis within 4.5 hours and mechanical thrombectomy within 24 hours after symptom onset can improve functional outcomes. ✓ Treatment is mainly focused on symptomatic management, followed by secondary prevention and rehabilitation. ✓ Administration of tissue plasminogen activator has remained the mainstay of treatment for decades.
2022-05-24T15:05:09.447Z
2022-04-15T00:00:00.000
{ "year": 2022, "sha1": "dbc25dcd9243845946e41d9406292f3f4019e31e", "oa_license": null, "oa_url": "https://doi.org/10.47583/ijpsrr.2022.v73i02.023", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "20da3a133a232e9a3ce75a2ed95fb09dd525a798", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
116480813
pes2o/s2orc
v3-fos-license
Electrical Power Supply of Remote Maritime Areas: A Review of Hybrid Systems Based on Marine Renewable Energies : Ocean energy holds out great potential for supplying remote maritime areas with their energy requirements, where the grid size is often small and unconnected to a continental grid. Thanks to their high maturity and competitive price, solar and wind energies are currently the most used to provide electrical energy. However, their intermittency and variability limit the power supply reliability. To solve this drawback, storage systems and Diesel generators are often used. Otherwise, among all marine renewable energies, tidal and wave energies are reaching an interesting technical level of maturity. The better predictability of these sources makes them more reliable than other alternatives. Thus, combining different renewable energy sources would reduce the intermittency and variability of the total production and so diminish the storage and genset requirements. To foster marine energy integration and new multisource system development, an up-to-date review of projects already carried out in this field is proposed. This article first presents the main characteristics of the different sources which can provide electrical energy in remote maritime areas: solar, wind, tidal, and wave energies. Then, a review of multi-source systems based on marine energies is presented, concerning not only industrial projects but also concepts and research work. Finally, the main advantages and limits are discussed. Introduction The electricity supply of remote marine areas is mostly generated from solar and wind energy, thanks to their maturity and attractive prices compared to other renewable energies [1]. However, these renewable energy sources are based on the exploitation of intermittent resources. To resolve this drawback, storage systems such as batteries and Diesel generators can be used, but investment costs and induced pollution are often not favorable. Moreover, it is costly and logistically difficult to implement a diesel supply in remote marine areas. However, over the last few years, marine renewable energies have encountered some interest and a genuine development by the industry, because of their potential available energy [2]. Tidal and wave energies are the most developed among the different marine energies [3,4]. Hence, hybrid systems combining solar, wind, and marine energies can now be developed to provide sustainable and reliable electrical energy. Wind and wave hybrid energy systems have already been developed, according to reviews written by different authors [5][6][7][8][9]. Some multipurpose platforms have been studied in terms of feasibility [10,11]. The present paper aims to put forward a review of hybrid systems combining marine energies on the same platform or structure, such as wave, tidal, wind, and solar energies. Firstly, a brief review of these renewable energy sources is detailed, to show the basics and existing technologies. A comparison between the different temporal characteristics of each renewable source is given, to highlight the different temporal scales and forecast abilities. Then, a review of industrial and academic multisource systems is presented, from projects tested under real sea conditions to those still at the concept stage. Finally, some advantages and limitations of multisource systems based on marine energies are listed. 
A Short Review of Renewable Energy Sources Concerned by This Study Island areas in maritime environments present the advantage of having several primary resources in their neighborhoods. Concerning the development and maturity of renewable energies over recent years, the main sources that can be used seem to be solar and wind energies, which present a high technical level of maturity and the most interesting cost [1,2]. Furthermore, among the marine energies available from the ocean, tidal and wave energies are currently two of the most advanced and promising technologies, with a better maturity level than other marine energies such as thermal and salinity gradient conversion energies [4]. Marine energies have the advantage of a good predictability and a high available energy level [1,2]. This part of the review aims to briefly present these four renewable energy sources in terms of the operating principle, main technologies existing today, and temporal resource characteristics. An overview of the main technologies currently existing is shown in Figure 1. Solar Energy At present, solar energy is one of the most widely used renewable energies in the world. Photovoltaic panels used to convert solar energy into electrical energy have now reached a high maturity level, with many technologies available on the market for different kinds of application [12]. Fundamentals of Operating Processes A solar cell uses a semi-conductor material, often silicon, to absorb photons of incident solar radiation received by the cell [12]. A semi-conductor is based on two energy bands. One of them is called the valence band. Electron presence in this band is allowed. In the second energy band, called the conduction band, electrons are absents. The band between the valence band and the conduction band is called the band gap. An electron can move from the valence band to the conduction band if the amount of energy provided by incident solar radiation to the electron is larger than the band gap value. This electron can move into an external circuit due to the p-n junction. This process results the vertical axis wind turbine (VAWT). The two technologies are compared by M.R. Islam et al. [19]. The main characteristics of each of them are discussed in the following two paragraphs. HAWT are characterized by a turbine placed on a nacelle at the top of a hub, with a rotation axis parallel to the ground. As explained in a previous paper [24], different technologies exist. They are classified according to different criteria: the number of blades (two, three or more), the rotor orientation (upwind or downwind), the hub design, the rotor control (stall or pitch), and the yaw orientation system (active or free). The low cut-in speed and the high power coefficient are often cited as advantages of horizontal turbines. However, a nacelle orientation system should be used to follow the wind direction changes and the installation presents more constraints [19,20]. This kind of wind turbine is mostly used in large scale systems. Wind turbines with three blades and upwind rotor orientation are the most widely used technology today [19]. For VAWT, the blades rotate around a vertical axis. The main advantage of this is that there is no need for an orientation system, as this kind of turbine can absorb wind from any direction, and it operates better than HAWT in the case of turbulent winds [19,20]. 
Among the different sub-technologies of VAWT, two categories are often found: the Darrieus turbines, which are based on lift forces, and the Savonius turbines driven by drag forces [20,24]. VAWT are mostly used for small applications and small power systems, for example residential networks [19,20,23]. The offshore wind turbines can be also distinguished according to the substructure and the foundations. Three categories can be found, according to the water depth and the distance to the shore [5,23,25]: • In case of shallow water installation (water depth lower than 30 m), several bottom-fixed substructures can be used: the gravity-based substructures and the monopile substructures which are currently the most frequently used [5,25], and the suction bucket still at a development stage [5]. • For transitional water (water depth between 30 m and 80 m), others kinds of bottom-fixed structures are used. The jacket frames structures, the tri-piles structures, and the tripod structures are the most used [5]. • Finally, in case of water depth larger than 80 m, floating structures are used [5]. The mast is mounted on a floating structure moored to the seabed. Three kinds of floating structures exist: the ballast stabilized structures (or spar floaters), the tensioned-leg platforms (also called mooring line stabilized), and the semi-submersible platforms [5,25]. Floating wind turbines are mainly considered for offshore wind farms far from the shore, as the wind resource available is larger than along the coast. More details related to offshore wind turbine technologies can be found in previous papers [5,23,25]. Tidal Current Energy Among all existing marine renewable energy converters, tidal energy converters are one of the most developed technologies [2,4,26]. Belonging to the hydrokinetic type of energy [27], two kinds of tidal energy can be distinguished [28,29]. The first is tidal kinetic energy, which is induced by water movement according to tide cycles, for which turbines are used to produce electricity. The second is the tidal potential energy, where tidal barrages are used to extract the energy resulting from the water elevation cycle (also called tidal range devices). In this paper, only the tidal kinetic energy is studied, as the extracting technology (a turbine), the power density, the size requirements, and the power range are more suitable for coastal areas [29]. Moreover, the geographical areas suitable for tidal range devices are quite rare in the world [29]. Indeed, their installation is possible only in areas with a water level elevation about several meters. Thus, the use of tidal range systems to provide electricity for maritime remote areas, such as islands, is not discussed in this article. Fundamentals of Operating Processes Tidal current turbines produce electrical power from the kinetic energy of the rise and fall movements of tides in coastal areas. Indeed, due to the gravitational and rotational forces induced by the Earth, Sun, and Moon positions, ocean water moves horizontally according to cycles that are easily predictable, and which are related to the time and location on the Earth [2,30,31]. Water flow allows the submerged turbine rotation on which blades are mounted, similarly to the process used in wind turbines [2,28]. The turbine drives a generator, for which many technologies can be used [28,32,33]. The output power depends on the tidal current speed [33]. This kind of turbine can also be used to extract kinetic energy from river currents [26]. 
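Both wind and tidal current turbines extract power from the kinetic energy of a moving fluid, so their outputs can be compared through the swept-area power density P/A = ½ ρ C_p v³. The short sketch below quantifies the density argument usually made in favour of tidal turbines; the power coefficients and the flow speeds are typical illustrative values, not figures taken from the cited references.

```python
# Swept-area power density of a turbine, P/A = 0.5 * rho * Cp * v^3.
# The power-coefficient values are typical figures assumed for illustration.
def power_density(rho, cp, v):
    """Extracted power per square metre of swept area [W/m^2]."""
    return 0.5 * rho * cp * v**3

rho_air, rho_sea = 1.225, 1025.0     # fluid densities [kg/m^3]
cp_wind, cp_tidal = 0.45, 0.40       # assumed power coefficients

for v_wind, v_tide in [(8.0, 1.5), (12.0, 2.5), (15.0, 3.0)]:
    pw = power_density(rho_air, cp_wind, v_wind)
    pt = power_density(rho_sea, cp_tidal, v_tide)
    print(f"wind {v_wind:4.1f} m/s -> {pw:7.1f} W/m^2 | "
          f"tide {v_tide:3.1f} m/s -> {pt:7.1f} W/m^2")
```

Even at a moderate current of 2.5 m/s, seawater delivers several kilowatts per square metre of swept area, several times more than a good wind site at 12 m/s, which is why tidal turbines can be much more compact for a given rated power.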
Tidal turbines are often compared to wind turbines with respect to the turbine operating principles. However, the tidal turbine performs better because the density of water is greater than that of air, increasing the power density [26,29]. In Horizontal Axis Marine Current Turbines (HAMCT), the turbine is composed of two or more blades rotating around an axis parallel to the water flow direction [26,28,29,31]. A list of projects is given by Zhang et al. [33]. This is currently the most common tidal turbine on the market [4], due to its good technical and economical maturity level. Thus, HAMCT now reach the megawatt scale [36]. Moreover, floating systems have attracted some interest in the last few years and they are now the subject of active research [36]. Vertical Axis Marine Current Turbines (VAMCT) can harness tidal flow from any direction with two, three, or more blades rotating around an axis that is perpendicular to the current direction [26,29,31,33]. Two main kinds of vertical axis turbines exist: the Darrieus turbine and the helical turbine (also called the Gorlov turbine) [26,31]. However, some disadvantages limit their development. The low self-starting capacity and efficiency, along with torque variations, are often cited [36]. Another category exists, that of oscillating hydrofoil systems. Tidal currents create a pressure difference across a foil section, which creates drag and lift forces on the oscillating foil attached to a lever arm [29]. A linear generator is often used to generate the electrical output power [33]. However, this technology is still at the development level [27]. Other kinds of devices can be found. Among them, ducted turbines, the tidal kite, and helical screw systems (also called the Archimedes screw) can be cited [26,29,31,35]. A further classification of tidal kinetic energy converter technologies exists, based on the water flow harnessing techniques. Axial-flow and cross-flow turbines can be distinguished, and horizontal and vertical designs exist for both of these [29].
Wave Energy
Concerning other renewable energy sources, wave energy has received attention from academics and industrialists, mostly since the 1980s, as the available worldwide resource is considerable [2,37]. Many wave energy converter technologies have been developed up to today, reflected in particular in a large number of patent publications [4]. Wave energy is sometimes classified in the hydrokinetic energy category [26]. Moreover, wave energy is now considered to be suitable for the electricity supply of small islands and coastal areas [1,3,38]. The Atlantic Ocean is often considered for wave energy projects due to the high wave power density available [39], but some recent articles have analyzed several island case studies in the Mediterranean Sea [40][41][42], involving some changes in technologies in order to fit the wave characteristics of the considered location [40].
Fundamentals of Operating Processes
A wave energy converter transforms wave energy into electrical energy. Wave energy comes from the effect of wind on the sea surface, creating waves. These follow the wind direction across several thousands of kilometers, creating significant swell, until they reach the shallow waters near the shore where the wave speed decreases. The power output of a wave energy converter depends on the wave height and its peak period [26,37,43]. Harnessing wave energy and converting it into electrical energy is a complex process compared to other renewable energy sources.
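As a rough numerical illustration of the dependence on wave height and period mentioned above, the sketch below evaluates the commonly used deep-water estimate of wave energy flux per metre of wave crest, P ≈ ρg²Hs²Te/(64π). The sea states used are illustrative assumptions, not data from the cited studies; the available flux then has to pass through the multi-stage conversion chain described next.

```python
import math

# Deep-water wave energy flux per metre of wave crest (linear wave theory):
#   P = rho * g**2 * Hs**2 * Te / (64 * pi)
# Hs: significant wave height, Te: energy period. Values below are assumed.

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def wave_power_kw_per_m(hs_m: float, te_s: float) -> float:
    """Approximate wave energy flux in kW per metre of wave front."""
    return RHO_SEAWATER * G**2 * hs_m**2 * te_s / (64.0 * math.pi) / 1000.0

for hs, te in [(1.0, 6.0), (2.0, 8.0), (3.0, 10.0)]:
    print(f"Hs = {hs} m, Te = {te} s -> {wave_power_kw_per_m(hs, te):.1f} kW/m")
```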
Indeed, several conversion stages are necessary: primary, secondary, and tertiary conversion stages, according to A.E. Price [44] and several other past papers [39,45,46]. Based on these papers, short descriptions are given below. The primary conversion stage aims to convert wave motion into body motion, air-flow, or water-flow, using mechanical, pneumatic, or hydraulic systems, called prime movers. This stage converts a low frequency motion (the wave) into a faster motion. The secondary conversion stage transforms the fluid energy of the first stage into electrical energy. Depending on the fluid used in the primary stage, the converter used in this step can be an air turbine, a hydraulic one, or a hydraulic motor connected to an electrical generator. They are called Power Take-Off systems (PTO). This step converts the low frequency fluid or mechanical motion into a high rotational speed with the electrical generator. The tertiary conversion stage aims to adapt the electrical output characteristics of the wave energy converter to the grid requirements with power electronic interfaces. Some wave energy converters show merged primary and secondary conversion stages, where wave energy is directly transformed into electrical energy with a linear generator [39,46]. Wave energy converters are commonly classified according to their location, their size and orientation relative to the wave, and their operating principle:
• Location: onshore or shoreline, nearshore, or offshore. Onshore systems are placed on a cliff, a dam, or land without a mooring system. Nearshore systems often lie between 0.5 and 2 km from shore, in shallow waters (between 10 and 25 m deep). The first generation of wave energy converters was based on onshore and nearshore systems [54]. Offshore wave energy converters are located several kilometers from the shore in deep water (>40 m), with the ability to harness high wave energy levels [2,37,39,43,45,46,53].
• Point absorber: these devices are small with respect to the wavelength and can absorb energy from any wave direction.
• Terminator: the device axis is perpendicular to the wave propagation direction.
• Attenuator: the device axis is parallel to the wave propagation direction.
• Oscillating Water Columns (OWC) are based on the compression and decompression forces in the air chamber created by water level variations, which drive a turbine. OWC devices can be either deployed in shallow water as a stationary structure, or in deep water, for which floating systems can be used [55]. Recently, a new OWC device, called U-OWC, has been developed [56]. Based on a vertical U-duct, this new structure prevents the wave from propagating into the inner body as in a traditional OWC device.
• Overtopping Devices (OTD) use the water level difference between the sea and a partially submerged reservoir to produce electricity (potential energy) when the wave overtops the structure and falls into the reservoir. The turbine rotates by releasing the water back into the sea. Some overtopping devices are integrated into a breakwater [57,58]. Moreover, the structural design of some overtopping devices can be suitable for other maritime needs [59].
• Wave Activated Bodies (WAB) or Oscillating Devices are based on the use of one or more moving bodies [26,37,48]. Three categories of wave activated bodies can be distinguished: heaving buoy, surface attenuator, and oscillating wave surge converter [43]. The performance of these devices depends on the mooring system, for which different configurations exist [60].
Some references classify wave energy converters according to other criteria.
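Before turning to those other criteria, the classification just described (location, size and orientation relative to the wave, and operating principle) can be restated as a simple data structure; the sketch below is only a summary of the categories listed above, not an exhaustive taxonomy.

```python
# Summary of the wave energy converter (WEC) classification criteria
# described above. This restates the categories in the text and is not
# an exhaustive taxonomy.
WEC_CLASSIFICATION = {
    "location": ["onshore/shoreline", "nearshore (0.5-2 km, 10-25 m depth)",
                 "offshore (several km, > 40 m depth)"],
    "size_and_orientation": ["point absorber", "terminator", "attenuator"],
    "operating_principle": ["oscillating water column (OWC)",
                            "overtopping device (OTD)",
                            "wave activated body (WAB)"],
}

def describe(device: dict) -> str:
    """Build a short textual description from the three criteria."""
    return ", ".join(device[k] for k in ("location", "size_and_orientation",
                                         "operating_principle"))

# Example: an offshore floating point-absorber wave activated body.
example = {"location": "offshore (several km, > 40 m depth)",
           "size_and_orientation": "point absorber",
           "operating_principle": "wave activated body (WAB)"}
print(describe(example))
```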
A review of the electrical generators used, control methods employed, mechanical and/or electrical controllers applied, wave conditions considered, and power electronic converters used for different projects is proposed by E. Ozkop and I.H. Altas [52]. A classification by the power take-off technology (second conversion stage) is also given in [46], resulting in three main sub-categories: the hydraulic PTO, the turbine PTO, and the all-electric PTO, as discussed in the previous Section. Mooring configurations are also discussed [43,48,51]. A new classification based on the operating principle has recently been carried out [38]. A. Babarit proposed a comparison of the existing technologies based on the so-called capture width ratio [47]. Some development trends concerning the different criteria listed earlier are highlighted in previous papers [4,39]. Offshore application, floating installation, and point absorber technology are the most common aspects considered for the projects reviewed.
Intermittency and Variability Comparison
The four renewable sources introduced in the previous sections present different temporal characteristics. Indeed, as they are based on the use of different primary resources, their intermittency and variability are different, with more or less predictability, so they cannot be dispatched as conventional sources can be [61]. A limited number of studies have discussed these aspects considering all the sources. In one of these [61], Widén et al. present the main intermittencies and variabilities of the four sources, with a review of existing forecast methods. The standard deviation of each source according to the different time scales (frequency bands) is studied by J. Olauson et al. at the level of a country [62]. The highest standard deviation rate for the different sources is related to short-term timescales for solar (<2 days), mid/short-term for wind (2 days to 2 weeks), long-term for wave (>4 months), and mid-term for tidal (2 weeks to 4 months) energies [62]. Natural cycle timescales of solar, wind, tidal, and wave resources are also discussed in the International Energy Agency report [63]. The variability of solar, wind, and tidal resources for the UK is studied by P. Coker et al., considering the persistence, statistical distribution, frequency, and correlation with demand [64]. G. Reikard et al. studied the variabilities of solar, wind, and wave energy for integration into the grid, with a forecasting system proposed for the three sources based on a regression method. Wave energy is shown to be more predictable than solar and wind, due to the strong weather impact for the latter two sources [65]. Recently, a review of solar and wind space-time variabilities has been conducted by K. Engeland et al., but this does not include tidal and wave resources [66]. Table 1 presents the main characteristics in terms of variability and intermittency for each source, with the origin and existing methods to evaluate temporal variations according to different publications [61][62][63][64][65][66].
Review of Multisource Projects Including Renewable Marine Sources
As explained in the previous section, the most developed marine energies at the current time are tidal kinetic and wave energies. Thus, they can be used to provide electrical power in maritime areas, for example in floating systems or island communities.
During recent decades, the renewable energies used in these applications were often solar and wind energies, but the intermittency of these sources required the use of Diesel generators or storage capacities. Given the time characteristics of solar, wind, tidal, and wave energies, the development of multisource systems combining several of these sources could bring a sustainable and reliable power level to ensure the load supply in the future. This Section aims to present a review of projects combining the use of some or all of the four sources presented previously on the same platform. The level of detail depends on the kind of project and its development status. Firstly, hybrid system projects developed by companies will be reviewed, from hybrid devices tested in offshore conditions to projects that are still at concept status. A review of several energy island concepts will also be given. Then, an up-to-date review of studies concerning sizing optimization and energy management systems will be carried out, considering the published papers in these fields. Several projects presented in the following sections have already been more thoroughly reviewed [5][6][7][8][9][10][11][67], especially for hybrid wind-wave systems. However, farms and colocated systems, such as the independent and combined arrays described by C. Pérez-Collazo et al. [5], are not part of the focus of this article, since they are not considered as combined systems.
Review of Industrial Hybrid System Projects Including Marine Energies
Multisource systems that include marine energy are still scarce. As wind turbines now reach a high level of maturity, most of these projects consider offshore wind turbine use. Two categories of projects can be identified, according to the maturity level and the development status. Several projects have been tested under real sea conditions (meaning potentially severe environmental conditions) either at a reduced scale or at full scale (Section 3.1.1), whereas others have still not progressed beyond the concept step (Section 3.1.2). A review is given below and is summarized in Tables 2 and 3. Technologies are characterized according to the classification given in the previous Section when the technical information is available. An overview of industrial hybrid system projects according to their power scale and to their furthest known development status is given in Figure 2. Finally, some island energy concepts will be presented (Section 3.1.3).
Projects Tested under Real Sea Conditions
Despite their current scarcity, multisource systems tested under real sea conditions can be classified according to the sources used. Some of these systems relied on wind or wave energy combined with one or more other renewable sources, whereas other projects only consider wind and wave energies. A single industrial project uses solar, wind, tidal, and wave energies on the same platform [68]. Details of these systems are given below, according to the sources used.
• Wind and wave: several projects have considered these sources. The Poseidon P37 product, designed by Floating Power Plant, is currently the most advanced technology in the multisource floating platforms field, as it was the first hybrid system connected to the grid. Twenty months of grid-connected tests were carried out successfully on the Danish coasts, with three 11 kW wind turbines and 30 kW of wave energy converters. A megawatt scale will be reached with the P80 device, which is expected for 2020 [69,70].
The W2Power device designed by Pelagic Power uses the same energies, with 10 MW installed on the platform [71][72][73]. However, this project is still at the reduced-scale test status, as the platform currently tested in the Canary Islands is the WIP10+ device, a 1:6 scale prototype with only wind turbines [74]. Previously, wave tank tests allowed the mooring system to be validated and the behavior in both operational and survival modes to be assessed [72].
• Wind and solar: although photovoltaic panels and wind turbines now reach a high maturity level, projects combining both energies on a floating platform are still scarce. The Wind Lens hybrid project, developed by Kyushu University (Fukuoka, Japan), has considered wind turbines (Wind Lens turbine) and solar panels on a floating platform [75][76][77][78], connected to batteries to ensure the electrical power supply of measurement and control devices. The total power installed reached 8 kW. The authors observed that the offshore wind turbine production is better than that of a similar land-based turbine, due to higher wind speed values. In winter, the energy produced by the offshore wind turbine is two to three times the energy produced by the land-based wind turbine [76]. A more powerful platform is expected in the future according to [76].
• Wind and tidal: the Skwid system designed by the MODEC company seems currently to be the only project combining wind and tidal turbines at an industrial scale. However, little information is available concerning this project, since the system sank during installation in 2014 [79,80]. The turbines used could harness wind and tidal current flowing from any direction thanks to their vertical axis, avoiding the complex orientation systems needed by horizontal axis turbines.
• Wave and solar: the Mighty Whale project is one of the oldest multisource systems considering the use of ocean energy [81]. During tests at sea between 1998 and 2002, observations showed that combining the use of wave and solar energies allowed the power production to be smoothed and reduced the auxiliary generator use by storing the energy in batteries. However, the results presented in a previous paper [81] are strongly dependent on the climatic conditions (Sea of Japan).
• Wind, solar, tidal, and wave: the PH4S device developed by the French company Geps Techno is currently the only platform combining the four renewable sources [68]. A prototype is currently being tested on the French Atlantic coast and the first observations from this company show a reduction of the global power intermittency.
The review shown in Table 2 demonstrates that devices tested under real sea conditions are still scarce and often involve systems of only a few dozen kilowatts. All the projects have considered wind and/or wave energies on a floating or fixed platform. These structures often come from a previous wave energy converter platform (e.g., Poseidon P37, Mighty Whale) or an offshore wind turbine system (e.g., Skwid), to which another renewable source has been added. All of the offshore projects tested report that the energies used complement each other, bringing a smoother electrical power output. When they are not connected to the grid, the power sources are used to supply the platform measurement and control devices. Most of these projects were tested at a reduced scale, initially in water tanks before sea installation.
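The smoothing effect reported by these sea-tested projects can be illustrated numerically: when two sources whose fluctuations are partly out of phase are combined, the relative variability of the total output drops. The sketch below uses synthetic sine-plus-noise profiles as stand-ins for wind and wave production; the profiles, amplitudes, and noise levels are assumptions for illustration and do not come from the projects cited above.

```python
import math
import random

random.seed(0)

# Synthetic hourly production profiles (arbitrary units) for two sources
# whose fluctuations are partly out of phase. These are illustrative
# assumptions, not data from the projects reviewed above.
hours = range(24 * 7)
wind = [max(0.0, 1.0 + 0.8 * math.sin(2 * math.pi * t / 24) + random.gauss(0, 0.3))
        for t in hours]
wave = [max(0.0, 1.0 + 0.8 * math.sin(2 * math.pi * t / 24 + math.pi) + random.gauss(0, 0.3))
        for t in hours]

def coeff_of_variation(series):
    """Standard deviation divided by the mean, a simple variability index."""
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    return (var ** 0.5) / mean

combined = [w + v for w, v in zip(wind, wave)]
print(f"CV wind alone:      {coeff_of_variation(wind):.2f}")
print(f"CV wave alone:      {coeff_of_variation(wave):.2f}")
print(f"CV combined output: {coeff_of_variation(combined):.2f}")
```

With these assumed profiles the coefficient of variation of the combined output is markedly lower than that of either source alone, which is the qualitative effect the sea trials report.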
Projects Still at the Concept Status
The industrial project review can be supplemented by projects that have remained at the concept status without sea installation. As detailed in Table 3, most of these concepts concern wind and wave systems, even if a wind-tidal concept [11] and a floating platform concept combining all of the sources considered in this study exist [82,83]. Considering all these projects, the following points can be highlighted:
• Wind and wave: many wind-wave system concepts exist. Some of these have been partially tested, either in water tanks or at sea, for one of the two renewable sources.
• Wind and tidal: MCT has considered a wind turbine mounted on the tidal turbine structure in the SeaGen W device. However, this project seems only to be a concept, judging by the large-scale tidal projects without the wind turbine recently developed by the company [11,86].
• Wind, solar, tidal, and wave: in 2012, Hann Ocean (Singapore) patented the layout and design of the Hexifloat device, a platform concept allowing four energies to be harnessed [83], but this is still at the concept status today according to the company's website [82].
Other concepts have been developed in the MARINA Platform framework (Marine Renewable Integrated Application Platform), a European project undertaken between 2010 and 2014 to study different aspects of combined offshore platforms, such as feasibility, economic profitability, engineering, etc. Thus, several partners have worked on tools, methods, and protocols to ease multipurpose platform design. Among the different platforms proposed [87], three wind-wave hybrid system concepts have been considered: the Spar Torus Combination (STC) [88], the Semi-submersible Flap Combination (SFC), and the large floater with multiple Oscillating Water Columns and one wind turbine (OWC Array) [11]. The concepts reviewed here often considered wind and wave energies. This trend could be explained by the fact that some companies have already developed a wind or wave energy converter and would like to share their structure with another kind of renewable energy converter. Costs could then be reduced (design, equipment, installation, operation, maintenance, etc.) and power production could be increased with a smoother output level, as explained by Pérez-Collazo et al. [5], M. Karimirad [6], and Casale et al. [10]. Positive aspects of combined wind-wave devices are presented in these references. However, the review carried out in this section shows that many of these concepts have not gone beyond the idea step. High development costs can explain this trend. Also, as offshore tidal and wave energy converters alone are still scarce in the world, their maturity level is not as high as that of land-based renewable energies and offshore wind turbines. Casale et al. have suggested [10] building hybrid systems around proven and mature offshore systems, for example wind turbines, after these technologies have been individually validated and tested. This consideration could help concepts to overcome this step, as seen in a few cases where wave or wind energy converters have been tested in wave tanks or at sea [84,85,89,90]. For several projects listed in our review, little information is available to explain their current status and perspectives. It is supposed that some companies have cancelled their hybrid device concept, focusing on separate technologies.
Energy Island Concepts
Energy islands [10] or island systems [5] are considered to be large multipurpose platforms including several renewable energy sources and, in contrast to the projects reviewed in the previous sections, infrastructures for other activities and functionalities [5,10]. C. Pérez-Collazo et al. divided this kind of project into two categories: artificial islands built on a reef or dyke, and floating islands, considered as very large floating platforms [5]. However, all projects in this field show that they are only at the stage of concepts and ideas. Among them, several projects can be quoted [99]. The last of these seems to have the highest potential for near-term development. Economic, environmental, logistical, social, and design aspects have been considered. In addition to the renewable energy converters used in these concepts (solar energy, wind energy, and OTEC), other infrastructures and services have been proposed, such as leisure (Leisure Island) or aquaculture (Green & Blue) [100]. All of these island concepts have apparently not gone beyond the idea stage. Also, the powers considered are higher than the floating platform power scales reviewed in previous sections. Thus, island projects seem to be far from reaching industrial and commercial status, given the high costs, technical challenges, and facilities required to build such projects [100]. Financial support should be found to overcome the concept status. However, sharing infrastructures with other concerns could help project development by involving different industrial and economic sectors [10,100].
Review of Academic Research Concerning Hybrid Systems with Marine Energies
Multisource systems with marine energies are still at early stages of the development process. Thus, several academic analyses have studied combined renewable energies exploited at sea. These analyses are often at an earlier stage than industrial development processes and they mostly study theoretical hybrid systems. Among the different papers describing such systems from the electrical engineering point of view, two categories can be found. The first discusses energy management system and control aspects, whereas the second concerns sizing optimization aspects with method and tool design. Several papers are reviewed in the following sections according to this classification.
Energy Management System and Control Studies
Hybrid systems using marine energies have been modeled and simulated by several authors to design appropriate energy management systems and control strategies. As in the industrial project review presented in the previous section, these academic works can be sub-classified with respect to the considered sources, as the requirements and specifications can differ according to the renewable source. Technical information and main outcomes are summarized in Table 4.
• Wave and wind: an off-grid wind-wave system with battery storage and a variable AC load has been studied by S.Y. Lu et al. [101].
• Wind and tidal: a wind-tidal hybrid concept called HOTT (Hybrid Off-shore and Tidal Turbine) has been studied in several papers concerning wind power fluctuation compensation [106][107][108][109]. Thus, M.L. Rahman et al. proposed [106] the use of a tidal generator as a flywheel storage system, with a one-way clutch ensuring mechanical separation. The tidal generator produces or stores electrical power depending on the inverter control.
In a previous paper [107], wind power fluctuations are compensated by tidal generator control for the lowest frequencies and by battery control for the highest ones. The authors stated that tidal compensation reduced the required battery capacity, whereas compensating the largest long-term fluctuations required an increase in tidal turbine power. The battery storage system was studied in a previous paper [108]. Tidal generator control for wind power fluctuation is also considered in a previous paper [109]. Concerning the grid connection, two solutions for large-scale turbines have been studied by S. Pierre [110]. The DC-link connection between the two generators before the grid-tied inverter makes fluctuation smoothing easier. The separated solution, consisting of two back-to-back converters for the AC grid connection, allows the extracted power to be maximized. Finally, Y. Fan et al. presented [111] a novel hybrid wind-tidal architecture, where a hydraulic accumulator is used as a storage and balance system, placed between both hydraulic pumps and the electrical generator. Hydraulic pumps transform the output turbine mechanical energy into hydraulic energy. Fluctuations of the output turbine mechanical powers are limited by hydraulic pump control, while the hydraulic accumulator is controlled according to the load demand.
• Wind, tidal, and wave: C. Qin et al. [112] simulated the compensation of short-term output power fluctuations induced by intermittent wind and wave energies (from seconds to minutes). Thus, the tidal generator was used to smooth the output power, according to the tidal current speed. When the tidal turbine cut-in speed is surpassed, the tidal generator produces electrical power, and its pitch angle and rotational speed are controlled simultaneously to reduce output power fluctuations. If the tidal current speed is lower than the cut-in speed, the tidal generator is used as a flywheel storage system to compensate for variations, after mechanical separation from the tidal turbine.
According to the articles reviewed in this Section, wind and tidal energies seem to be widely considered. Wind energy fluctuations are often cited as a weakness and a challenge for renewable development in island areas. Different solutions have been investigated to limit these fluctuations. Among them, tidal energy has attracted attention, concerning tidal generator control [105-107,109,111,112] and the possibility of using it as a flywheel storage system [106,112]. Another point of interest observed in academic research is the transient state system stability, not only for resource fluctuations but also for load changes [101,102,104,107]. Tidal energy has also been considered to smooth wave energy fluctuations [112]. Storage solutions such as batteries [101,102,107] or supercapacitors [104] are sometimes used to smooth generated power fluctuations.
Sizing Optimization Studies
To ensure a high reliability level, the hybrid system should be designed carefully. A storage solution allows the load requirements to be met in terms of power and energy. To avoid an over-sized or under-sized system and to ensure reasonable costs, a sizing optimization must be carried out. Wind/solar systems with a battery and/or Diesel generator have been widely studied in terms of sizing optimization, as described in recent reviews [113][114][115][116]. As ocean energies have only been considered recently, such studies for marine energy hybrid systems are still rare.
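As a minimal sketch of the kind of sizing optimization discussed in the studies reviewed below, the following brute-force search selects the cheapest combination of wind, tidal, and battery capacities that keeps the unmet load below a reliability threshold over a synthetic week. All profiles, capacity factors, costs, and the threshold are illustrative assumptions and are not taken from the cited papers, which rely on more elaborate models and metaheuristic algorithms.

```python
import itertools
import math

# Synthetic hourly profiles over one week (all values are illustrative
# assumptions, not data from the sizing studies reviewed below).
HOURS = range(24 * 7)
load_kw = [100 + 40 * math.sin(2 * math.pi * t / 24) for t in HOURS]             # demand
wind_cf = [max(0.0, 0.5 + 0.4 * math.sin(2 * math.pi * t / 48)) for t in HOURS]  # wind capacity factor
tidal_cf = [abs(math.sin(2 * math.pi * t / 12.4)) * 0.6 for t in HOURS]          # semi-diurnal tide

COST_PER_KW = {"wind": 1500.0, "tidal": 3000.0}   # assumed capital costs ($/kW)
COST_PER_KWH_BATT = 400.0                          # assumed battery cost ($/kWh)
MAX_UNMET_RATIO = 0.05                             # assumed reliability constraint

def unmet_ratio(wind_kw: float, tidal_kw: float, batt_kwh: float) -> float:
    """Fraction of demand left unserved with a simple lossless battery model."""
    soc, unmet, demand = batt_kwh / 2.0, 0.0, sum(load_kw)
    for t in HOURS:
        balance = wind_kw * wind_cf[t] + tidal_kw * tidal_cf[t] - load_kw[t]
        if balance >= 0:
            soc = min(batt_kwh, soc + balance)      # store the surplus
        else:
            discharge = min(soc, -balance)
            soc -= discharge
            unmet += -balance - discharge           # load not covered
    return unmet / demand

best = None
for w, td, b in itertools.product(range(0, 401, 50), range(0, 401, 50), range(0, 1001, 200)):
    if unmet_ratio(w, td, b) <= MAX_UNMET_RATIO:
        cost = w * COST_PER_KW["wind"] + td * COST_PER_KW["tidal"] + b * COST_PER_KWH_BATT
        if best is None or cost < best[0]:
            best = (cost, w, td, b)

print("Cheapest feasible sizing (cost $, wind kW, tidal kW, battery kWh):", best)
```

In practice, the genetic algorithms, particle swarm optimization, or crow-search algorithm discussed below replace this exhaustive enumeration when the search space and the system models become large.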
Several authors have proposed sizing optimization studies for both the renewable source sizes and the storage solutions considered to supply island systems. Hybrid photovoltaic, wind, and tidal system sizing optimization has been proposed in a few articles. In previous papers [117,118], O.H. Mohammed et al. considered the case of the remote French island of Ouessant, where the energy load is estimated at around 16 GWh per year, for a maximum power demand of 2 MW. To find the best source and storage combinations according to the equivalent loss factor reliability index [117,118] and economic constraints [118], several sizing optimization algorithms have been developed: cascade calculation, genetic algorithms, and particle swarm optimization. The combination of the three renewable sources is found to be more reliable than cases where only one source is considered [117]. In a previous paper [118], the levelized cost of energy is divided by seven between a configuration based only on solar energy (763.7 $/MWh) and a solution based on PV, wind, and tidal energies (127.2 $/MWh). Also, the levelized cost of energy is lower when artificial intelligence approaches are considered for the sizing optimization, as the obtained values reach 94 $/MWh with a genetic algorithm and a particle swarm optimization, whereas a cascade algorithm results in a 149 $/MWh cost [118]. A metaheuristic solution called the crow-search algorithm has been proposed by A. Askarzadeh [119] to optimize a hybrid wind/solar/battery system into which tidal energy is included. Concerning the results, the author concludes that a hybrid solar/wind/tidal system is more cost-effective than a partial combination of these three sources. In the simulation conducted for a one-year period, tidal turbines generate almost 25% of the total generated energy and the resulting tidal turbine net present cost for the optimized system represents 20% of the total cost. Moreover, batteries can reduce the cost and improve the reliability index. The net present cost related to the battery reaches 13% of the total net present cost, to ensure a maximum unmet load ratio of 10%. For the study carried out, the proposed crow-search algorithm is reported to be more efficient than the particle swarm optimization and the genetic algorithm, giving the fastest convergence rate. The sizing optimization for a wind/tidal hybrid system with battery storage has also been described in several articles. Among them, S.G. Mousavi proposed [120] the use of a genetic algorithm to determine the optimal size of a wind/tidal/micro-turbine/battery system, according to an economic analysis, i.e., evaluating for a year the capital cost, the battery replacement cost, the fuel cost, and the operation and maintenance costs.
The objective function aims to find the optimal size with the lowest total annual system cost (the sum of all the costs), considering the maximum load demand of the standalone system. The optimal configuration is based on a power capacity of 315 kW for the wind turbine, 175 kW for the tidal turbine, 290 kW for the microturbine, and a capacity of 3.27 kAh for the lead acid battery, leading to a total cost of $312,080. M.B. Anwar et al. presented [121] a methodology to size grid-connected large-scale marine current and wind turbines (mounted on the same monopile), with a battery storage station to meet the grid code requirements. The sizing optimization aims to maximize the available power while respecting the injected power fluctuation requirements given by the grid. Sizing optimization studies for systems dealing with marine energies are less numerous than studies carried out for solar/wind/battery systems. The first trends of these studies show that optimizing the size of a hybrid system which includes marine energies is necessary, as it allows the cost to be reduced and the reliability index to be improved [117][118][119]. The amount of power generated is expected to be higher and the intermittency to be reduced, but battery storage is still required to meet the load energy requirements.
Overview of Multisource Systems Based on Marine Renewable Energies
The review of industrial projects and academic research dealing with hybrid systems based on marine energies has shown that such systems have not yet reached commercial status. The interest of industries and researchers in this kind of multisource system is now clear. However, no significant results and operating experience exist to date and projects have often remained at a concept status [10]. Most projects considered a combination of two renewable energy sources. Although the advantages are numerous, some obstacles limit their development. This section aims to summarize the aspects found across the different projects and studies reviewed in previous sections, in terms of synergies and positive aspects, weaknesses and obstacles, and finally feasibility aspects.
Positive Aspects, Synergies, and Applications
Combining several renewable energy sources in maritime areas presents many advantages and highlights some possible synergies. Thus, further developments in forthcoming years are expected, as potential applications are numerous. According to several authors [5,6,9,10] and to the projects reviewed in previous sections, the positive aspects and benefits brought about by marine energy hybrid systems concern many fields, as they can:
• Increase the energy production rate of an area (area sharing);
• Reduce the non-production hours, by managing the power flows harvested from energies presenting different intermittency and variability characteristics (output power smoothing). A storage solution can improve the reliability index and ensure the load requirements. Thus, the use of Diesel generators can be reduced;
• Provide sustainable electrical energy for maritime activities, such as fishing, aquaculture, water desalination, oil and gas industries, etc.;
• Share the infrastructure and equipment, allowing the global weight to be reduced;
• Attenuate the platform movement and improve its stability;
• Reduce some costs, with initial savings (infrastructure, mooring and anchoring systems, transmission, connection equipment, etc.)
and lifetime savings linked to the operation and maintenance costs, compared to a separate device solution;
• Reduce the visual impact by placing the platform far from the coast (offshore systems).
Moreover, the design of multisource systems based on marine energies presents some positive synergies which could improve and accelerate their development. According to the synergies explained in several references, four categories can be found [5,6,10]:
• Area sharing synergies: between renewable energy systems and other facilities (aquaculture, desalination, fishing, etc.). Sharing areas improves the densification of sea use, shares the power produced with the surrounding activities, and limits the studies to a single place.
• Infrastructure, installation, and equipment sharing synergies: this kind of synergy concerns the installation equipment, the logistics (port and vessels), the grid connection, the supervisory control system, the storage, and the operation and maintenance. For each of these items, costs could be reduced by combining different kinds of sources.
• Process engineering synergies: hybrid systems based on marine energies can be combined with several marine activities, such as desalination, hydrogen production, aquaculture, breakwaters, algae production, the oil and gas sector, etc.
• Legislative synergies: a common regulation is necessary to develop such hybrid systems. Thus, a legal regulatory framework, maritime spatial planning, a simplified licensing procedure, and planning of the grid and auxiliary infrastructures are needed, as explained in a previous paper [5].
Hybrid systems including marine energies can be used in numerous applications in remote and maritime areas, allowing the use of Diesel generators to be reduced by replacing them with sustainable energy sources and/or storage. Among all of them, the following overall categories can be defined:
• Floating buoys: such as moored or drifting buoys, usually used to measure meteorological or oceanographic parameters. Most of these buoys are currently based on solar energy and a battery;
• Floating platforms: larger than floating buoys, they are used to produce electrical power, either for an island or for local use (aquaculture, oil and gas, fishing, etc.). Most of the projects presented in Tables 2 and 3 are based on floating platforms [68,69,71,75,79,81-83];
• Islands or coastal areas: several energy resources could be harnessed by onshore sources, such as PV panels and wind turbines, and by offshore systems, for example through the use of offshore wind turbines, tidal turbines, and wave energy converters [1,40-42,117];
• Artificial islands: as presented in several concepts [10,95,97,100], these are built on a reef or dyke. However, no developments beyond the concept stage exist;
• Transport: maritime transport could use marine energy for its energetic needs [100].
Obstacles, Weaknesses, and Issues
The reviews presented in the previous sections have shown a mismatch between the number of projects that led up to sea test conditions and those that remained at concept status. Indeed, hybrid system development requires careful consideration of several aspects to avoid premature project shutdown caused, for example, by financial, installation, logistical, equipment, environmental, or legislative issues.
Several possible obstacles and weaknesses are cited in previous papers [5,6,122]:
• Marine environmental constraints: floating systems must withstand severe conditions when they are placed offshore, such as weather (storms, hurricanes), strong waves, salinity, biofouling, corrosion, etc.;
• Mooring and anchoring system reliability, which should be able to resist local environmental conditions.
Several projects have encountered issues either at an early stage of development or during the operational test phase, sometimes leading to the premature end of the project. However, little information is available concerning the reasons for the end of a project. The following points can be highlighted, according to several publications:
• Damage or failure during the installation or operation phases, as happened for the SKWID wind/tidal hybrid system [80]. For example, failure can concern the structure, the power take-off technology, or the mooring and anchoring systems [122];
• Projects ended prematurely due to high costs and a lack of funding. This aspect has been seen at different steps, and it is thought that some companies have ceased to exist, given the lack of information concerning their recent activities. Also, some concepts appeared to be ambitious and thus costly. This could explain the lack of further development.
Feasibility and Design Methodology
To overcome some of the obstacles previously listed and make a system sustainable, feasibility aspects should be carefully studied. Thus, design methodologies and recommendations have been proposed by J.S. Martinez et al. [11] and B. Zanuttigh et al. [122] for the integration of energy converters in multipurpose platforms. The following methodology has been proposed previously [11] during the MERMAID project:
• Resource assessment according to the selected site;
• Power take-off technology selection, allowing the power production to be maximized;
• Offshore structure technology selection (fixed or floating);
• Technology integration, by either platform sharing or area sharing (offshore energy farms);
• Environmental impact assessment, concerning pollution, recycling, etc.;
• Feasibility of combining with other activities.
Thus, the feasibility study of such hybrid systems should start with a local evaluation, as the available resources can differ significantly [10]. Moreover, it has been advised [10] to use mature technologies, to avoid technology failure during the operational phase. Social acceptance must be considered by involving all the actors concerned by the project, including industries, political groups, investors, local communities, etc. Some authors advised developing individual renewable energy systems in the same area (this was for offshore wind farms) [123][124][125], then developing hybrid platforms that share the same structure [10,11].
Conclusions
Ocean energy can provide sufficient energy for the electricity supply of remote maritime areas, since the worldwide resources are substantial. Thus, combined systems including photovoltaic, wind, tidal current, and wave energies, which harness several kinds of energy, are a possible solution to replace the traditionally used genset-based systems to supply islands or floating systems. These four resources currently demonstrate the best maturity levels among all existing renewable energy sources, even if tidal kinetic energy and wave energy are still earlier in their development process than photovoltaic and wind energy converters.
After an overview of these four energy resources, this paper reviewed the industrial and academic hybrid systems based on marine energies. It appears that the development of such systems is still at an early stage, as shown by the number of projects that have remained at concept status. Several projects are currently close to full-scale, megawatt operational tests, such as the Poseidon P80 device [58,59] and the W2Power device [74]. Other projects have reached sea tests with small-scale prototypes. This review has also shown many concepts that are more or less realistic, given the limited amount of information available concerning their further development. On the one hand, concerning possible obstacles to the development of hybrid systems based on marine energies, the required long development times and high costs, especially of insurance, can explain this situation. Moreover, the severe marine environmental constraints make the design of hybrid systems more complex, especially for the mooring system, which requires a high reliability level. On the other hand, the review of research dealing with energy management aspects and sizing optimization shows the promising aspects of such systems. Indeed, combining different renewable energy resources reduces the output power variations, as their temporal characteristics differ, so less storage capacity is needed and Diesel generator use can be reduced. Other positive points have been listed in this article, such as the sharing of areas, equipment, infrastructure, etc. The process engineering synergies should help the development of hybrid systems based on marine energies, with respect to all possible combinations with other sectors and activities: desalination, aquaculture, transport, oil and gas, etc. As a result, the development of hybrid systems based on marine energies is expected in forthcoming years, following the improvement of both tidal kinetic current and wave energy converter maturity levels.
Author Contributions: A.R. made the review and wrote the paper; F.A., F.D.-R., S.B., and Q.T.T. gave helpful comments and revised the paper. All authors read and approved the final manuscript.
Funding: This work was supported by the project "Monitoring and management of marine renewable energies" granted by the Pays de Loire region.
2019-04-16T13:28:34.988Z
2018-07-20T00:00:00.000
{ "year": 2018, "sha1": "b3623d520dfa362a003c87e5fcc4fd36b748787a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/11/7/1904/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "07cf7451eedecf8c75e442d99c8316fc3a3abd7e", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
261653003
pes2o/s2orc
v3-fos-license
Regulation of biofilm formation in Klebsiella pneumoniae
Klebsiella pneumoniae is an important Gram-negative opportunistic pathogen that is responsible for a variety of nosocomial and community-acquired infections. Klebsiella pneumoniae has become a major public health issue owing to the rapid global spread of extensively-drug resistant (XDR) and hypervirulent strains. Biofilm formation is an important virulence trait of K. pneumoniae. A biofilm is an aggregate of microorganisms attached to an inert or living surface by a self-produced exo-polymeric matrix that includes proteins, polysaccharides, and extracellular DNA. Bacteria within the biofilm are shielded from antibiotic treatments and host immune responses, making it more difficult to eradicate K. pneumoniae-induced infection. However, the detailed mechanisms of biofilm formation in K. pneumoniae are still not clear. Here, we review the factors involved in the biofilm formation of K. pneumoniae, which might provide new clues to address this clinical challenge.
Introduction
Klebsiella pneumoniae is an important pathogenic Gram-negative, nonmotile bacterium that is responsible for a variety of common infections including urinary tract infections (UTIs), pneumonia, bacteremia, purulent liver abscesses, and wound infection (Karyoute, 1989; Bouza et al., 2001; Luo et al., 2014; Mohamudha et al., 2016; Bart et al., 2021; D'Abbondanza and Shahrokhi, 2021; GBD 2019 Antimicrobial Resistance Collaborators, 2022). Given the rapid spread of extensively-drug resistant (XDR) (mainly carbapenem resistant K. pneumoniae [CRKP]) and hypervirulent strains worldwide, K. pneumoniae has become a major problem for public health. In 2020, the Antimicrobial Testing Leadership and Surveillance (ATLAS) program collected a total of 6,753 K. pneumoniae isolates from 57 countries across six regions worldwide. Of these isolates, 1,118 (16.6%) were CRKP strains (Lee et al., 2022). Because of the limitation of treatment options, CRKP is considered an urgent clinical threat. It was estimated that antibiotic resistant K. pneumoniae was responsible for more than 600,000 deaths globally in 2019 (Antimicrobial Resistance Collaborators, 2022). Hypervirulent K. pneumoniae (hvKP), a more virulent evolving variant of K. pneumoniae, is known to cause community-acquired, metastatic, and life-threatening infections such as pyogenic liver abscesses (PLA), central nervous system infection, and endophthalmitis, which require rapid recognition and site-specific treatment (Russo and Marr, 2019). Moreover, via acquisition of carbapenem-resistant plasmids or hvKP-specific virulence determinants, XDR-hvKP strains are emerging (Han et al., 2022; Tian et al., 2022). The prevalence of XDR-hvKP and its potential threat to human health are of concern.
Capsule
The capsule affects different stages of biofilm formation of K. pneumoniae. It controls the initial adhesion through a series of behaviors, such as improving the regular initial spatial distribution and preventing bacterial interactions as much as possible (Balestrino et al., 2008). Davis et al. pointed out that the capsule was necessary for constructing an appropriate initial covering of the mature biofilm structure (Davis and Brown, 2020). The expression of capsular polysaccharide also ensures that K. pneumoniae forms a typical three-dimensional mature biofilm structure (Balestrino et al., 2008).
The capsule is an important bacterial virulence component. So far, researchers have identified 134 capsule synthesis loci (K loci) (Wyres et al., 2016). In a study using signature-tagged mutagenesis (STM) to screen a K. pneumoniae mutant library with unique characteristic markers to identify genes related to biofilm formation, the authors found that mutations in capsule gene cluster sites could lead to defects in biofilm formation (Boddicker et al., 2006). Mutants with transposon insertions in the capsular wza and wzc loci show defective biofilm formation (Wu et al., 2011). Balestrino et al. (2008) discovered that the biofilm forming ability of K. pneumoniae ORF4 (wza homolog, transport of capsular polysaccharides) and ORF14 (glycosyl transferase, capsule biosynthesis) mutants on polyvinyl-chloride (PVC) was significantly reduced. Further, treC and sugE have been shown to affect biofilm formation of K. pneumoniae by regulating the production of capsular polysaccharide (CPS). TreC encodes trehalose-6-phosphate hydrolase, and its deletion affects bacterial utilization of trehalose. TreC mutants show reduced mucus viscosity and produce less CPS, thereby reducing biofilm formation and preventing formation of advanced biofilm structures in K. pneumoniae. Adding glucose to the culture medium of K. pneumoniae treC mutant strains can restore CPS production and biofilm formation. In contrast, a sugE (encoding an inner membrane protein) mutant shows increased biofilm formation in K. pneumoniae, higher mucus viscosity, and greater CPS production. It was suggested that the absence of sugE in K. pneumoniae would lead to changes in bacterial membrane structure and activate a downstream cascade, thus increasing CPS production during biofilm formation (Wu et al., 2011). Biofilm formation in isolates containing magA (K1), rmpA+, or rmpA2+, virulence factors related to capsule production, is more pronounced than in isolates negative for these virulence factors. However, multivariate regression analysis showed that wcaG was the only independent risk factor for biofilm (Zheng et al., 2018). The wcaG positive genotype was associated with the K1 and K54 capsular types, and was less associated with the K16 and K58 capsular types (Turton et al., 2010). In addition, wcaG encodes a protein participating in the biosynthesis of fucose, and deletion of wcaG affects most capsule polysaccharide genes (Ho et al., 2011), thereby suggesting that wcaG may affect biofilm formation by changing the composition of capsule polysaccharides (Zheng et al., 2018). One study also showed that the capsule could inhibit biofilm formation in K. pneumoniae. A recent study showed that strains lacking the wbaP gene, which is related to capsule production, formed stronger biofilms, whereas strains with a hypercapsule caused by a wzc mutation could not form biofilms (Ernst et al., 2020). When carbohydrates were added to the medium, the biosynthesis of CPS increased, but biofilm formation in K. pneumoniae decreased (Chen et al., 2020). A previous study showed that the expression of CPS in K.
pneumoniae physically interfered with the function of type 1 fimbriae, hindered the biofilm formation mediated by fimbriae, and reduced the adhesion of bacteria to the surface (Schembri et al., 2005). In addition, CPS could inhibit bacterial surface interactions on non-biological substrates (dos Santos Goncalves et al., 2014). It was found that the capsule was costly in nutrient-rich media, but it provided obvious adaptive advantages under nutrient-poor conditions. Further, among strains forming more biofilms, the capsule often played a positive role in biofilm formation. The authors suggested that this was not because of the presence or absence of capsules, but was instead caused by the amount of capsule expressed by a given strain, which then affected biofilm formation. Moreover, the function of the capsule was not conserved in different isolates, but relied on other elements of the genome or serotype (Buffet et al., 1946). The mechanism by which the capsule influences biofilm formation, and the conditions under which it has a positive or negative regulatory effect on biofilm formation, are still unclear. However, there seems to be a relationship with the O antigen, because the polysaccharide capsule is retained on the outer surface of the bacteria by interacting with the repeating sugar molecule of the lipopolysaccharide (LPS) molecule, namely the "O antigen." The waaL gene encodes a ligase involved in the connection of the LPS repeat O antigen to the LPS core, and inactivation of this gene is understood to lead to a significant reduction in capsule retention and an increase in biofilm formation (Singh et al., 2022).
Fimbriae
The K. pneumoniae genome encodes several types of fimbriae. Fimbriae are hair-like protein appendages extending from the cell surface (Wilksch et al., 2011). Fimbriae promote K. pneumoniae adhesion to non-biological surfaces, resulting in catheter-related infections (Schroll et al., 2010). Type 1 and type 3 fimbriae, the most studied fimbriae, are encoded by the fim and mrk gene clusters, respectively. In addition, the ecp and kpa to kpg gene clusters found in recent years also encode fimbriae (Wu et al., 2010; Alcántar-Curiel et al., 2013). In K. pneumoniae, biofilm formation is mainly mediated by type 3 fimbriae, and the Mrk proteins are encoded by the operon containing the mrkABCDF genes (Allen et al., 1991). Type 3 fimbriae are mainly made up of the major fimbrial subunit, MrkA, which polymerizes to form a helical fimbrial shaft (Murphy and Clegg, 2012). In addition, ΔmrkA mutants are unable to attach to abiotic surfaces to form biofilms (Di Martino et al., 2003; Jagnow and Clegg, 2003). Further, MrkA protein expression is significantly upregulated during biofilm thickening (Vuotto et al., 2017). MrkB and MrkC have sequence characteristics of the periplasmic chaperone and the usher translocase, respectively. MrkD, present at the fimbrial tip, has adhesive properties and determines the specificity of fimbrial binding (Murphy and Clegg, 2012). The mrkA and mrkD genes play a key role in the biofilm formation of K. pneumoniae (Fang et al., 2021). The mrkA gene contributes to rapid biofilm formation while mrkD contributes to forming dense K.
pneumoniae biofilms (Ashwath et al., 2022). A gene cluster, mrkHIJ, adjacent to the type 3 fimbriae operon, is related to the regulation of type 3 fimbriae expression. MrkH is a newly identified transcriptional activator of the mrk gene cluster that regulates mrkHI expression and contains a PilZ domain. MrkH binds to the region upstream of the mrkA promoter and activates the expression of the mrkABCDF operon. Therefore, mrkH is often referred to as a "biofilm switch," as it can initiate expression of genes involved in producing type 3 fimbriae (Wilksch et al., 2011; Tan et al., 2015). The biofilm formation capacity of K. pneumoniae carrying the mrkH box was clearly higher than that of strains without it (Fang et al., 2021). Wu et al. (2012) found that mrkHI transcription could be activated by MrkI. MrkI is a LuxR-like regulatory factor. The mrkI mutant shows reduced mannose-resistant Klebsiella-like hemagglutination (MR/K HA) activity and a reduced number of type 3 fimbriae on the cell surface, leading to a significant reduction in biofilm formation, which can be rescued by providing a wild-type copy of mrkI (Johnson et al., 2011; Wilksch et al., 2011). The expression of mrkHI is also positively regulated by Fur, which usually acts as a transcriptional activator to directly activate the transcription of mrkHI. The deletion of Fur reduces mrkH, mrkI, and mrkA transcription, thereby reducing type 3 fimbriae expression and biofilm formation (Wu et al., 2012). In addition, at least two components of pulmonary surfactant, phosphatidylcholine and cholesterol, promote the transcription of type 3 fimbriae genes and biofilm formation of K. pneumoniae (Willsey et al., 2018). Type 3 fimbriae-dependent adhesion is probably the initial stage of K. pneumoniae colonization and biofilm formation on non-biological surfaces (Duguid, 1959). Type 3 fimbriae mediate binding to the surface of damaged epithelium, as they can bind to the extracellular matrix of urinary and respiratory tissues (Tamayo et al., 2007). Type 3 fimbriae not only participate during the initial stages of K.
pneumoniae biofilm formation, but also mediate the c-di-GMP-dependent transformation of the bacterial growth mode from planktonic to biofilm. c-di-GMP is an important second messenger in bacteria (Tamayo et al., 2007). The activities of diguanylate cyclase (DGC) and phosphodiesterase (PDE) regulate the intracellular concentration of c-di-GMP in bacteria (Simm et al., 2004; Hengge, 2009). The mrkHIJ gene cluster is associated with the regulation and sensing of c-di-GMP (Lin et al., 2016). When activated by c-di-GMP, MrkH recruits RNA polymerase to the mrkHI promoter to auto-activate mrkH expression. Increased MrkH production subsequently drives the expression of mrkABCDF, leading to type 3 fimbriae biosynthesis and biofilm formation (Tan et al., 2015). MrkJ encodes a putative phosphodiesterase (PDE) which contains an EAL domain (the sequences encoding diguanylate cyclase and phosphodiesterase A share a lengthy consensus motif, comprising two adjacent domains termed GGDEF and EAL) mediating the hydrolysis of c-di-GMP (Tal et al., 1998; Johnson and Clegg, 2010). Because of the resulting intracellular accumulation of c-di-GMP, the absence of mrkJ leads to an increase in the production of type 3 fimbriae and biofilm formation (Johnson and Clegg, 2010; Wilksch et al., 2011). The recombinant EAL domain of YjcC possesses PDE activity. After receiving an oxidative stress signal, YjcC positively regulates oxidative stress responses by changing the level of c-di-GMP, and has a negative impact on type 3 fimbriae expression and biofilm formation (Huang et al., 2013). YfiN, which harbors a DGC domain, plays a positive role in the expression of type 3 fimbriae (Wilksch et al., 2011). OmpR/EnvZ is a two-component system that senses osmotic signals and controls downstream gene expression in many species of Enterobacteriaceae. In response to osmotic stresses, the phosphorylated form of OmpR of K. pneumoniae regulates the expression of type 3 fimbriae to influence biofilm formation via modulating the level of intracellular c-di-GMP and MrkHIJ (Lin et al., 2018). Type 1 fimbriae, also known as mannose-sensitive fimbriae, can bind soluble mannose as a competitive inhibitor, as the name suggests. Type 1 fimbriae are encoded by the fim gene cluster, which is composed of eight genes (fimAICDFGHK) (Gomes et al., 2020). Regulation of fim gene expression is controlled by an invertible DNA element (fimS). Type 1 fimbriae are composed of a major fimbrial subunit, FimA, and a minor apical adhesion protein, FimH (Alcántar-Curiel et al., 2013). FimH is an adhesin that promotes adhesion to host surfaces containing mannose (Sahly et al., 2008). The increased expression of the fimH gene plays an important role in the binding of bacteria to surfaces, leading to strong biofilm formation (Ashwath et al., 2022). The regulatory gene fimK is part of the fim operon, and FimK has an EAL domain with PDE characteristics, which can regulate intracellular levels of c-di-GMP (Johnson and Clegg, 2010). Mutant strains that cannot produce FimK are more heavily fimbriated than wild-type K.
pneumoniae and can colonize the urinary tract of mice (Johnson and Clegg, 2010). FimK can reduce type 1 fimbriae and inhibit biofilm formation and intracellular bacterial communities (Rosen et al., 2008). Type 1 fimbriae are key causes of UTIs (Struve et al., 2008) and they have a high affinity for mannose residues on bladder cell surfaces (Rozen and Skaletsky, 2000), which can promote adhesion to and invasion of epithelial bladder cells, thus forming biofilm-like intracellular bacterial communities (Rosen et al., 2008). The regulation mechanisms of the biosynthesis of type 3 and type 1 fimbriae are shown in Figure 1. Type 1 and type 3 fimbriae can contribute to biofilm formation and compensate for each other (Stahlhut et al., 2012; Murphy et al., 2013; Ashwath et al., 2022). One study using the catheterized bladder model found that type 1 and type 3 fimbriae enhance biofilm formation on catheters (Stahlhut et al., 2012). Moreover, it was reported that the gene clusters of type 3 and type 1 fimbriae have a cross-regulatory effect, and the up-regulation of type 1 fimbriae can make up for the loss of type 3 fimbriae expression (Schroll et al., 2010). Type 3 fimbriae may have a more significant effect on biofilm formation than type 1 fimbriae (Fang et al., 2021). Bacterial strains that cannot produce type 1 fimbriae are as proficient in biofilm formation as strains that can produce such fimbriae (Clegg and Murphy, 2016). Alcántar-Curiel et al. found that, in addition to type 3 and type 1 fimbriae, the K. pneumoniae genome has an operon that is homologous to the Escherichia coli common pilus (ECP) fimbriae gene cluster. The ECP fimbriae gene cluster contains the ecpRABCDE genes and, importantly, 90% of K. pneumoniae strains can produce ECP fimbriae. Ultrastructural and immunoassay analyses of K. pneumoniae showed that ECP can bind bacteria to each other, thus forming specific micro-colonies on cultured epithelial cells and stable biofilms on inert surfaces. ECP likely also plays an important role in cell adhesion, biofilm formation and the colonization of several niches, especially for isolates lacking the MrkD adhesin or the entire type 3 fimbriae (Alcántar-Curiel et al., 2013). Wu et al. (2010) found seven new fimbriae gene clusters in K. pneumoniae, namely kpa, kpb, kpc, kpd, kpe, kpf, and kpg. The loss of kpgC resulted in an obvious decrease in biofilm formation, adhesion to animal cells, and intestinal colonization in mice. Further, ΔkpaC and ΔkpeC mutants were also found to weaken biofilm formation and adhesion to Arabidopsis cells, respectively. The deletion of the kpjC usher-coding gene was shown to significantly reduce biofilm formation, while the loss of the kpaC usher gene was shown to only affect the early and late stages of biofilm formation (Khater et al., 2015). The kpf gene cluster encodes type 1-like fimbriae, while the kpfR gene, encoding the transcriptional inhibitor of the kpf gene cluster, negatively regulates the expression of fimbriae. K. pneumoniae lacking the kpfR gene showed a hyperfimbriated phenotype and enhanced adhesion to epithelial host cells and biofilm formation (Gomes et al., 2020).

Quorum sensing (QS)

During biofilm formation, QS mediates inter-specific or intra-specific interactions through which bacterial cells communicate with each other (Bassler et al., 1993; Miller and Bassler, 2001). The QS system regulates the synthesis of fimbriae, exopolysaccharides, adhesins, and other substances through signaling molecules, thus affecting biofilm formation in bacteria (Yang et al., 2013; Gu et al., 2021). Depending on bacterial cell density, bacteria will produce and detect specific signaling molecules called autoinducers (AIs) to coordinate their gene expression (Bassler et al., 1993; Miller and Bassler, 2001). There are two main types of intercellular QS regulatory systems, namely type I and type II. Type I QS is mainly used for intraspecific communication, which is usually related to the LuxI/LuxR system. LuxI synthetase produces N-acyl homoserine lactone (AHL) as an AI, and the LuxR transcription factor is their cognate receptor. However, K.
pneumoniae does not produce AHL (Balestrino et al., 2005), but rather encodes SdiA, an orphan LuxR receptor that reacts with exogenous AHL molecules produced by other bacteria (Pacheco et al., 2021). SdiA plays a repressive role in the expression of type 1 fimbriae in K. pneumoniae. Cells lacking the SdiA regulator present a hyperfimbriated phenotype that gives the ΔsdiA mutant strain a greater ability to form biofilm and agglutinate yeast cells (Pacheco et al., 2021). Type II QS has an interspecific communication function, enabling bacteria to react not only to AI-2 produced by other species, but also to their own AI-2 (Chen et al., 2002; Zhang et al., 2020). De Araujo et al. observed that K. pneumoniae lacking AI-2 output (ΔtqsA) or input (ΔlsrCD) systems showed an increased surface coverage after growth in dynamic micro-fermentation but decreased biofilm thickness. In addition, production of AI-2 relies on the presence of luxS, but the biofilm structure of ΔluxS mutants is different: in these mutants the surface coverage rate is lower, and fewer large colonies are formed. Mutations related to luxS and the AI-2 transport systems both induce increased expression of wbbM and wzm in connection with LPS synthesis, which indicates that QS affects biofilm formation through LPS in K. pneumoniae (De Araujo et al., 2010). The regulation mechanisms of QS in biofilm formation of K. pneumoniae are shown in Figure 2.

FIGURE 2 The regulation mechanisms of quorum sensing in biofilm formation of K. pneumoniae. Klebsiella pneumoniae encodes SdiA as an orphan LuxR receptor to down-regulate the expression of fimA (type 1 fimbriae) and luxS. SdiA also regulates the promoter region of fimS at the OFF orientation. The AI-2 transport systems (tqsA and lsrCD) and luxS of K. pneumoniae both regulate the expression of wbbM and wzm in connection with LPS synthesis.

Nutritional condition

Nutritional conditions are also an important factor for biofilm formation. Excess nutrition may promote the planktonic growth mode, while malnutrition environments are more favorable for the biofilm growth mode (Stanley and Lazazzera, 2004). Previous studies have found that high concentrations of sugars (such as glucose) inhibit biofilm formation in K. pneumoniae and E. coli (Jackson et al., 2002; Sutrina et al., 2015). Glucose-rich medium inhibits the production of cyclic AMP (cAMP), a well-known second messenger that has important effects on gene regulation (Rickenberg, 1974). Furthermore, cAMP binds its signal transduction target, the cAMP receptor protein (CRP), forming the CRP-cAMP complex, which then binds the CRP binding site in the DNA promoter region to regulate mRNA transcription. External glucose inhibits the function of CRP-cAMP in K. pneumoniae. CRP indirectly regulates the expression of type 3 fimbriae through the c-di-GMP signal pathway (Lin et al., 2016). In addition, CRP mediates catabolite repression. The absence of CRP increases the concentration of c-di-GMP and reduces the activity of PDE in cells. The expression of mrkHI depends on c-di-GMP, which in turn increases the expression of MrkH and MrkI, leading to the high expression of type 3 fimbriae. It was reported that inserting an open-reading frame containing a CRP-activation domain into K. pneumoniae resulted in biofilm deficiency (Boddicker et al., 2006). However, other studies found that crp mutant K.
pneumoniae strains could not express MrkA, the major subunit of the fimbrial shaft, which indicated that CRP was required for fimbriae production and biofilm formation (Ou et al., 2017; Panjaitan et al., 2019). These studies indicate the important regulatory role of CRP in biofilm formation of K. pneumoniae. Cellobiose also affects biofilm formation in K. pneumoniae. It has previously been shown that the celB deletion mutation, which leads to cellobiose deficiency, clearly decreased biofilm formation in K. pneumoniae. Moreover, celB encodes the cellobiose-specific subunit IIC of enzyme II (EIIC) of a carbohydrate phosphotransferase system (PTS, a sugar transport system in bacteria) (Wu et al., 2012). Horng et al. (2018) showed that an uncharacterized enzyme II complex homolog of the PTS in K. pneumoniae positively regulated biofilm formation by enhancing eDNA and capsular polysaccharide production. Different carbon sources can also affect the biofilm formation of K. pneumoniae. Isolates formed more robust biofilms when grown with fucose as the sole carbon source than with glucose or glycerol. This was related to the positive modulation of K. pneumoniae hypermucoviscosity by fucose (Hudson et al., 2022). The presence of bile salts can stimulate biofilm formation in K. pneumoniae, which is related to the production of poly-β-1,6-N-acetyl-d-glucosamine (PNAG) (Chen et al., 2014). PNAG is a common bacterial surface polysaccharide and a significant component of the biofilm EPS (Chen et al., 2020). PNAG, which is encoded by pgaABCD (Cywes-Bentley et al., 2013), mediates the intercellular binding of bacterial species and surface adhesion. Biofilm formation in pgaA mutants was shown to be significantly decreased (Wu et al., 2011). The loss of pgaC in K. pneumoniae reduces PNAG production and significantly diminishes the enhancement of K. pneumoniae biofilm formation by a 1% bile salt mixture (Chen et al., 2014). Iron is indispensable for K. pneumoniae growth and virulence factor expression (Chhibber et al., 2013; Chen et al., 2020). A study showed that a certain concentration of iron (0.16 mM FeCl2) could promote biofilm formation in K. pneumoniae by inhibiting succinic acid. This may be due to a reduction in protein and polysaccharide expression in the biofilm EPS, since succinic acid participates in pyruvate metabolism and amino acid synthesis (Liu et al., 2022). Chen et al. observed that biofilm formation was strongest when K. pneumoniae was cultured in LB broth supplemented with 50 μM iron. When the strain was cultured with an iron chelator, biofilm formation decreased (Chen et al., 2020). Chhibber et al. studied the biofilm formation of K. pneumoniae in the presence of Co(II) (iron-antagonist ions) and a depolymerase-producing phage (which degrades extracellular polysaccharides in the biofilm structure). A significant reduction was observed in the growth of younger biofilms (1-3 days old) when medium supplemented with 500 μM CoSO4 and 10 μM FeCl3 was used. Moreover, a complete eradication of the younger biofilms was observed when both elements were present (Chhibber et al., 2013).
Drugs

The use of some drugs can instead promote biofilm formation. In the presence of sub-MICs of cefotaxime, the biomass increased and was positively related to the antibiotic concentration (Hennequin et al., 2012). When a CRKP strain was under antibiotic pressure, the expression of the Psp and Pho family genes [the PspB-PspC complex is a stress sensor that acts as a molecular switch during the biofilm stress response (Flores-Kim and Darwin, 2016)] was induced, thus further mediating the downstream stress responses and sustaining adsorption, colonization, and biofilm formation (Bowen et al., 2021). Cadavid et al. (2018) found that in K. pneumoniae, hydrochlorothiazide and acetaminophen could promote biofilm formation.

Antibiotic-resistant genes

Antibiotic resistance genes carried on particular plasmids can regulate the biofilm formation of K. pneumoniae (Maeyama et al., 2004). Multidrug-resistant (MDR) K. pneumoniae often forms stronger biofilms than non-MDR strains (Shadkam et al., 2021). A plasmid encoding a cephalosporinase was shown to carry a transcription factor, AmpR, which was involved in up-regulating capsule synthesis and resistance to serum killing, and in regulating the expression of type 3 fimbriae and biofilm formation (Hennequin et al., 2012). In addition, compared to control strains, strong biofilm formation was found in NDM-1-producing K. pneumoniae. Moreover, the resistance gene blaNDM-1 of K. pneumoniae was observed to be maximally up-regulated in 24-h biofilms (Al-Bayati and Samarasinghe, 2022). Bacterial efflux pumps are made up of transmembrane proteins, which export a variety of harmful substances, including different types of antibiotics, from the intracellular environment to the external environment. This process is one of the causes of MDR (Du et al., 2018). The role of efflux pumps in biofilm formation is still controversial. Tang et al. found that the efflux pump inhibitor CCCP had a dose-dependent effect on biofilm formation (Knight et al., 2018). In another study, researchers found that up-regulation of the AcrAB multidrug efflux system was observed only in XDR strains during biofilm growth, and this system could be considered an essential factor in the biofilm-forming ability of K. pneumoniae (Vuotto et al., 2017). In turn, biofilms have been shown to up-regulate the K. pneumoniae efflux pump genes acrA, emrB, oqxA, and qacEΔ1 (Tang et al., 2020). However, some studies have suggested that there is no correlation between the expression of efflux pump genes (acrA, kexD, kdeA, kpnEF and ketM) and biofilm formation (Türkel et al., 2018).

Physical environment

The physical environment of bacteria will affect biofilm formation. Some physical and chemical properties of the surface on which bacteria grow may interfere with biofilm formation by impairing the initial bacterial attachment to the surface (Bos et al., 1999; Li and Logan, 2004). Biofilm production of K. pneumoniae decreased at 37°C compared to 30°C, but the difference was not significant (Hostacká et al., 2010). In another experiment with 17 CRKP isolates, biofilm formation was greater at 37°C than at 25°C (Gual-de-Torrella et al., 2022). An increase in the pH of the culture medium led to an increase in biofilm formation; in K. pneumoniae, biofilm formation increased by 151-319% at pH 8.5 and by 113-177% at pH 7.5, respectively, compared to pH 5.5 (Hostacká et al., 2010).
Klebsiella pneumoniae growing under simulated microgravity (SMG) conditions formed a thicker biofilm than cells growing under normal gravity conditions. Moreover, under SMG conditions, cellulose production and the expression of type 3 fimbriae of K. pneumoniae were enhanced. Therefore, K. pneumoniae isolated from orbital spacecraft poses a potential threat to the health of astronauts (Wang et al., 2016).

Double-stranded DNA breaks

Recently, the CRISPR-Cas9 technique has been implemented to eliminate certain bacteria by use of bacteriophages or bacterial conjugation. This technique allows targeted editing of genomes by inducing double-stranded DNA breaks (DSBs). However, a novel type of biofilm ("R-biofilm") was found in clinical isolates of K. pneumoniae after DSBs. R-biofilms are mainly made up of extracellular proteins and/or DNA, which may be released by dead bacteria. In addition to the bacterial SOS response (severe DNA damage in cells results in an SOS response), new signaling pathways also participate in the formation of R-biofilms. Furthermore, R-biofilms form a fixed ring or disk shape with better ductility, which can protect living bacterial cells in the body from harmful conditions such as exposure to ethanol, hydrogen peroxide, and ultraviolet radiation. The discovery of R-biofilms indicated the limited effect of the currently popular Cas9-mediated sterilization tools, because the resulting DSBs may facilitate the formation of these new protective biofilms (Liu et al., 2020).

Conclusion

In the past decades we have gained considerable knowledge about the molecular mechanisms involved in the biofilm formation of K. pneumoniae. Similar to other bacteria, biofilm formation of K. pneumoniae is an adaptive response to various stressors such as nutritional deficiency, changes in the physical environment, and drugs (especially antibiotics). Biofilm formation is not a precisely conserved process, but the pattern of biofilm formation of K. pneumoniae is similar to that of other Gram-negative bacteria (Ruhal and Kataria, 2021). For example, the O antigen of LPS is related to the production of the capsule polysaccharide of K. pneumoniae and influences biofilm formation, which is common in Gram-negative bacteria (Fedtke et al., 2007; Lee et al., 2016). In general, all flagellated bacteria approach surfaces by motility and condition the surface by secreting polysaccharides to help cells adhere. As mentioned previously, K. pneumoniae uses type 1 and type 3 fimbriae to adhere to surfaces (Schroll et al., 2010). Pseudomonas aeruginosa also uses flagellar motility to reach surfaces and subsequently uses type IV pili motility to crawl on surfaces (Zhao et al., 2013). Regulation by two-component systems via c-di-GMP is involved in the biofilm formation of most Gram-negative bacteria, including K. pneumoniae (Ruhal and Kataria, 2021). QS plays significant roles in biofilm formation and dispersal (Solano et al., 2014). The QS molecules vary among different bacteria. K. pneumoniae encodes SdiA, an orphan LuxR-type receptor, to inhibit biofilm formation (Pacheco et al., 2021), whereas P. aeruginosa produces AHL as an AI to influence biofilm formation (Pesci et al., 1997). Owing to the rapid spread of CRKP and hvKP, the most interesting aspect of K. pneumoniae biofilm formation is the impact of carbapenem-resistance plasmids or hvKP-specific virulence plasmids on biofilm formation. The various factors and genes affecting biofilm formation in K.
pneumoniae are shown in Figure 3 and Table 1. Some controversy remains regarding certain factors in different studies. Moreover, the mechanisms by which the above factors affect the biofilm formation of K. pneumoniae require further study. For instance, the role and the molecular mechanisms of the capsule of K. pneumoniae in biofilm formation are still unclear. In summary, recognizing the commonalities and specifics of biofilm formation between K. pneumoniae and other bacteria will lead to a deeper understanding of bacterial interactions within natural or host infection environments. Furthermore, it would help in developing new therapeutic strategies against K. pneumoniae biofilms.

FIGURE 1 The regulation mechanisms of the biosynthesis of type 3 and type 1 fimbriae of K. pneumoniae. When activated by c-di-GMP, MrkH recruits RNA polymerase to the mrkHI promoter to auto-activate mrkH expression. Increased MrkH production subsequently drives the expression of mrkABCDF, leading to type 3 fimbriae biosynthesis and biofilm formation. The expression of mrkHI is also positively regulated by Fur, which usually acts as a transcriptional activator to directly activate the transcription of mrkHI. MrkJ encodes a PDE which contains an EAL domain mediating the hydrolysis of c-di-GMP. YfiN, which harbors a DGC domain, plays a positive role in the expression of type 3 fimbriae by changing the level of c-di-GMP. CRP-cAMP indirectly regulates the expression of type 3 fimbriae by inhibiting the c-di-GMP signal pathway. Type 1 fimbriae are encoded by the fim gene cluster. The regulation of fim gene expression is controlled by fimS. The fim operon also contains the regulatory gene fimK, and FimK can regulate intracellular levels of c-di-GMP.

TABLE 1 Genes related to biofilm formation of Klebsiella pneumoniae.
2023-09-10T15:31:51.146Z
2023-09-07T00:00:00.000
{ "year": 2023, "sha1": "54cc3a04f48863d1b6baa8ea0d4204cbcdc1a23b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "8dfd55a94fec2c11411f204b5a7bf58805485022", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
35028809
pes2o/s2orc
v3-fos-license
Symbol, Conversational, and Societal Grounding with a Toy Robot Essential to meaningful interaction is grounding at the symbolic, conversational, and societal levels. We present ongoing work with Anki's Cozmo toy robot as a research platform where we leverage the recent words-as-classifiers model of lexical semantics in interactive reference resolution tasks for language grounding. Introduction Grounding is essential in meaningful interaction (Clark, 1996;DeVault, Oved, and Stone, 2006;Schlangen, 2016). Grounding is a term used to denote several distinct aspects of language and communication. We take up three aspects here, though Lücking and Mehler (2014) have identified others: (1) symbol grounding (Harnad, 1990) where aspects of language are connected with aspects of the things that language denotes, such as visual features (e.g., the word red is linked to aspects of visual perception), (2) conversational grounding (Clark, 1996) where aspects of events that occur between two or more people are recorded for later use and recall, and (3) societal grounding (DeVault, Oved, and Stone, 2006) which connects symbol and conversational grounding with the accepted uses of language used in a particular language community. These aspects of grounding are summarized in Figure 1. All three types of grounding overlap with each other which allows for meaningful communication. To illustrate, consider a child who sees a pine cone and experiences firsthand its visual and tactile features. A nearby adult says "that's a pine cone" because the adult has already established through societal grounding that "pine cone" denotes such an object. By hearing this, the child learns through symbol grounding that certain visual and tactile features are linked to the words "pine cone" and both the child and adult establish through conversational grounding the event that the child has heard the denotation. Grounding on all three levels in this example occurred through an interactive process which establishes grounding of linguistic meaning between words and the perceived world, between individuals, and between individuals and language communities at large.

Figure 1: Comparison of grounding types. An individual perceives objects and grounds symbols-conventional denotations for those objects-interactively through conversational grounding with someone else. The conventional denotations are socially grounded through interaction with members of a language community.

It is through this face-to-face spoken conversation setting, the basic and primary setting of language (Fillmore, 1981), where interlocutors can denote objects (often with pointing gestures) in their shared environment which forms the foundation for language acquisition (McCune, 2008), and from which words denoting more abstract concepts are built. A key question is how semantic meaning should be represented and acquired through this co-located grounding process. We present ongoing work on grounding with a toy robot. We leverage the words-as-classifiers (WAC) model of lexical semantics (Kennington and Schlangen, 2015), recently yielding state-of-the-art results in a reference resolution task using real images and deep learning to represent the object regions (Schlangen, Zarriess, and Kennington, 2016). The model is flexible, interpretable, and simple in that each word is treated as its own classifier.
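To make the words-as-classifiers idea concrete, the following is a minimal, hypothetical Python sketch (not the implementation used in this work): each word in the lexicon gets its own binary classifier over low-level visual features of candidate objects, and a referring expression is resolved by combining the per-word probabilities for each candidate. The feature layout, the toy training data and the averaging combination rule are illustrative assumptions.

```python
# Minimal sketch of a words-as-classifiers (WAC) lexicon.
# Assumptions: each object is a small feature vector (e.g., mean intensity,
# relative size, elongation); training pairs are (features, label) per word;
# a referring expression is scored by averaging per-word probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

class WACLexicon:
    def __init__(self):
        self.classifiers = {}  # word -> fitted LogisticRegression

    def train(self, examples):
        """examples: dict mapping word -> (X, y), where X is an (n, d) array
        of object features and y holds binary labels (1 = word applies)."""
        for word, (X, y) in examples.items():
            clf = LogisticRegression()
            clf.fit(X, y)
            self.classifiers[word] = clf

    def word_prob(self, word, features):
        """Probability that `word` applies to an object with `features`."""
        clf = self.classifiers.get(word)
        if clf is None:          # unknown word: uninformative score
            return 0.5
        return clf.predict_proba(features.reshape(1, -1))[0, 1]

    def resolve(self, expression, candidates):
        """Pick the candidate (index) best described by `expression`, a list
        of words; `candidates` is a list of object feature vectors."""
        scores = []
        for feats in candidates:
            probs = [self.word_prob(w, feats) for w in expression]
            scores.append(float(np.mean(probs)))  # simple combination rule
        return int(np.argmax(scores)), scores

# Toy usage with made-up features: [mean intensity, relative area, elongation]
rng = np.random.default_rng(0)
dark = rng.normal([0.2, 0.5, 0.5], 0.05, size=(20, 3))
light = rng.normal([0.8, 0.5, 0.5], 0.05, size=(20, 3))
examples = {"dark": (np.vstack([dark, light]),
                     np.array([1] * 20 + [0] * 20))}
lex = WACLexicon()
lex.train(examples)
best, scores = lex.resolve(["dark"], [np.array([0.25, 0.5, 0.5]),
                                      np.array([0.75, 0.5, 0.5])])
print(best, scores)  # expected: 0, i.e. the darker candidate
```

Because each word is an independent classifier, the lexicon can be extended incrementally as new words are heard in interaction, which is what makes the model attractive for the language-acquisition setting discussed below.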
Background & Related Work This work builds on related work in co-located, language grounding (Roy, 2005) and recent work in grounded language semantic learning in various tasks and settings, notably learning descriptions of the immediate environment (Walter et al.); navigation (Kollar et al., 2010), and verbs (She and Chai, 2016). A common task to evaluate models convincingly is reference resolution to real-world objects. In most cases, the set of candidate objects are simultaneously visible within a scene. This project goes beyond this work: the robot's limited perspective allows it to see one or two objects in front of it at a time. The robot must settle on an object potentially without being able to see all of the objects-arguably a more realistic task (similar in spirit to navigation tasks such as Han and Schlangen (2017)) and language grounding setting; i.e., the two interlocutors do not share the same perspective. Moreover, previous work has assumed that humans will treat and interact with the robot in such a way that the robot will perform symbolic grounding, but it's not necessarily the same setting where humans acquire their first language: as children. It has been shown that humans treat robots differently depending on how they perceive the robot's gender (Eyssel and Hegel, 2012), social categorization (Eyssel and Kuchenbrandt, 2012), personality (Tay, Jung, and Park, 2014), and intelligence (Novikova et al.). In this work, we take this knowledge into account by using a robot that is more likely to be perceived and treated as a child by humans. We leverage the recently released Anki Cozmo robot as a platform to research spoken language grounding using the WAC model. The Cozmo robot (example in Figure 2) is a small robot that has a well-documented SDK and growing community support. 1 The robot itself has arms that can lift or push small objects, track wheels for movement, a simple text to speech synthesizer (i.e., the robot itself has a small speaker), and a black and white camera which is embedded in a small movable head that has animated eyes. Some built-in capabilities include facial recognition and some basic functionality for detecting specific types of objects (e.g., some blocks that are included with the robot). The hardware that makes up the robot offer enough degrees of freedom to make it a flexible and versatile research platform; the size and affordances of the robot make it manageable for researchers who are not roboticists. The SDK is written in Python making it easily extensible by the myriad of machine learning and natural language processing libraries. Importantly, the robot is affordable (under $180) and very portable. Our group has already acquired two Cozmo robots and we have found them to be accessible, usable, and flexible, even for fairly novice programmers. Language Grounding: Our Approach We follow a simple strategy for language grounding and acquisition: assuming that the system can detect (i.e., not recognize) objects-a precondition for learning words that denote objects (Bloom, 2000, p.61)-we apply the WAC model to learn novel words with minimal interactions. We also take into account the essential pragmatic scaffolding that must be in place for language grounding to take place: the Cozmo robot can track a person's face and facial features which we will leverage for positive and negative feedback when the robot performs certain tasks that involve word usage. 
Learning in real-time interaction is no trivial matter, but here the Cozmo platform (SDK: https://developer.anki.com/en-us) is useful: instead of using potentially complicated pointing recognition, we can assume that an object under discussion is the one directly in front of the robot. Evaluation of our model can be done by a reference resolution task similar to a game of fetch where a human player refers to an object and the robot must find that object as soon as possible. Our preliminary work using the Cozmo SDK has shown promise. We have applied some of our own object detection to the camera feed using OpenCV (see Figure 3) as well as the YOLO object detection model (Redmon and Farhadi, 2016). Having detected the objects, we can extract low-level object features for the WAC model which does the object recognition and grounds the words in the referring expressions to the objects. In our preliminary experiments, the WAC model selects the correct object about half of the time with minimal training data. Supporting the WAC model will be additional standard dialogue system modules, such as a conversational speech recognizer and a dialogue manager. We build off of our own previous work for evaluating conversational speech recognition (Baumann et al., 2016) to determine the best option, and dialogue management in an interactive setting with a robot (Kennington et al., 2014) using the OpenDial toolkit (Lison and Kennington, 2015). The outcome of this research will be improved understanding of how lexical semantic meaning is learned and represented through natural interaction. We are exploring a setting where Cozmo interacts with children to perform simple tasks, as Cozmo is marketed as a toy for children to learn procedural 'coding'. In our observations, children find Cozmo aesthetically pleasing and enjoyable to interact with. We anticipate several challenges: for WAC, the robot's integrated camera has a limited, black and white perspective (i.e., the WAC model cannot make direct use of color information in this setting). Verb learning of robot actions will also be challenging (e.g., move, pick up, push, etc.); we will build off of very recent work by She and Chai (2017). The task and setting will also challenge the WAC model due to differences in perspectives (e.g., the word left will mean something different depending on the perspective of the users and the robot). Though we are not roboticists, we feel it important to bring together dialogue systems and robotics researchers to work towards natural, spoken interaction with robots.
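As a companion to the pipeline just described (detect object regions in the camera feed, extract low-level features, ground referring expressions with WAC), the sketch below shows one hypothetical way to turn a grayscale camera frame into a small feature vector using OpenCV, in keeping with Cozmo's black-and-white camera. The Otsu-threshold "detector", the particular features and the function name are assumptions for illustration only; the actual system uses its own detectors (including YOLO) and features.

```python
# Minimal sketch: find the largest object region in a grayscale frame and
# turn it into the kind of low-level feature vector a WAC classifier could
# consume. Assumes `frame` is a single-channel uint8 numpy image and the
# OpenCV 4.x findContours return signature.
import cv2
import numpy as np

def extract_object_features(frame):
    # Separate foreground from background with Otsu thresholding, assuming a
    # darker object on a lighter surface (a stand-in for a real detector).
    _, mask = cv2.threshold(frame, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Assume the object under discussion is the largest blob in view,
    # e.g., the block sitting directly in front of the robot.
    contour = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(contour)
    region = frame[y:y + h, x:x + w]

    mean_intensity = float(region.mean()) / 255.0                 # brightness
    rel_area = (w * h) / float(frame.shape[0] * frame.shape[1])   # apparent size
    elongation = w / float(h)                                     # rough shape cue
    return np.array([mean_intensity, rel_area, elongation])

# Hypothetical use with the WAC lexicon sketched earlier:
# feats = extract_object_features(gray_frame)
# if feats is not None:
#     prob = lex.word_prob("dark", feats)
```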
2017-09-29T16:36:53.000Z
2017-09-29T00:00:00.000
{ "year": 2017, "sha1": "1403b7a094a0487cf7c6eb6b1021fe1b30f90f0d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c9b29b4f5008ef8352a92231e6baba05b9bf2675", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
62590405
pes2o/s2orc
v3-fos-license
Behind the Shell: Rigid Persons Clung onto It Introduction"The iron cage revisited" (DiMaggio & Powell, 1983) is a well-known paper that discusses "isomorphism" in organizations (Yasuda & Takahashi, 2007). The paper begins with an introduction of the final part of The Protestant Ethic and the Spirit of Capitalism (Weber, 1930). The key phrase, "iron cage," is derived from Weber.In Japan, the 1970s began to see the use of the term "iron cage" as Weber's metaphor for bureaucracy. This tendency has remained unchanged. For instance, in Yamanouchi (1997), a representative exposition of Weber, one can see "the modern bureaucratic order, which Weber called 'iron cage' " (Yamanouchi, 1997, p. 95), "modern bureaucracy named 'iron cage' " (Yamanouchi, p. 96), and "the 'iron cage' of bureaucracy" (Yamanouchi, p. 98).The imagery of "iron cage" as a symbol of bureaucracy was used to criticize the bureaucracy or Weber himself (Arakawa, 2007). For example, one stereotypical explanation stated that the "iron cage" (bureaucracy) has turned modern human beings into apathetic gears in a machine. The first page of DiMaggio and Powell (1983) also says that "the imagery of the iron cage has haunted students of society as the tempo of bureaucratization has quickened" (DiMaggio & Powell, 1983, p. 147).However, it may be surprising to note that the term "iron cage" is nowhere to be found in Weber (1920). The source for the term is Talcott Parsons' English translation of Weber (1930). The original German of Weber (1920) uses the term "Gehause." No dictionary defines this term as "cage."The word "Gehause" appears only once on pages 37 and 203 and twice on page 204, for a total of four times in Weber (1920). Correspondingly, Parsons' translation uses the phrase "order of things" on page 54, "iron cage" and "cage" on page 181, and "cage" on page 182 (Weber, 1930). In other words, "iron cage" is used only once (Weber, 1930, p. 181).The term "iron cage" became widely known due to the publication of Weber's biography, Iron Cage (Mitzman, 1969); however, Mitzman himself did not use the term "iron cage" in the text (Arakawa, 2007). Instead, he used the translation "a housing hard as steel" and criticized Parsons' translation in the footnote, saying "[the original German word "Gehause"] has a significance beyond the phrase 'iron cage' used by Parsons" (Mitzman, 1969, p. 172).Since then, English-speaking researchers have been critical of the term "iron cage." For example, Sayer (1991, p. 144) noted that a "shell on a snail's back" is a more appropriate term than "iron cage," saying it denotes "a burden perhaps, but something impossible to live without, in either sense of the word" and that "a cage remains an external restraint." In fact, the original "Gehause" has the meaning of a "snail's shell," and there are many examples of this word simply being translated as "shell" (Arakawa, 2007).A Deeper Meaning of ShellWhat was Weber trying to make known with the word "Gehause"? According to Weber (1920), asceticism emerged in medieval times and called on individuals to leave the secular world, enter monasteries, and serve God. Then, the Reformation took place and gave rise to Puritanism, which emphasized the virtue of having faith in Christ without leaving the secular world. This faith was accepted by the nascent middle class at the time. The monastic lifestyle was no longer seen as particularly holy. Instead, "a sanctified work-life in the secular world" was believed to be in accordance with God's will (Otsuka, 1991, p. 401). 
Secular jobs were sacred vocations, a calling from God ("Beruf" in German). It was in this setting that Parsons' translation of "iron cage" appeared.
Instead, "a sanctified work-life in the secular world" was believed to be in accordance with God's will (Otsuka, 1991, p. 401). Secular jobs were sacred vocations, a calling from God ("Beruf" in German). It was in this setting that Parsons' translation of "iron cage" appeared. Takahashi In Baxter's view the care for external goods should only lie on the shoulders of the "saint like a light cloak, which can be thrown aside at any moment". But fate decreed that the cloak should become an iron cage.…To-day the spirit of religious asceticism-whether finally, who knows?-has escaped from the cage. (Weber, 1930, p. 181) The "light cloak" on the shoulders became hard. Therefore, it is far more natural to think of the cloak as a "shell" rather than an "iron cage." Those who worked diligently out of faith eventually created a new society-a capitalistic society, which presupposed individuals with a strong desire to work. Eventually, a behavioral pattern was established in accordance with individuals' sense of divine calling. This pattern became a shell. Individuals were able to survive in the capitalistic society without religious faith as long as they wore this shell. According to Orihara (1969, pp. 292-296), this theme was a sublimation of Weber's experience. Weber had a smooth career and became a young professor at his mid-30s, but developed a neurological disorder. He found himself unable to accomplish even the bare minimum of his work assignments, and spent his days in therapy and suffering, traveling from place to place. This period, however, gave him great ideas. Weber is said to have admitted that he had been clinging onto academic work until that time as if he were clinging onto a type of talisman, even though he had no idea what he was protected from. In other words, the shell of acting out of an obligation for one's divine calling protects one just as a snail's shell and allows one to live in a capitalistic world without religious faith. Truth be told, the shell itself does not bind individuals. Rather, individuals themselves are clinging onto the shell, which is a form of talisman. In other words, individuals are not forced to do it. Instead, they are willing to do it. Behind the shell This is what Puritanism, which stresses austere lifestyle devoted to work, eventually became (Orihara, 1969, p. 294). In other words, people "seek protection, and enter into 'iron shells' of their own volition (or perhaps despite their distaste for doing so)" (Arakawa, 2001). Rigidity at the Flip Side of Shell A shell has two sides, though that does not mean that the front and back have separate functions. More accurately, if we took a shell with the front acting as a talisman and flipped it over, we would see rigid persons clinging onto it. In management studies, some researchers have recently begun to think along similar lines. Leonard-Barton (1992) noted an organization's up side core capabilities and down side/flip side core rigidities. People are rigidly clinging onto the shell, their talisman, believing that this is their "core." However, it does not matter to this rigidity whether the shell has competitive advantage or not. Even though the shell may be losing, or may have already lost, its competitive advantage, people still cling onto the flip side. Thus, they are driven to their ruin by inches. On the other hand, if a shell does have a competitive advantage, this would excuse the rigidity. We call this a case of rigidity but not one in which people are driven to their ruin by inches. 
This combination of excuses and rigidity is part of the discussion on population ecology. Hannan and Freeman (1984) noted that selection in populations of organizations favors forms with high levels of accountability (i.e., excuses) and high reliability/small variance of performance (i.e., rigidity). As a result, selection in populations of organizations favors organizations whose structures have high inertia (Hannan & Freeman, 1984, p. 153; Assumption 1 and Theorem 1). For example, computer companies that have sold their products by relying on the superiority of their hardware and paying less attention to developing software applications (Leonard-Barton, 1992, p. 119) certainly denote some rigidity. However, this rigidity itself is not necessarily good or bad. When companies are increasing sales and profits because of their excellent hardware, their clinging onto the development of hardware should create further excellent hardware. This is a great advantage for the company. However, when times change and various software applications give the company a competitive advantage, it becomes clear that even the best hardware will not necessarily sell. In this case, the rigidity of clinging onto the excellent hardware is seen as a problem.

Shell: Fossilization of Product Design

Just as Leonard-Barton (1992) used the example of computer hardware, various product designs are frequently used as concrete cases of theories and models in management studies. These product designs tend to be shells. One example is the Model T Ford automobile that sold during the early years of the 20th century. More than 15 million vehicles were sold over the course of 20 years between 1908 and 1927. The car was famous for ushering in motorization and causing a change in the lifestyles of average citizens. Ford experienced dramatic growth by clinging onto the shell of the Model T, though the company eventually was driven to ruin by inches as its product design fossilized (Takahashi, 2011b, 2011c). The Model T is one of the most popular product designs in management studies and has been extensively researched (Abernathy & Wayne, 1974; Takahashi, 2013b; Yamada, 2014). Various theories of the Model T have emerged, labeled diversely. One such label is "dominant design," a product design that dominates an era and is universally recognized (Abernathy, 1978; Akiike, 2013). The Model T is often used as an example of dominant design, as is IBM's System/360 computer released in 1964 (Teece, 1986). IBM was also driven to ruin by inches as its computer design fossilized. However, some readers may believe that, considering the above examples, the shell could be labeled other names 1 such as "dominant design." In the computer world, there is an even more important shell than that seen in the case of the System/360: the world's first general-purpose electronic digital computer, the ENIAC. Since only one machine was manufactured, the ENIAC was actually not a case of dominant design, nor was it a de facto standard, having no patents attached to it. However, it filled a role similar to the Model T (Takahashi, 2011d). "Shell" is a perfect designation for the ENIAC. Ford clung onto the Model T, IBM to the System/360, and the mainframe computer industry to the ENIAC. Hence, they emerged largely due to their shells and were eventually driven to their ruin by inches.
Growth and Gradual Decline by Clinging onto Shells For companies that have experienced periods of growth and maturity, it is easy to find shells other than product design. By clinging onto these shells, companies may achieve high growth and eventually be driven to their ruin by inches. This is the general function of the shell. We can find several examples of shells other than product design in the field (or "gemba" in Japanese). A. Retail networks: Company A succeeded in creating a network of retail stores across the country in a period of growth for product "a." The company can prosper for a while by clinging onto its current sales method. However, the company has entered into a period of maturity. Store owners also have begun to get older, with most having no successors. The number of stores has been declining in recent years, and sales have been gradually falling. In other words, Company A is going into a gradual decline. B. Parent company sales power: Company B is a subsidiary of Company β. This parent company has directed all other subsidiaries and partners to order from Company B. This led to strong performance during Company B's period of growth, and the company seemed as if it could not fail as long as it clung onto the influence of Company β. However, once Company β entered a period of maturity, the sales of Company B began to decline. In other words, Company B is going into a gradual decline. C. Real estate: Company C was started by a landowner's son, adjacent to a large shopping center. The shopping center had many customers when it opened, and sales were strong. After this initial boom, however, Company C began to lose customers. Company C's land and buildings were owned by the father, which has enabled it to avoid incurring debts because the father charged a reduced rent. D. Patents: Company D was famous for a patented device that a former president developed. The current president increased sales by providing business solutions with customization of this device. The profit margin was high because of the patents. However, similar products that did not violate the patents began to emerge, and the company is going into a gradual decline. E. Franchise: Company E operated franchise stores in a certain region. The franchise agreements did not allow expansion into other regions. Simultaneously, this arrangement generated a satisfying profit for Company E. However, Company E had already saturated the market of this region. Still, an independent operator entered the region. Consequently, Company E is going into a gradual decline. In each of these examples, companies grow rapidly by clinging onto their shells and coming under their protection. However, they eventually are driven to their ruin by inches. Takahashi (2013a) constructed imaginary stories based on these simple, conventional cases, but many readers suspected that Company X in the stories was their own company, or that Company Y was a company they knew well. Thus, similar examples abound around us in Japan. Here, there, and everywhere, we can find out the companies that are going into a gradual decline and driven to their ruin by inches. Interestingly, in the 1980s, Tichy and Devanna (1986) found something similar to shells of the American steel and auto companies in a bind. They begin with an explanation of the "boiled frog phenomenon," an analogy of a typical biological reaction of a frog in a pot of water. If the frog is suddenly put into a pot of hot water, it will jump out of the pot to escape the heat. 
On the other hand, if the frog is placed into a pot of cold water, which is then slowly heated, the frog will not recognize the gradual increase in heat and remain in the pot, eventually boiling to death. The American steel and auto companies were victims of this phenomenon as they experienced a gradual decline due to a "cultural cocoon" that kept them from realizing a change in temperature. 2 Thus, it has been suggested that we can identify cocoons around companies driven to their ruin by inches. Cultural cocoons may be related to the experience of shells.

Concluding Remarks

In conclusion, it should be emphasized that Companies A, B, C, D, and E are driven to their ruin by inches, but do not immediately become bankrupt. The examples of the real estate of Company C and the patents of Company D remind us of "rent." Originally, rent is a fee paid by land users to landowners. However, "rent" in the system of national accounts (SNA) includes patent fees and copyrights, 3 in addition to fees for the use of land. The cases of Company C and Company D fit this mold. Moreover, in the field of strategic management, the concept of rent is a broad one. In other words, it is profitability above profit standards (Takahashi & Shintaku, 2002), and is thus not limited to real estate or patents. Retail networks, the sales power of a parent company, and franchises are all sources of rent and allow companies to avoid quick bankruptcy due to a large deficit. From the viewpoint of strategic management, shells are the source of rent. Therefore, when a company is young and weak, it has a tendency to cling onto its shell. Over time, the benefits of the shell diminish. Even so, the shell still contributes to the company's earnings, allowing the company to make profits. As long as companies cling onto the shell, both managers and employees fully understand that they cannot expect further growth, and it is obvious that the company is going into a gradual decline and driven to ruin by inches. Therefore, such companies are common in our mature society.

Footnote 2: The "cultural cocoon" is an attractive idea, though an incorrect one to use in explaining the boiled frog phenomenon. The effective temperature hypothesis is a better one. Refer to Takahashi (1989, 1993, 2003, 2013c) or Takahashi, Ohkawa, and Inamizu (2014) for more detail.
2019-02-14T14:09:38.333Z
2015-02-15T00:00:00.000
{ "year": 2015, "sha1": "29d9bb993636b734f320632406c6ec46c9071add", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/abas/14/1/14_1/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "7f9c0cc9c4778b01cf017e2ea10a3bc83b9a9c9a", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Computer Science" ] }
220150862
pes2o/s2orc
v3-fos-license
Caffeine as a tool for investigating the integration of Cdc25 phosphorylation, activity and ubiquitin-dependent degradation in Schizosaccharomyces pombe The evolutionarily conserved Cdc25 phosphatase is an essential protein that removes inhibitory phosphorylation moieties on the mitotic regulator Cdc2. Together with the Wee1 kinase, a negative regulator of Cdc2 activity, Cdc25 is thus a central regulator of cell cycle progression in Schizosaccharomyces pombe. The expression and activity of Cdc25 is dependent on the activity of the Target of Rapamycin Complex 1 (TORC1). TORC1 inhibition leads to the activation of Cdc25 and repression of Wee1, leading to advanced entry into mitosis. Withdrawal of nitrogen leads to rapid Cdc25 degradation via the ubiquitin- dependent degradation pathway by the Pub1 E3- ligase. Caffeine is believed to mediate the override of DNA damage checkpoint signalling, by inhibiting the activity of the ataxia telangiectasia mutated (ATM)/Rad3 homologues. This model remains controversial, as TORC1 appears to be the preferred target of caffeine in vivo. Recent studies suggest that caffeine induces DNA damage checkpoint override by inducing the nuclear accumulation of Cdc25 in S. pombe. Caffeine may thus modulate Cdc25 activity and stability via inhibition of TORC1. A clearer understanding of the mechanisms by which caffeine stabilises Cdc25, may provide novel insights into how TORC1 and DNA damage signalling is integrated. Background The tightly regulated timing of mitosis in S. pombe occurs via the reciprocal activities of Cdc25 and Wee1 on Cdc2 inhibitory phosphorylation. Wee1 negatively regulates Cdc2 by phosphorylation of tyrosine residue 15 (Tyr15), and this is counteracted by the phosphatase activity of Cdc25 [1][2][3]. Cells must advance or delay mitosis under nutrient stress or genotoxic/environmental stress conditions respectively, several signalling pathways converge on the regulation of the Cdc25-Wee1 dual switch to effect accelerated entry into mitosis or a "double-lock" checkpoint mechanism. These pathways include the Target of Rapamycin Complex 1 (TORC1), the DNA damage response (DDR) and the environmental stress response (ESR) pathways [3][4][5][6][7] (Fig. 1). The methylxanthine caffeine is among the most widely used neuroactive substances in the world [8][9][10][11]. Caffeine exerts various effects on cellular and organismal physiology and is known to inhibit several members of the phosphatidylinositol 3 kinase-like kinase (PIKK) family including ataxia telangiectasia mutated (ATM) and ataxia telangiectasia and rad related (ATR) kinase homologue Rad3 and TORC1 in vitro [10,[12][13][14]. Early studies suggested that caffeine overrides DNA damage checkpoint signalling, by inhibiting Rad3 and its homologues but this view remains controversial [12,15]. Interestingly, TORC1 appears to be the major cellular target of caffeine in vivo [15][16][17]. The Tor2-containing TORC1 complex is a negative regulator of Cdc25 activity that determines the timing of mitosis in response to nutrient availability [18,19]. We and others have previously demonstrated that caffeine induces Cdc25 accumulation in mammalian and S. pombe cells [20,21]. The mechanisms by which caffeine stabilises Cdc25 in S. pombe remain unclear, but do not result from increased cdc25 + mRNA expression. Furthermore, Cdc25 expression was required for caffeine-mediated DNA damage checkpoint override in S. pombe. 
Intriguingly, the effect of caffeine on cell cycle progression under normal growth conditions mimics that of TORC1 inhibition [21]. Caffeine may thus modulate the activity of several pathways that converge on the regulation of Cdc25. In fact, caffeine clearly activates the ESR pathway [21,22]. One interesting question concerns how the regulation of Cdc25 phosphorylation, activity and ubiquitin-dependent degradation is integrated [23][24][25]. Given that cross talk occurs between the TORC1, DDR and ESR pathways [26][27][28], understanding how caffeine modulates Cdc25 activity and stability in S. pombe may shed further light on how these pathways interact [4,6,21,29]. Although the coregulation of Cdc25 and Wee1 is crucial for the proper timing of mitosis or cell cycle arrest and is effected via the same pathways [30], this review will focus mainly on Cdc25 regulation for simplicity.

Main text

Cell cycle dependent regulation of Cdc25 activity, phosphorylation and ubiquitin-dependent degradation by the 26S proteasome

Cdc25 levels oscillate during cell cycle progression in a manner similar to cyclins, rising steadily throughout the cell cycle, before becoming hyper-phosphorylated and degraded during mitosis [1,2,23,31]. Expression of Cdc25 appears to be dependent on TORC1 activity, as nutrient deprivation leads to a rapid loss of expression [1,2]. In the absence of a nitrogen source, cdc25+ mRNA translation ceases and the protein is rapidly degraded via the ubiquitin-dependent 26S proteasome pathway [32][33][34]. Wee1-mediated phosphorylation of Cdc2 tyrosine residues negatively regulates the activity of the Cdc2-Cdc13 Maturation Promoting Factor (MPF). Cdc25 removes inhibitory phosphorylation on Cdc2, leading to an autocatalytic positive feedback loop, repression of Wee1 activity and full Cdc25 activation [1,31,35]. The HECT-type ubiquitin ligase Pub1 targets Cdc25 for ubiquitin-dependent 26S proteasome degradation in S. pombe. Deletion of pub1+ raises Cdc25 levels and renders cells resistant to Wee1 activity. Furthermore, the cyclic expression pattern of Cdc25 appears deregulated in pub1Δ mutants [32,34]. Of note is that pub1Δ mutants exhibit several phenotypes, suggesting additional Pub1 substrates. Interestingly, Pub1 also controls the ubiquitin-dependent regulation of amino acid uptake, potentially linking nutrient absorption to Cdc25 and cell division via Sty1 and TORC1 [36][37][38]. The Anaphase-Promoting Complex (APC) may also facilitate the degradation of Cdc25 at mitosis [39,40]. Cdc25 is a highly unstable protein with a relatively short half-life [2,34]. Cdc25 levels oscillate through the cell cycle, peaking at mitosis and then rapidly declining just prior to cytokinesis [1,2,23,34]. Recent studies by Lucena et al. [23] reveal that Cdc25 in S. pombe becomes highly phosphorylated in G2, becomes dephosphorylated and is then hyper-phosphorylated between mitosis and cytokinesis. Cdc25 levels then decline as the cells proceed through mitosis. Phosphorylation of Cdc25 during normal cell cycle progression is dependent on Cdc2 phosphorylation sites [23,41]. (Fig. 1 legend: Caffeine was initially thought to inhibit Rad3 activity, resulting in DNA damage checkpoint override. More recent studies have identified the TORC1 complex as the major target of caffeine in vivo. TORC1 delays mitosis by negatively regulating Cdc25 and activating Wee1.
TORC1 inhibition advances the timing of mitosis suggesting caffeine can modulate cell cycle progression by inhibiting this complex. Caffeine activates the Sty1 regulated environmental stress response (ESR) pathway, leading to partial Cdc25 inhibition by Srk1. Depending on the degree of activation, Sty1 can also modulate Cdc25 activity to advance mitosis. The Mad2 spindle checkpoint protein is involved in the regulation of the DNA replication checkpoint. Caffeine's effect on cell cycle progression is partially inhibited by Mad2. *MTs (Microtubules). Green arrows indicate target activation. Red lines indicate inhibitory signalling with a rise in cyclin Cdc13 levels [23]. Dephosphorylation of Cdc25 at mitosis is regulated by the protein phosphatase 2A and its regulatory subunit Pab1 (PP2A Pab1 ). In mutants lacking pab1 + , Cdc25 remains hyperphosphorylated throughout the cell cycle and the timing of mitosis exceeds that of wild type cells. The degradation of Cdc25 still occurs in strains expressing mutant isoforms lacking Cdc2 phosphorylation sites, as well as in pab1Δ mutants. In addition, the relative abundance of Cdc25 during the cell cycle in pab1Δ mutants is unaffected [23]. We have also detected a Cdc25 expression negative-feedback loop in S. pombe [21]. Similarly, Clp1 phosphatase activity enhances the rate of Pub1-mediated Cdc25 degradation and timing of mitosis [34,39,42]. In clp1Δ mutants, Cdc25 remains phosphorylated throughout the cell cycle and the cell cycle is lengthened relative to wild type cells. Levels of Cdc25 are also elevated relative to wild type cells in clp1Δ mutants [23,34]. Clp1 also cooperates with the Pub1 and APC E3-ligases to facilitate the rapid degradation of Cdc25 at mitosis [34,39,40,42]. PP2A Pab1 and Clp1 phosphatase activity and Cdc25 degradation are thus important for regulating the timing of mitosis. In fact, high Cdc2 activity delays the timing of mitosis in S. pombe by inhibiting the septation initiation network (SIN) [34,39,40]. Hence, the link between Cdc25 phosphorylation, activity and degradation remains unclear (discussed further below) [24]. Importantly, under normal cell cycle conditions TORC1 inhibits the Greatwall kinase phosphorylates Endosulfine, which is a potent inhibitor of PP2A Pab1 phosphatase activity. When nitrogen is withdrawn or TORC1 is chemically inhibited, PP2A Pab1 is indirectly inhibited, Cdc25 becomes hyperphosphorylated and entry into mitosis in these cells is advanced. This activity also links the Sty1 regulated environmental stress response pathway to TORC1 and Cdc25 regulation [43,44]. Lucena et al. also reported that Cdc25 phosphorylation and dephosphorylation still occur in pab1Δ mutants [23]. This study did not address however, the role of Srk1dependent Cdc25 phosphorylation during the normal cell cycle (reviewed below). As Srk1-dependent phosphorylation of Cdc25 does not involve the phosphorylation of Cdc2 consensus sites, sequential and differential phosphorylation or combinations thereof may determine the precise timing of mitosis [23,25,45]. TORC1 thus regulates the timing of mitosis by modulating PP2A Pab1 activity to inhibit Cdc25 and activate Wee1. In contrast, TORC1 inhibition results in Cdc25 activation and the degradation of Wee1 [18,19,44]. As PP2A Pab1 and Clp1 also regulate the phosphorylation, activity and localisation of Wee1, these pathways serve to integrate Cdc25 and Wee1 activity for the proper timing of mitosis [18,19,30]. 
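The reciprocal Cdc25-Wee1 control of Cdc2 described above behaves as a bistable switch. As a purely illustrative aid (not taken from the cited studies; the species, rate laws and parameter values below are hypothetical simplifications), a minimal simulation shows how the positive/double-negative feedback can leave active MPF in either a low or a high state at the same cyclin level:

```python
from scipy.integrate import solve_ivp

# Toy model only (all parameters are hypothetical placeholders): M is active
# Cdc2-Cdc13 (MPF); the remaining cyclin-bound Cdc2 is held in the inactive,
# Tyr15-phosphorylated form. Active MPF stimulates Cdc25 and represses Wee1,
# i.e. the positive/double-negative feedback loop described in the text.

def hill(x, k=0.3, n=4):
    return x ** n / (k ** n + x ** n)

def dmdt(t, y, cyclin=0.6):
    m = y[0]
    inactive = max(cyclin - m, 0.0)        # Tyr15-phosphorylated pool
    cdc25 = 0.1 + 0.9 * hill(m)            # MPF activates Cdc25
    wee1 = 1.0 - 0.9 * hill(m)             # MPF represses Wee1
    return [cdc25 * inactive - wee1 * m]

# With the same cyclin level, the steady state depends on where the system
# starts: the feedback makes MPF activation switch-like rather than graded.
for m0 in (0.02, 0.55):
    sol = solve_ivp(dmdt, (0, 500), [m0])
    print(f"start M = {m0:.2f} -> steady-state M ~ {sol.y[0, -1]:.2f}")
```

In this toy setting, checkpoint inputs that inhibit Cdc25 or keep Wee1 active simply make the low state harder to leave; the sketch is meant only to illustrate the logic of the dual switch, not the quantitative behaviour of S. pombe cells.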
DNA damage checkpoints and Cdc25 inhibition Stalled replication during S-phase or DNA strand breaks in G2, activate the Rad3 regulated DNA damage response pathway and respective downstream activation of Cds1 and Chk1 kinases (reviewed in [3,4]). The Cds1 and Chk1 kinases in turn, phosphorylate key inhibitory serine and threonine residues on Cdc25. In addition to inhibiting Cdc25 activity within the nucleus, the phosphorylation of these residues also facilitates binding of the 14-3-3 protein Rad24, nuclear export and sequestration within the cytoplasm [46][47][48][49]. Interestingly, Cdc25 levels accumulate in the cytoplasm under conditions of cell cycle arrest following DNA damage. Cdc25 levels also accumulate when cell cycle mutants cease dividing at the restrictive temperature. This "stockpiling" of inactive Cdc25 may facilitate rapid cell cycle re-entry following the completion of DNA damage repair [31]. Later studies indicated that Cdc25 nuclear export is not required for DNA damage checkpoint enforcement, indicating that Cds1 or Chk1 mediated phosphorylation is sufficient to inhibit the activity of the phosphatase [47,50]. Other studies suggest that additional redundant pathways exist, for the regulation of Cdc25 mutants that cannot be phosphorylated [51,52]. When the 9-12 major inhibitory phosphorylation sites are mutated (Cdc25 (9A )-GFP int , Cdc25 (12A) -GFP int ), S. pombe cells are still able to activate an effective DNA damage response. This form of DNA damage checkpoint activation, results from the rapid degradation of these mutant Cdc25 isoforms and a Mik1 dependent cell cycle arrest [51,52]. The Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int expression levels are relatively stable under normal cell cycle conditions, accumulate in the nucleus to a greater extent than the wild type Cdc25 -GFP int but have a slightly shorter half-life. Enforced nuclear localisation of Cdc25 (Cdc25-NLS-GFP int ) does not affect replication checkpoint activation and stockpiling of the phosphatase occurs as with the wild type isoform. The levels of Cdc25-NLS-GFP int are also relatively higher, than in wild type Cdc25-GFP. In contrast, Cdc25 (9A) -NLS-GFP int is degraded when the replication checkpoint is activated. Cdc25 (9A) -NLS-GFP int also appears to be relatively unstable compared to Cdc25-NLS-GFP int , suggesting Cdc25 phosphorylation prevents degradation during the normal cell cycle [51,52]. These observations indicate that Cdc25 degradation occurs in the nucleus following stalled replication or DNA damage. They also suggest that activation of the replication or DNA damage checkpoints, induces an increase in the rate of non-phosphorylated Cdc25 degradation. In this regard, it is important to note that Cut8 localises the 26S proteasome to the nucleus, accumulates following DNA damage and is required for DNA repair. However, mutants lacking cut8 + are checkpoint proficient [53]. As wild type Cdc25 degradation is not required for replication stress or DNA damageinduced cell cycle arrest, it would be interesting to study the impact of a cut8 deletion on Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int degradation. Cds1 or Chk1-mediated phosphorylation of the major inhibitory phosphorylation sites is thus sufficient to prevent degradation by the 26S proteasome. Other lines of evidence suggest, that the Rad3 regulated checkpoint pathways regulate Cdc25 expression and stability even under normal growth conditions. 
Deletion of rad3 + or cds1 + suppressed cdc25 + mRNA expression but induced the accumulation the Cdc25 protein. Unlike wild type cells, rad3Δ mutants continue to express Cdc25 even in stationary phase [21]. Similarly, the rate of degradation of Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int mutant protein is delayed in a cds1Δ background [52]. Rad3 may thus regulate Cdc25 stability in a Cds1-dependent manner even under normal growth conditions. Cds1 also accumulates in response to TORC1 inhibition following glucose withdrawal, providing a further link between TORC and DNA damage checkpoint signalling [54]. While the Pub1 E3-ligase targets Cdc25 to the 26S proteasome for degradation, deletion of pub1 + did not prevent the degradation of Cdc25 (9A) -GFP int mutant protein in the presence of hydroxyurea [52]. Furthermore, pub1Δ mutants have elevated Cdc25 levels, fail to adequately degrade the phosphatase at mitosis and are sensitive to genotoxic agents [23,32,55]. Interestingly, mutants also display sensitivity to caffeine ([56], Alao and Sunnerhagen, unpublished results). It is thus possible that the APC mediates the degradation of Cdc25 isoforms lacking major inhibitory phosphorylation sites, following the activation of the replication or DNA damage checkpoints [39,40,52]. Indeed, APC mediated degradation of mitotic cyclins and regulators is required for proper exit from mitosis and progression through cytokinesis [39]. Clp1 is also required for full activation of Cds1 in response to replication stress [57]. Interactions between the replication checkpoint and spindle checkpoint pathways also contribute to the enforcement of cell cycle arrest under genotoxic conditions. These interactions may also contribute to the regulation of Cdc25 stability, via differential combinations of positive (Cdk1, Plo1 mediated) and negative (Cds1, Chk1, Srk1 mediated) phosphorylation of serine/threonine residues [3,21,29,40,58] in DNA damage checkpoints and Cdc25 inhibition section [59][60][61][62][63][64][65]. In Cdc25 (9A) -GFP int mutants Mik1 is required for effective maintenance of the replication checkpoint [51,52]. Thus, while Rad24 binding slightly enhances Cdc25 stability under normal growth conditions, it prevents the degradation of the phosphatase when the DNA damage or replication checkpoint pathways are activated. The existence of these redundant mechanisms suggests that even modest Cdc25 activity during DNA damage checkpoint activation can contribute to inappropriate progression through mitosis [21,51,52]. Genomic studies have also revealed a role for the DNA damage response pathway, in mediating resistance to caffeine. Mutants with rad3Δ, rad51Δ, or rad54Δ mutations also show sensitivity when grown on solid media in the presence of caffeine [22]. Caffeine may thus induce DNA damage, but the underlying mechanisms remain unclear. It is interesting to note however, that these findings hint at caffeine-induced DNA damage and Rad3 activation in S. pombe. Caffeine also appears to accelerate the timing of mitosis under genotoxic conditions, rather than delaying cell cycle progression. Together, these observations provide additional evidence that Rad3 is in fact not a target of caffeine in this organism [21]. Effect of caffeine on Cdc25 expression and stability Caffeine can inhibit several members of the PIKK family, and inhibition of Rad3 and its homologues ATM and ATR was thought to be the mechanism underlying checkpoint override [10,[12][13][14]. 
This paradigm has proved controversial, as checkpoint override by caffeine can occur in the absence of ATM, ATR or Rad3 inhibition [15,21,66]. It has also become apparent, that TORC1 and not ATM homologues are the preferred target of caffeine in vivo [15][16][17]. TORC1 regulates the timing of cell division in response to nutrient availability via the S. pombe Greatwall kinase homologue Ppk18 [18,67]. Inhibition of TORC1 activity activates Cdc25, induces Wee1 degradation and advances cells into mitosis. The exposure of S. pombe cells to caffeine advances mitosis in a manner that resembles TORC1 inhibition [21]. Caffeine also moderately activates the Sty1-regulated ESR pathway [21,22]. Modest Sty1 activation can drive cells into mitosis in a manner dependent on Plo1 and Cdc25 [6,43]. Activation of Sty1 has been shown to induce Cdc25 stabilisation, presumably as a consequence of Srk1-mediated phosphorylation, Rad24 binding and sequestration within the cytoplasm [25]. Caffeine may thus modulate cell cycle progression by partially inhibiting TORC1, moderately activating Sty1 or otherwise modulating Cdc25 activity to advance mitosis. In fact, Cdc25 expression was necessary for caffeine-mediated DNA damage checkpoint override in our studies [21]. Previous studies have shown that caffeine induces the accumulation of Cdc25B in mammalian cells [20]. We have similarly demonstrated that caffeine induces the accumulation of Cdc25 in S. pombe under normal cell cycle conditions as well as under environmental stress or genotoxic conditions. This effect on Cdc25 occurs at the post-translational level since caffeine suppresses cdc25 + mRNA expression. Interestingly, rad3∆ and cds1∆ deletions also stabilised Cdc25 protein levels while supressing its mRNA expression. Furthermore, caffeine is more effective at advancing mitosis in rad3Δ and cds1Δ mutants relative to wild type cells [21]. We also noted that DNA damage checkpoint mutants do not just fail to arrest cell division but are accelerated into mitosis following DNA damage. This change in cell cycle kinetics resembles the effect of caffeine on cells exposed to genotoxic agents [21,29,68,69]. Caffeine thus mimics the loss of DNA damage checkpoint signalling in S. pombe, without inhibiting Rad3 activity [21]. This effect of caffeine also mimics that of the Tor2 inhibitors rapamycin and torin1 on cell cycle progression in S. pombe [44]. Mutants lacking functional Clp1 or Srk1 that normally negatively regulate Cdc25 are more sensitive to caffeine mediated DNA damage checkpoint override than wild type cells. The phosphorylation of Cdc25 is therefore not required for the stabilising effect of caffeine on the phosphatase but influences its effect on cell cycle progression [21]. Caffeine inhibits the degradation of Cdc25 mutants (Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int ) lacking the major inhibitory phosphorylation sites [21,51,52]. In contrast to the stockpiling of wild type Cdc25 when cells are arrested, the Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int mutants are degraded in the presence of genotoxic agents. Redundant mechanisms thus exist, to clear excess non-phosphorylated Cdc25 from the nucleus when DNA damage checkpoint signalling is activated [51,52]. Caffeine clearly stabilised these mutants in the presence of genotoxic agents [21]. 
As Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int are relatively stable under normal cell cycle conditions, caffeine must inhibit a pathway that targets non-phosphorylated Cdc25 for ubiquitindependent 26S proteasomal degradation under genotoxic conditions. The ability of caffeine to override checkpoint signalling in cells expressing these mutants, is also enhanced relative to the wild type protein [21,51,52]. The rapid degradation of Cdc25 isoforms that cannot be phosphorylated (Cdc25 (9A) -GFP int and Cdc25 (12A) -GFP int ) [51] following genotoxic insults, hints at an increase in 26S proteasome mediated protein degradation. This redundant mechanism clears Cdc25 that is unphosphorylated from the nucleus [51,52]. These studies also demonstrated that Cdc25 protection from degradation occurs via Chk1 and Cds1 inhibitory phosphorylation. As these isoforms are relatively stable under normal cell cycle conditions, genotoxic conditions must somehow enhance the targeting of unphosphorylated Cdc25 to the 26S proteasome [51,52]. Caffeine thus suppresses Cdc25 degradation independently of Cds1, Chk1 and Srk1-mediated phosphorylation [21]. In fact, exposure to 0.6 M KCl induced the degradation of Cdc25 (9A) -GFP int in a manner similar to what was observed with genotoxic agents (Alao and Sunnerhagen, unpublished results). Mechanisms underlying caffeine-induced Cdc25 stabilisation By what mechanism(s) could caffeine affect the rate of Cdc25 degradation via the 26S proteasome? Caffeine has been reported to induce the ubiquitin-dependent degradation of certain proteins in mammalian cells [70]. The rapid degradation of Cdc25 isoforms that cannot be phosphorylated under genotoxic conditions, hints at the activation (or increased activity) of a ubiquitin-dependent degradation pathway. Alternatively, a general increase in the overall rate of ubiquitin-dependent degradation may occur under these conditions. Clearly further studies on the regulation of Pub1 (the E3-ligase targeting Cdc25) activity under normal and genotoxic conditions, in the presence and absence of caffeine are warranted. Such studies may also provide novel insights into the regulation of Cdc25 stability in S. pombe. Similarly, Cut8 is required to localise the 26S proteasome to the nucleus and plays an important role in DNA damage repair. Cut8 accumulates in response to DNA damage but is not required for checkpoint activation [53,71]. The accumulation of Cut8 in the presence of genotoxic agents suggests a possible increase in the levels of ubiquitin-dependent protein degradation and could also drive progression through mitosis. Inhibiting Cut8 accumulation could be one possible mechanism, whereby caffeine attenuates the ubiquitin-degradation of nuclear Cdc25 (Alao and Sunnerhagen, unpublished results). Interestingly, cut8Δ mutants also display sensitivity to caffeine [22]. These observations suggest that caffeine is itself a DNA damaging agent [22,53] and may complicate studies on the effect of the drug on the DNA damage response pathway. Nevertheless, the ability of caffeine to override checkpoint signalling and drive cells trough mitosis appears to underlie its chemo-and radio-sensitising effects [9]. Lastly, studies on the effect of caffeine-mediated TORC1 inhibition in the context of mitotic progression are also potentially important. TORC1 mediates the timing of mitosis, by co-ordinating the phosphorylation, activity and expression levels of Cdc25 and Wee1 [18,23,44]. 
The effect of caffeine on cell cycle progression resembles that of more typical TORC1 inhibitors by accelerating the timing of mitosis in S. pombe [21,44]. Caffeine could thus advance the timing of mitosis by indirectly increasing Cdc25 activity while inhibiting the activity of Wee1. Comparing the effects of TORC1 inhibitors on checkpoint activation with those of caffeine would be interesting. New antibodies that detect hyperphosphorylated Cdc25 and Wee1 have recently been reported. Studies on the effect of caffeine on cell cycle progression in various genetic backgrounds (e.g. mutants of the TORC1 signalling pathway such as pab1Δ) using these tools would also be useful [23].
Conclusion
Despite more than two decades of research, the precise mechanisms whereby caffeine overrides checkpoint signalling remain unclear [9,10,17,21,66]. The more recent finding that TORC1, and not Rad3, appears to be the major target of caffeine in vivo is particularly relevant in this regard [15]. It is thus likely that caffeine overrides DNA damage checkpoint signalling independently of Rad3 inhibition. Modulation of TORC1 activity by caffeine could account for its effects on cell cycle progression [17,44] (Fig. 1). Furthermore, caffeine also targets other pathways, at least some of which interact with each other [21,29]. Clearly, understanding how caffeine suppresses the degradation of Cdc25 in S. pombe is of central importance. Studies of this nature may shed light not only on the molecular pharmacology of caffeine, but also on how signalling pathway crosstalk impacts on cell cycle regulation. With the new insights and tools available, we can look forward to many more years of exciting research in this area.
2020-06-29T14:30:22.801Z
2020-06-29T00:00:00.000
{ "year": 2020, "sha1": "3ff8d7c8dad79fec77a6d760425b6c69b280ef8c", "oa_license": "CCBY", "oa_url": "https://celldiv.biomedcentral.com/track/pdf/10.1186/s13008-020-00066-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ff8d7c8dad79fec77a6d760425b6c69b280ef8c", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
251778030
pes2o/s2orc
v3-fos-license
CYBER WARFARE AS PART OF RUSSIA AND UKRAINE CONFLICT
Russia and Ukraine have had tense relations since 2014, with violence erupting as a result. Cyber attacks have become an integral aspect of this conflict, in addition to the border issue and the separatist movement in Ukraine. Tensions between the two countries grew in 2021, and a significant cyber attack on the Ukrainian government website occurred in early 2022. The Ukrainian government's claim that Russia was behind the cyber assault has exacerbated the dispute between the two countries. Based on the timing of the conflicts and of the cyber attacks related to Ukraine, the goal of this study is to examine the relationship between cyber assaults and the political policies of the two countries toward the conflicts that occur. It also describes the cyberspace dilemmas involved in finding evidence of the real actors behind cyber attacks.
Introduction
In 1991, Ukraine declared independence from the Soviet Union. Until recently, Ukraine was closer to the heart of Russia than other parts of the former Soviet Union. Even so, Russians consider Ukraine to be a part of their culture. The relationship, however, is far from equal. Russia has been waging war against Ukraine for many years. Tensions between Russia and Ukraine flared in March 2014, when Russian troops seized control of Ukraine's Crimea region and annexed the peninsula after Crimeans opted to join the Russian Federation in a disputed local referendum. The rights of Russian people and Russian-speakers in Crimea and southeastern Ukraine must be protected, according to Russian President Vladimir Putin. According to Bloomberg, Ukraine had its deadliest cyber strike in four years on January 14, 2022, when nearly 70 of its government entities were targeted by enormous cyber attacks. According to Ukrainian authorities, the hack did not result in any significant data leaks, but investigators are still working on the case and gathering evidence. Based on these circumstances, the purpose of this research is to determine whether there is a link between the cyber assaults in Ukraine and the conflict between Russia and Ukraine. The next section will discuss the timing of cyber attacks and the growth of tensions between Russia and Ukraine in order to address this question.
The Conflict Timeline of Russia and Ukraine
As previously noted, the crisis between Russia and Ukraine has drawn the United States into a direct role as it has intensified. The two countries have had several bilateral encounters, both at the foreign ministry level and between the two great powers' leaders. However, no deal has yet been achieved between the two warring countries that will lead to peace. President Joe Biden had previously stated that if Russia continued to intervene against Ukraine, it would face severe economic consequences. Biden also stated that if the situation intensified, he would send defense assistance to Ukraine.
Dec-21 A ransomware attack on Australian power firm CS energy was blamed on a Russian gang. This announcement came after Australian news outlets blamed the attack on Chinese government hackers. Nov-21 Personal information of roughly 3,500 people, including government officials, journalists, and human rights advocates, was targeted by a Russian-speaking organization. To obtain access to private email accounts and financial information, the group utilized malware on Android and Windows devices.
Oct-21 The Russian Foreign Intelligence Service (SVR) has initiated a campaign aimed at resellers and other technological service providers who customize, deploy, and manage cloud services, according to an American corporation. Sep-21 The EU officially condemned Russia for its role in the 'Ghostwriter' cyberattack, which targeted numerous EU member nations' elections and political processes. Russian hackers have been hacking government officials' social media accounts and news websites since 2017, with the purpose of instilling doubt in US and NATO military. Sep-21 Ghaleb Alaumary was sentenced to more than 11 years in jail by the US Department of Justice for assisting North Korean cybercriminals in money laundering. ATM cash-out operations, cyber-enabled bank robberies, and business email compromise (BEC) schemes were among the services he provided. In the United States and the United Kingdom, these attacks targeted banks, professional soccer clubs, and other unspecified businesses. Sep-21 a cyberattack on the United Nations targeted users on the UN network in order to obtain long-term intelligence. The hacker gained access to their networks by purchasing stolen user credentials on the dark web. Sep-21 A series of cyberattacks against private and public IT systems, according to the Norwegian government, were carried out by criminal actors supported by and operating out of China. According to their investigation into the attacks, the perpetrators tried to obtain classified material about Norway's national defense and security intelligence. Aug-21 a cyber-espionage outfit associated to one of Russia's secret services attempted spear-fishing attacks on the Slovak government. Aug-21 In order to coordinate anti-Kremlin voting in the parliamentary elections next month, Russia targeted and blocked information on a smart voting software built by Kremlin foe Alexei Navalny and his friends. Jul-21 The Russian defense ministry said it was the victim of a DDoS attack that forced the shutdown of its website, claiming the attack originated outside the Russian Federation. Jul-21 Russian hackers took advantage of a flaw in Kaseya's virtual systems/server administrator (VSA) software to launch a ransomware assault on the company's network. Around 1,500 small and midsized firms were hacked, and the criminals demanded $70 million in payment. Jul-21 The Ukrainian Ministry of Defense stated that Russian hackers hacked its naval forces' website and published false reports concerning the multinational Sea Breeze-2021 military exercises. Jun-21 DDoS assaults were allegedly launched against Vladimir Putin's annual phone-in event, according to Russia. Jun-21 Hackers affiliated to Russia's Foreign Intelligence Service put malicious software on a Microsoft system, giving them access to accounts and contacts. The majority of the consumers targeted were based in the United States and worked for IT firms or the government. Jun-21 From 2019 through 2021, the Russian GRU tried a series of brute force access attacks against hundreds of government and private sector targets around the world, targeting firms using Microsoft Office 365® cloud services, according to the US and British governments. Jun-21 The tracking data of two NATO ships, the HMS Defender of the United Kingdom Royal Navy and the HNLMS Evertsen of the Royal Netherlands Navy, was allegedly fabricated off the coast of a Russiancontrolled naval base in the Black Sea, according to the United States Naval Institute (USNI). 
The two vessels were positioned near the entrance to a key Russian naval base, according to the forged data. Jun-21 More than 30 senior Polish officials, ministers and lawmakers from political parties, as well as several journalists, had their email accounts hacked, according to reports. Jun-21 REvil, a Russian-linked hacking outfit, targeted Sol Oriens, a small government contractor that works for the Department of Energy on nuclear weapons issues. Jun-21 In 2017, hackers working for Russian intelligence services are thought to have penetrated the internal network of the Netherlands police force. The attack happened as the country was investigating the downing of Malaysia Airlines Flight 17 (MH17) in 2014. May-21 A ransomware attack hit JBS, the world's largest meat processing company, situated in Brazil. Facilities in the United States, Canada, and Australia were all shut down as a result of the attack. REvil, a Russianspeaking cybercrime outfit, was blamed for the attack. May-21 The Health Service Executive, Ireland's national health service, was targeted by a ransomware attack (HSE). The HSE system was shut down by government officials after the attack was discovered. The attackers used the Conti ransomware-as-a-service (RaaS), which is said to be run by a cybercrime group based in Russia. May-21 A ransomware attack was launched on the Colonial Pipeline, the country's major petroleum pipeline. The pipeline was shut off by the energy corporation, which later paid a $5 million ransom. DarkSide, a Russian-speaking hacking gang, is blamed for the attack. May-21 A Russian defense business involved in designing nuclear submarines for the Russian navy was hacked by a Chinese hacking organization. Apr-21 As tensions between the two countries increased in early 2021, Spearphishing cyberattacks were performed against Ukrainian government officials by Russian hackers. Apr-21 In response to charges of Russian government-sponsored doping of Russian athletes, Swedish officials revealed that the Swedish Sports Confederation was hacked by Russian military intelligence in late 2017 and early 2018. Mar-21 After breaking into the email system of the US State Department, suspected Russian hackers seized thousands of emails. Mar-21 In the run-up to Germany's national elections, suspected Russian hackers sought to obtain access to the personal email accounts of German lawmakers. Mar-21 Suspected Russian hackers briefly took control of the websites of Poland's National Atomic Energy Agency and the Ministry of Health in order to broadcast false alerts about a nonexistent radioactive threat, according to Polish security authorities. Mar-21 In unconnected efforts in 2020, Russian and Chinese intelligence agents targeted the European Medications Agency, seizing material linked to COVID-19 vaccines and medicines. Mar-21 Ukraine's State Security Service announced that it has foiled a large-scale attempt by Russian FSB hackers to obtain access to confidential government information. Mar-21 Russian hackers targeted important Lithuanian officials in 2020, according to Lithuania's State Security Department, and used the country's IT infrastructure to launch assaults against organizations working on a COVID-19 vaccine. Feb-21 Russian hackers gained access to a Ukrainian government file-sharing system and sought to spread harmful documents that would infect computers that downloaded them. 
Feb-21 A multi-day distributed denial-of-service attack against the website of Ukraine's Security Service was revealed by Ukrainian officials as part of Russia's hybrid warfare activities in the country. Feb-21 A Russian hacking outfit was behind a four-year assault against French IT companies, according to the French national cybersecurity agency. Dec-20 Facebook discovered that two groups of Russians and one group of individuals associated with the French military were conducting dueling political information operations in Africa using phony Facebook accounts. Dec-20 Russian hackers infiltrated the software supplier SolarWinds and exploited their access to monitor internal operations and exfiltrate data at over 200 businesses around the world, including various US government institutions. Nov-20 Seven firms participating in COVID-19 vaccine research were targeted by one Russian and two North Korean hacking outfits. Oct-20 A Russian cyber espionage operation hacked an undisclosed European government agency. Oct-20 A Russian cyber gang accessed U.S. state and local government networks, as well as aviation networks, according to the FBI and CISA. Oct-20 Attacks on Russian aerospace and defense businesses were carried out by a North Korean cyber outfit. Oct-20 The National Cyber Security Centre of the United Kingdom discovered evidence that Russian military intelligence hackers were plotting a disruptive cyber strike on the 2020 Tokyo Olympics, which were eventually postponed. Oct-20 Six Russian GRU officers were indicted by the US for their roles in hacking incidents such as the 2015 and 2016 attacks on Ukrainian critical infrastructure, the 2017 NotPetya ransomware epidemic, and election meddling in the 2017 French elections, among others. Oct-20 Microsoft and the US Cyber Command worked together to take down a Russian botnet before of the election in the United States. Oct-20 Chinese hackers are suspected of being behind a series of attacks on entities in Russia, India, Ukraine, Kazakhstan, Kyrgyzstan, and Malaysia, according to US government officials. Sep-20 Government institutions in NATO member countries and NATO cooperating countries were attacked by Russian hackers. The phishing scheme infects target PCs with malware that installs a permanent backdoor, and the campaign uses NATO training material as bait. Sep-20 Norway reported that it had successfully guarded against two cyberattacks that targeted the emails of many members and staff of the Norwegian parliament, as well as public employees in the Hedmark region. It eventually blamed the strike on Russia. Aug-20 In preparation for operations on Ukraine's independence day, Ukrainian officials announced that a Russian hacking group had begun a phishing campaign. Russian hackers hacked into news sites and substituted authentic articles with false comments from military and political authorities in an attempt to undermine NATO among Polish, Lithuanian, and Latvian audiences.. Aug-20 Hackers linked to Russian intelligence attempted to steal information relating to the development of the COVID-19 vaccine, according to Canada, the United Kingdom, and the United States. Jul-20 Norway reported that it had successfully guarded against two cyberattacks that targeted the emails of many members and staff of the Norwegian parliament, as well as public employees in the Hedmark region. It eventually blamed the strike on Russia. 
Jul-20 The United Kingdom revealed that it believes Russia tried to sabotage its general election in 2019 by stealing and exposing information relating to the UK-US Free Trade Agreement. Jul-20 According to media sources, the CIA was given permission to launch cyber operations against Iran, North Korea, Russia, and China by a presidential decision made in 2018. Disruption and information leakage were among the operations. Jul-20 Trump acknowledged that he gave the go-ahead for a US Cyber Command operation in 2019 to take the Russian Internet Research Agency down. May-20 Russian hackers linked to the GRU have been exploiting a weakness that might allow them to take remote control of US servers, according to the NSA. May-20 German officials discovered that a Russian hacker outfit linked to the FSB had hacked into the networks of German energy, water, and electricity corporations through their suppliers. Apr-20 Poland claims the Russian government is behind a series of cyber attacks on Poland's War Studies University, which are part of a disinformation effort aimed at sabotaging US-Polish relations. Jan-20 A Russian hacking organization attacked a Ukrainian energy business where Hunter Biden was previously a board member and which has been widely mentioned in the impeachment process in the United States. Dec-19 In a spear phishing attack, Russian government hackers targeted Ukrainian diplomats, government officials, military officers, law enforcement, journalists, and nonprofit organizations.
Research Methodology
The research methodology used in this study is the descriptive qualitative method. Qualitative research, according to Sugiono, is research in which the researcher is a primary instrument, data gathering methodologies are merged, and data analysis is inductive (Sugiono, 2010: 9). Descriptive research, on the other hand, is a study that uses data to try to solve an issue. In descriptive research, the analysis process entails presenting, evaluating, and interpreting data (Narbuko & Ahmadi, 2015). In this research, the author employed a narrative style and a literature review to illustrate the link between the Russia-Ukraine conflict and Russian cyber attacks on Ukraine. The author of this study is looking for a link between political events in the two countries and the cyber attacks that happened from 2020 to 2021.
The Correlation
We can see that there is a probable correlation between Ukraine's or the US's political policies and the dates of cyber assaults in Ukraine, as the following timeline of events shows.
Feb-21 Russia is accused by the US of hindering a peaceful end to the conflict. According to Biden, the United States will "never" accept the takeover of Crimea. Feb-21 Russian hackers obtained access to a file-sharing system used by the Ukrainian government and attempted to circulate malicious documents that would infect computers that downloaded them. Feb-21 Ukrainian officials reported that Russia's hybrid warfare actions in the country included a multi-day distributed denial-of-service attack against the website of Ukraine's Security Service. Mar-21 As long as Putin encourages pro-Russian separatists, EU President Charles Michel announced that the EU would maintain sanctions on Russia. Mar-21 The State Security Service of Ukraine reported that it had thwarted a large-scale attempt by Russian FSB hackers to get access to sensitive government data. Apr-21 Ukraine has voiced its dissatisfaction with the current increase in violence.
Moscow expressed alarm about the region devolving into a "full-fledged battle." Apr-21 Russian hackers attempted spearphishing attacks on Ukrainian government officials as tensions between the two countries grew in early 2021. Jun-21 Ukraine's president has pleaded with the country's Western allies to help. So, while we conclude that there is a link between cyber assaults in Ukraine and the war between Russia and Ukraine based on this analysis, the perpetrators of the attacks must still be legally confirmed. And because such attribution is rarely effective, it is preferable to invest in a strong cyber defense and reliable cyber resilience rather than focusing all efforts on locating the true perpetrator.
Conclusion
We can draw some conclusions about the study's questions based on the research completed. In the context of the war between Russia and Ukraine, the cyber attack in Ukraine is highly helpful for Russia, which is currently under a lot of pressure from the US and NATO over the Ukrainian border issue, regardless of whether it was carried out on Russian government orders or not. Non-state actors or anyone acting on behalf of particular countries or groups can exploit cyberspace, which is extremely harmful for global security because it can be used by third parties to increase tensions between countries. It is tough to track down the actors who carry out cyber assaults, since most victims of these attacks do not want to talk about them because doing so jeopardizes a country's or organization's security credibility. Future work should conduct a more in-depth study with more diverse objects and involve technological research on the types of attacks and their attack media. In addition, research on users' security awareness is required, because, in the security chain theory, the weakest link in the chain is the human.
2022-08-25T15:21:15.164Z
2022-06-06T00:00:00.000
{ "year": 2022, "sha1": "5f5338bd41b3d2dba132a4afd0171ec8e38ae058", "oa_license": "CCBY", "oa_url": "https://jurnalprodi.idu.ac.id/index.php/DP/article/download/1005/845", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9c4160f02d04ccff4b5eed16ea54051af059ee2a", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
80561058
pes2o/s2orc
v3-fos-license
Factors Contributing to Non-suicidal Self Injury in Korean Adolescents
Purpose: Non-Suicidal Self-Injury (NSSI), a highly prevalent behavior in adolescents, refers to the direct destruction of one's body tissue without suicidal intent. To date, the prevalence of adolescent self-injury in South Korea and its associated factors remain unknown. This study aims to determine the prevalence of self-injury in Korean adolescents as well as its associated factors. Methods: We assessed 717 middle school students by means of an anonymous self-report survey. Information about demographic characteristics, lifestyle, anxiety and depression, self-esteem, and parenting behavior was obtained. Data were analyzed using the χ² test, t-test, and multiple logistic regression. Results: NSSI was reported by 8.8% of respondents. Univariate analyses showed associations of exposure to alcohol use, anxiety, depression, self-esteem, and parenting methods with self-injury. In multiple analyses, alcohol use, anxiety, and parental abuse were associated with lifetime self-injury. Conclusion: The rate of NSSI in South Korea was found to be lower than in other countries. As our study suggests that alcohol use, anxiety, and parental abuse are associated with lifetime self-injury, health care providers at school should take these factors into account when developing prevention and intervention programs for adolescents.
INTRODUCTION
Non-Suicidal Self-Injury (NSSI) refers to behavior that intentionally and repeatedly hurts one's own body without the intention to attempt suicide [1]. This contrasts with the basic human behavior of seeking pleasure and avoiding pain. Reported prevalence rates of NSSI varied among studies, but the prevalence of NSSI in adults is known to be 4~6% [2]. On the other hand, prevalence rates of NSSI in adolescents were reported to be quite high. In a study of the prevalence of NSSI based on a survey of a community sample of 1,862 adolescents in three countries, including European countries and the United States, approximately 24% of the adolescents reported at least one NSSI episode [3]. In addition, according to a systematic literature review of the prevalence of NSSI in adolescents in schools and local communities, 18% of adolescents reported engaging in NSSI [4]. These study results show that NSSI in adolescents has recently become widespread even outside clinical situations such as mental disorders in Western countries.
In general, NSSI begins in early adolescence of the ages between 12 and 14 years [5], but because it is a behavior that injures one's own body in secret, it is typically not easily detected by the family or in the school until the problem becomes serious, so it is necessary to pay attention to the risk of NSSI among middle school students who are in early adolescence.The most common type of self-injurious behavior is cutting the skin with a knife, and other major types of NSSI include burning, scratching, hitting the areas of the body, and hindering wound treatment.In the case of adolescents, 80% of those reporting engaging in NSSI are reported to exhibit the behavior of cutting or stabbing the skin with a sharp object [6], so although NSSI does not indicate suicidal intention, it is likely to cause physical damage.In addition, NSSI is a high-risk behavior among adolescents' problem behaviors since it can easily lead to suicidal ideation and is likely to lead to death [7] and it can cause psychological distress to the friends and family members around self-injuring adolescents. NSSI is known as the behavior caused by reciprocal influences of various factors [1].Looking at an explanation of NSSI from the perspective of learning theory [8], the motivations for NSSI are explained in terms of the positive reinforcement as a behavior to attract others' attention and the negative reinforcement of using NSSI as a coping mechanism to alleviate emotional distress or discomfort.In other words, concerning the most common motivation for NSSI in adolescents, it is said that individuals engage in NSSI in order to fill the emotional holes that cannot be easily healed because of past psychological trauma or abuse experiences such as separation from parents, emotional abuse, and sexual abuse in their childhood and draw the attention of others [9,10], or NSSI functions as a means of feeling regulation which temporarily alleviates the negative emotions such as anxiety, depression, and stress when such negative emotions cannot be tolerated [3,6,9,11]. Although high prevalence rates of NSSI in adolescents are eliciting high levels of concern and interest around the world, the research on NSSI in Korea has been limited to studies which considered it as the accompanying syndrome in the research of students with disabilities [12] and some adolescents with psychiatric problems [13].In Korea, there has been no research on the prevalence of NSSI in adolescents in general, and few studies on the risk factors for NSSI have been performed.The reason for this is that NSSI is not easily detected and the conceptual division between NSSI and suicide has been ambiguous for a long time [14].However, the results of many recent foreign studies have indicated that NSSI is a clinical problem that has different characteristics from suicide and requires a distinct diagnosis for it.Thus, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders [15] includes the diagnosis of NSSI disorder and mentions it as an area which requires further research. Therefore, this study aimed to investigate the status of NSSI in Korean adolescents and identify the characteristics of self-harming adolescents in order to provide basic data for understanding the NSSI of adolescents and explore effective ways to conduct an intervention. Study Design This study was a cross-sectional study to investigate the prevalence of NSSI in Korean adolescents and to identify the factors related to the NSSI of adolescents. 
Subjects The subjects of this study were a total of 717 middle school students in first to third grade attending middle schools located in U metropolitan city of Korea who understood the purpose of the study and voluntarily agreed to participate in the study.The number of subjects to achieve the purpose of this study was determined using the G*Power 3.1 program.In the logistic regression analysis, the sample size needed was 696 persons at the significance level ⍺=.05, the statistical power 1-β=.90, the effect size, OR=1.5 [10], and the previously known prevalence of NSSI, ρ1=.1 [4].Thus, the questionnaires were distributed to 766 persons in consideration of the dropout rate (10%) and all of them were collected.Then, the data of 717 persons were used for the final analysis excluding 49 questionnaires (6.3%) which had insufficient responses. 1) Non-suicidal self-injury Non-suicidal self-injury (NSSI) was measured using the Deliberate Self-Harm Inventory [DSHI], which is a self-report scale developed by Gratz [16] to assess self-injurious behaviors.The questionnaire consists of a total of 17 items.Each item is constructed in forms such as "Have you ever cut your wrist, arm or other parts of your body without intention to die?" or "Have you ever burned yourself with a cigarette?" to assess whether or not each type of self-injurious behavior has been done.If the answer to a question is 'yes', participants are required to further answer the questions such as when they did it, how many times they did it, when was the last time they did it, and whether they had been admitted to hospital for the action.However, in this study, the respondents were asked to report only whether or not they had the experience of NSSI and the frequency of NSSI.In this study, to use the experience of NSSI as a dependent variable, the subjects were classified as belonging to the 'NSSI group' if they had at least one experience of self-harming behavior, and as belonging to the 'non-NSSI group' if not.The tool was used in this study after obtaining the permission to use it from the developer of the assessment tool. 
2) Demographic characteristics and health behavior characteristics The demographic characteristics of subjects included the grade level, sex, subjective academic achievement for the past one year, and the status of living together with family members.As well as the health behavior characteristics included smoking, drinking, the level of stress, and the subjective health status.The academic performance for the past one year was inquired about by dividing it into 'high-achieving,' 'medium-achieving,' and 'low-achieving.'With respect to family members and the status of living together with family members, the respondents were asked to describe all the family members and indicate whether they live with each parent for the mother and father.Regarding smoking, the respondents were asked to select "yes" or "no" in response to the question whether they had smoked at least a cigarette during the past 30 days, and with respect to consuming alcohol, they were also asked to select "yes" or "no" in response to the question whether they had had at least one glass of alcoholic drink in the last 30 days.For the level of stress, it was measured on a 5-point scale from 'very much' to 'not at all' about how stressed they were in daily life, but when analysis was conducted, the responses were re-coded into three categories of 'a lot,' 'a little,' and 'not at all.'The subjective health status was also assessed using a five-point scale from 'very frail' to 'very healthy' as to how your health condition was in general, and then the responses were re-coded into three categories of 'weak', 'moderate,' and 'healthy.' 3) Hospital Anxiety and Depression (HAD) The scale developed by Zigmond and Snaith [17] to assess the clinical level of anxiety and depression was translated in Korean and standardized by Oh et al. [18].The scale consists of 14 items, composed of 7 odd-numbered items on anxiety and 7 even-numbered items on depression, and it is a 4-point scale which gives 3 points for 'mostly agree' and 0 for 'strongly disagree.'In the study of Oh et al. [18], Cronbach's ⍺ for anxiety was .89 and Cronbach's ⍺ for depression was .86.In this study, Cronbach's ⍺ for anxiety was .81 and Cronbach's ⍺ for depression was .75.If the total score for each of anxiety and depression is 8 points or more, it was judged to indicate anxiety and depression.We received the permission to use the tool from the researchers who standardized it in Korea. 4) Self-esteem Self-esteem was assessed by the Self-Esteem Scale (SES) developed by Rogenberg [19], and to be freely used by all researchers in the world without the copyright.This instrument is a self-report 5-point Likert scale consisting of 10 items with scores ranging from 1 to 5 points, which in-clude 5 items of positive self-esteem and 5 items of negative self-esteem.The higher the total score, the higher the self-esteem.Cronbach's ⍺ was .86 at the time of development, and Cronbach's ⍺ in this study was .86. 
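The reliability coefficients quoted for these instruments can be reproduced directly from an item-response matrix. The sketch below is illustrative only: the simulated responses are hypothetical stand-ins for a real k-item questionnaire, and the function simply applies the standard formula for Cronbach's ⍺.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n respondents x k items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 200 respondents answering 10 items that share a common
# underlying trait, so the items are positively correlated.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
responses = trait + rng.normal(scale=0.8, size=(200, 10))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

For scales that mix positively and negatively worded items, such as the self-esteem scale above, the negatively worded items would be reverse-coded before ⍺ is computed.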
5) Parenting Attitudes of Parents In order to measure parenting attitudes of the parents of the subjects, we used the questions of the parenting attitude II used in the Korean Children & Youth Panel Survey (KCYPS) (2015) of the National Youth Policy Institute [20].The subdomains of this scale consist of 4 items of parental neglect and 4 items of abuse, and they include items such as "My parents (guardian) are interested in how I live in school and ask me about that" and "If I do something wrong, my parents (guardian) try to beat me unconditionally."Scores are determined by assigning 4 points for 'always,' 3 points for 'mostly,' 2 points for 'seldom' and 1 point for 'not at all'.In the case of neglect, the scores are converted inversely, so the higher the score, the worse care the parents take of the participant.In the case of abuse, a higher score indicates that the respondent receives more abuse.Cronbach's ⍺ in this study was .74. Data Collection The data were collected from Korean middle school students by randomly selecting two schools in each of five wards of U metropolitan city in Korea.The purpose and contents of the study were explained to the school principals and the public health teachers of the schools, and prior approval for the questionnaire survey was obtained.Between November 1, 2015 and December 10, 2015, one class for each grade of the first, second, and third grades of the school was assigned, and we distributed notices for parents to the students to participate in the study and obtained prior informed consent forms from the parents and students.The researcher visited the participants on the appointed date and distributed the questionnaires and it took 10 to 15 minutes for the participants to complete the survey.Since the questionnaires were collected at the site, the response rate was 100%, but a total of 717 questionnaires were analyzed excluding 49 cases which included unanswered items. Ethical Considerations In order to protect the subjects, after we received the approval of the institutional review board of the university of the first author (IRB No. 50).In accordance with the Declaration of Helsinki, the informed consent form included a description of the subject's anonymity and confidentiality, and the subjects were informed that withdrawal would be possible at any time during the course of the survey, if the participant wanted to.The completed questionnaires were submitted to the researcher in the classroom immediately. Statistical Analysis The collected data were processed using the SPSS 23.0 program and analyzed according to the study purpose as follows: First, in order to investigate the prevalence and characteristics of NSSI in adolescents, they were analyzed in terms of the frequency and percentage.The comparison of NSSI type characteristics between male and female adolescents was analyzed by the x 2 test or fisher exact test.Second, the demographic characteristics and health behavior characteristics according to the presence or absence of NSSI experience were analyzed by the x 2 test.The levels of anxiety-depression, self-esteem, and parenting attitude were analyzed by the t-test. 
Third, in order to identify the factors influencing the NSSI of adolescents, multiple logistic regression analysis was performed by entering variables such as drinking, anxiety, depression, self-esteem, and neglectful and abusive parenting attitudes, in which there was a significant difference between the NSSI group and the non-NSSI group.The statistic results were expressed with the Odds ratio and 95% confidence interval. Prevalence and Characteristics of NSSI With respect to gender, 8.0% of boys and 9.7% of girls reported engaging in NSSI.In terms of the grade level, 10.6% of first grade middle school students, 9.0% of second grade students, and 0.6% of third grade students reported engaging in NSSI.Regarding whether or not the participant lives with the father, 8.4% of those living with their father and 14.3% of those not living with their father reported engaging in NSSI.Regarding whether the participant lives with the mother, 9.9% of those living with their mother and 14.3% of those not living with their mother reported engaging in NSSI.In terms of academic performance, 11.1% in the high-achieving group, 11.3% in the lowachieving group, and 7.5% in the middle-achieving group reported engaging in NSSI, and 9.8% of the group with a high level of perceived stress reported engaging in NSSI. 22.2% of the group of smoking participants and 7.1% of the non-smoking group reported engaging in NSSI.There was no statistically significant difference in gender, education level, living with the father, living with the mother, academic performance, stress level, cigarette smoking, and health status between the NSSI group and the non-NSSI group among middle school students. 7.1% of the non-alcohol-consuming group and 34.1% of the alcohol-consuming group reported engaging in NSSI, and the difference was statistically significant (p <.001) (Table 1). NSSI Type Characteristics Carving letters or pictures on one's own body was the most frequent self-harming behavior (49.2%).In addition, 30.2% of adolescents reporting engaging in NSSI reported banging or hitting the head to the point of getting a bruise, 27.0% reported scratching the body severely to the point of getting wounds, 22% reported sticking a sharp object into the skin, 12.7% reported having the skin torn or ripped by biting or having the skin bruised by beating, and 12.7% reported preventing wounds from healing, but the differences between boys and girls were not significant.However, cutting oneself with a knife was the second most frequent method of NSSI (31.7%), and 75.0% of those reporting having done it were female students compared to males of 25.0%, showing a significant gender difference between male and female students (p =.014)(Table 2). 
Comparison of Anxiety, Depression, Self-Esteem, and Parenting Attitude according to the Presence or Absence of NSSI in Adolescents
There was a significant difference in the levels of anxiety, depression, and self-esteem between the non-NSSI group and the NSSI group; the level of anxiety was 5.34±3.28 points and 7.79±4.12 points, respectively (p<.001), the level of depression was 4.80±3.18 points and 6.23±3.66 points, respectively (p<.001), and the level of self-esteem was 30.60±4.94 points and 27.06±5.69 points, respectively (p<.001). The score for uninvolved parenting attitude was 6.30±2.10 and 7.50±2.10, respectively (p<.001), and there was a significant difference between the two groups. The score for abusive parenting attitude was 6.40±2.27 and 8.31±3.13, respectively (p<.001), and there was a significant difference between the two groups (Table 3).
Factors Contributing to NSSI in Adolescents
In the Hosmer-Lemeshow test, which is a goodness-of-fit test for logistic regression, the p-value was .159, so the null hypothesis was retained and the regression model was judged to fit the data. The explanatory power of the regression model for the dependent variable was 20.3%, and the accuracy of the regression model for classifying adolescents into the NSSI group and the non-NSSI group was 91.7%. Among the six independent variables in the regression model, drinking, anxiety, and abusive parenting attitude were significant. With respect to drinking, the rate of NSSI experience was 4.95 times higher in the alcohol-consuming group than in the non-alcohol-consuming group. If the anxiety score increased by 1 unit, NSSI experience increased by 1.12 times. If the score for abusive parenting attitude increased by 1 unit, NSSI experience increased by 1.21 times (Table 4).
DISCUSSION
This study was conducted to investigate the prevalence of NSSI and the factors related to NSSI in Korean adolescents. Of the adolescents participating in this study, 8.8% reported having experienced NSSI. The subjects of this study were first to third grade middle school students, and the prevalence rate was similar to the 7.5% found in the study of Hilt et al. [21], a study of the same age group in North America. However, Hilt et al. [21] is a study of the 12-month prevalence of NSSI, whereas the present study is research on lifetime prevalence, so a direct comparison is difficult to conduct. A systematic review of the lifetime prevalence of NSSI among adolescents in schools and communities in various Western countries revealed that, on average, about 18% of adolescents had NSSI experience [4]. However, depending on whether the prevalence of NSSI is the lifetime prevalence or the prevalence within 12 months, there are differences in the prevalence of NSSI among studies, and considering that the lifetime prevalence is generally higher, the prevalence of NSSI of 8.8% in middle school students in Korea is relatively low compared with the result of Hilt et al. [21]. However, it is difficult to draw conclusions about the prevalence of NSSI in Korean adolescents because data collection was limited to a local area in this study. Considering the risk of NSSI, it is necessary to identify the prevalence of NSSI by including the relevant item in a large-scale survey at the national level, such as the online survey of youth health behavior.
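The odds ratios reported in this section are obtained by exponentiating the logistic regression coefficients. The sketch below is illustrative only: the data are simulated (with coefficients chosen so that exp(β) falls near the reported values of 4.95, 1.12 and 1.21), and statsmodels is used as a stand-in for the SPSS procedure described in the Methods.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data (not the study data): 717 adolescents with the six
# predictors entered in the paper's model. The coefficients used to generate
# the outcome are chosen so that exp(beta) is close to the reported odds
# ratios for drinking, anxiety and abusive parenting.
rng = np.random.default_rng(1)
n = 717
df = pd.DataFrame({
    "drinking": rng.integers(0, 2, n),
    "anxiety": rng.normal(6, 3, n),
    "depression": rng.normal(5, 3, n),
    "self_esteem": rng.normal(30, 5, n),
    "neglect": rng.normal(7, 2, n),
    "abuse": rng.normal(7, 2.5, n),
})
logit = -4.0 + 1.6 * df["drinking"] + 0.11 * df["anxiety"] + 0.19 * df["abuse"]
df["nssi"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit the logistic regression and convert coefficients to odds ratios with
# 95% confidence intervals (exp of each coefficient and of its CI bounds).
X = sm.add_constant(df.drop(columns="nssi"))
model = sm.Logit(df["nssi"], X).fit(disp=False)
odds_ratios = np.exp(model.params).rename("OR")
conf_int = np.exp(model.conf_int())
conf_int.columns = ["2.5%", "97.5%"]
print(pd.concat([odds_ratios, conf_int], axis=1).round(2))
```

Reading the output: for drinking, exp(β) compares the odds of NSSI between drinkers and non-drinkers, while for the continuous scores it gives the change in odds per one-point increase, which is how the figures above should be interpreted.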
In the present study, carving letters or pictures on the body was the most frequent self-injurious behavior. In a study of Canadian adolescents [6], however, cutting the skin (for example, on the wrist; 41%) and hitting oneself (33%) showed the highest rates. In a study of adolescents with NSSI experience in the Netherlands and Belgium [22], banging or hitting the head was the most frequent behavior (12.9%), whereas in this study it ranked third. Although ranked differently across studies, carving letters or pictures on the body, cutting the wrist with a knife, and banging the head were consistently among the behaviors with high rates. These differences may reflect socio-cultural factors, and further research is needed because few prior studies clarify which cultural characteristics shape these behaviors.

In this study, drinking, anxiety, and abusive parenting attitude were the factors that explained NSSI in adolescents. Among them, drinking was the most influential factor. This finding is similar to a study reporting that drinking was directly associated with NSSI in female students [12]. Drinking predicts impulsive behaviors among young adolescents [23] and is likely to lead to other risky health behaviors [3], which may in turn increase the likelihood of NSSI and suicide. In the United States, smoking and marijuana use, rather than drinking, were associated with a higher prevalence of NSSI; in the Netherlands, drinking and smoking were not associated with NSSI, but use of drugs such as marijuana was. In this regard, Giletta et al. [3] note that NSSI is a socially unsanctioned behavior, that smoking, drinking, and drug use are similarly prohibited for adolescents, and that the mechanisms for engaging in each are similar. The factors affecting NSSI may therefore vary depending on whether smoking, drinking, or drug use is socially acceptable and legally available to adolescents in a given country. In Korea, both smoking and drinking are unacceptable behaviors for teenagers, and smoking is subject to stricter social regulation than drinking, which may explain why the present results differ from the interpretation offered by Giletta et al. [3]. Further research is required to understand the interaction between drinking or smoking and NSSI.

In this study, adolescents' experience of abuse by their parents was associated with NSSI, supporting the results of previous studies [24-26]. Adolescents exposed to abuse such as blame or contempt from their parents may be at increased risk of NSSI because self-disparagement deepens and they lack the capacity to soothe themselves or control their impulses. According to the self-punishment hypothesis [1], children exposed to parental abuse who later face stressful situations as adults may engage in self-injurious behavior to control and cope with stress or unpleasant emotions, re-enacting the familiar abusive experiences they repeatedly underwent and learned from their parents.
In addition, anxiety was significantly related to NSSI in this study. This result supports the theory that regulating negative emotions is a primary function of NSSI [1] and is consistent with empirical studies reporting that higher anxiety is associated with higher rates of NSSI [3,6]. Previous studies also reported that depression was strongly associated with NSSI [3,6], but depression was not a significant factor in this study, so replication in future research is needed. Other research reported that self-esteem explained NSSI in female students [12], but self-esteem was not retained as a significant predictor in the regression analysis here. Nevertheless, the group comparisons showed that adolescents who engaged in NSSI reported more negative emotions and lower self-esteem. Thus, to prevent NSSI among adolescents, programs that strengthen positive attitudes toward oneself and teach appropriate ways of dealing with negative feelings appear necessary.

This study has the following limitations. First, recall bias is possible because data were collected with a self-administered questionnaire. Second, because NSSI is a socially unacceptable behavior, social desirability bias may explain why the prevalence of NSSI was lower in this study than in foreign studies. In addition, as a cross-sectional study, this study cannot clearly establish causal relationships between variables. Despite these limitations, this study is meaningful in that, by investigating the status of NSSI and its related factors in early adolescents in Korea, it provides basic data reminding adolescent health care professionals to consider NSSI as one of the health risk behaviors of adolescents, even in community-based settings, and it can serve as primary data for planning youth health promotion projects.

CONCLUSION

The purpose of this study was to investigate the prevalence of NSSI in adolescents and to identify the factors influencing NSSI. The prevalence of NSSI in Korean adolescents was found to be 8.8%, and drinking, anxiety, and abusive parenting attitude together explained about 20% of the variance in NSSI. This study provides basic data for developing nursing interventions to reduce NSSI in adolescents through an understanding of their NSSI experiences.

Based on the study results described above, we present the following suggestions. First, repeated investigations with larger samples, for example through the online survey on youth health behavior, are needed to establish the prevalence of NSSI in adolescents. Second, mental health promotion programs are needed to help adolescents cope constructively with the risk factors for NSSI identified in this study, such as abusive parenting experiences and anxiety.

Table 1. Comparative General Characteristics according to the Presence of NSSI (N=717)

Table 4. Logistic Regression Analysis for Variables Predicting Engagement in Non-suicidal Self-injury (N=717)
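Table 4 corresponds to the enter-method multiple logistic regression reported in the Results. The following is a minimal illustrative sketch in Python of how such an analysis could be run; it is not the authors' code, the variable names and synthetic data are hypothetical, and the study itself analyzed real questionnaire data with a standard statistical package.

```python
# Minimal illustrative sketch (not the authors' code) of an enter-method
# multiple logistic regression with NSSI status as the binary outcome,
# reporting odds ratios (OR) and 95% confidence intervals.
# All variable names and the synthetic data below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 717  # sample size matching the study

# Hypothetical predictor data standing in for the questionnaire scores.
df = pd.DataFrame({
    "drinking": rng.integers(0, 2, n),          # 0 = no, 1 = yes
    "anxiety": rng.normal(6, 4, n).clip(0),
    "depression": rng.normal(5, 3.5, n).clip(0),
    "self_esteem": rng.normal(29, 5, n),
    "neglectful_parenting": rng.normal(6.5, 2.0, n),
    "abusive_parenting": rng.normal(6.6, 2.6, n),
})

# Hypothetical outcome generated only so the sketch runs end to end.
linpred = -4.0 + 1.6 * df["drinking"] + 0.11 * df["anxiety"] + 0.19 * df["abusive_parenting"]
df["nssi"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(df.drop(columns="nssi"))  # enter all predictors at once
y = df["nssi"]
model = sm.Logit(y, X).fit(disp=False)

# Exponentiating coefficients and their confidence limits gives ORs and 95% CIs.
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "95% CI lower": np.exp(ci[0]),
    "95% CI upper": np.exp(ci[1]),
})
print(or_table.round(2))
```

Exponentiated coefficients match the interpretation given in the Results: an OR of 1.12 for anxiety, for example, means that each one-point increase in the anxiety score multiplies the odds of NSSI by 1.12.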