3530645 | pes2o/s2orc | v3-fos-license
EFFECTS OF ASCORBIC ACID ON THE VASOACTIVE INTESTINAL PEPTIDE SYNTHESIS IN THE ILEUM SUBMUCOUS PLEXUS OF NORMAL RATS
Background - Aging is a deteriorative process that affects the gastrointestinal tract, causing changes in the number and size of neurons of the enteric nervous system. The action of free radicals on enteric neurons is favored by the significant reduction of antioxidants. Aim - To evaluate the effect of ascorbic acid supplementation on the neurons that produce vasoactive intestinal peptide (VIP) in the submucous plexus of the ileum of normal rats over a period of 120 days. Methods - Fifteen rats were divided into three groups: untreated controls at 90 days of age, untreated controls at 210 days and ascorbic acid-treated rats at 210 days. Ascorbic acid was given for 16 weeks from the 90th day of age by adding it to the drinking water (1 g/L, prepared fresh each day). The ileums were processed according to the immunohistochemistry technique for whole-mount preparations in order to detect VIP immunoreactivity in the cell bodies and nerve fibers of the submucous plexus. We verified their immunoreactivity and measured the cellular profile of 80 cell bodies of VIP-ergic neurons from each studied group. Results - Ascorbic acid supplementation did not alter physiological parameters such as water intake and food consumption in the three studied groups. We observed a significant increase in the cellular profile of VIP-ergic neurons in the untreated controls at 210 days when compared with the untreated controls at 90 days. The cellular profile of VIP-ergic neurons in the ascorbic acid-treated rats at 210 days was larger than that observed in the other groups. Conclusion - Ascorbic acid had a neurotrophic effect on VIP-ergic neurons of the ileum after 120 days of supplementation. HEADINGS - Ascorbic acid. Vasoactive intestinal peptide. Ileum. Submucous plexus. Rats. Department of Morphophysiological Sciences, State University of Maringá, Maringá, PR, Brazil. Address for correspondence: Dr. Jacqueline Nelisis Zanoni – Av. Colombo, 5790 – 87020-900 – Maringá, PR, Brazil. E-mail address: jnzanoni@uem.br
INTRODUCTION

Aging has been defined as a series of changes taking place during life that reduce the capability of an organism to survive. It is associated with metabolic, neuroendocrine, genetic and immunological changes that may contribute to the process of cellular death by apoptosis or necrosis (2). The nature of the aging process has been the subject of considerable speculation, including the aging of DNA codification, progressive deterioration in the synthesis of proteins and other macromolecules, attacks on the immune system and the action of free radicals (15). Free radicals are molecules with an extra unpaired electron in their external orbit, usually derived from oxygen. They originate in the mitochondria through energy production from glucose and O2 and are immediately neutralized by the enzymes within the mitochondria (13,18). Free radicals play a role in the aging process because they directly and continually damage cells and tissues and have a cumulative action (20).
The consequence of free radical action is called cellular oxidation, which is considered the initial stage of several diseases. This process can only be neutralized by the presence of antioxidant vitamins (12). Ascorbic acid is one of the vitamins used to fight off free radicals. In most species, hepatic glucose metabolism includes the synthesis of ascorbic acid. Human beings, however, do not synthesize it because they lack the enzyme gulonolactone oxidase in this metabolic pathway. Daily intake of ascorbic acid is therefore necessary to prevent illnesses such as scurvy (14).
Changes in gastrointestinal neuromuscular function due to the aging process have been seen in animal models as well as in human beings, with evidence of dysphagia and constipation (10). The enteric nervous system (ENS), which controls gastrointestinal functions, is affected by the aging process, and a reduction in the number of myenteric neurons has been observed in several intestinal segments (8,16,19). Some pathological conditions (such as diabetes mellitus) accelerate the aging process, as can be seen in the alteration of the number and size of enteric neurons (6,25,26,27).
Jacqueline Nelisis ZANONI and Priscila de FREITAS
GASTROENTEROLOGIA EXPERIMENTAL
Our objective was to study the VIP-ergic submucous neurons of the ileum of normal rats with and without ascorbic acid supplementation for a period of 120 days.
MATERIAL AND METHODS
The experimental protocols used in this study are in accordance with the ethical principles in animal research prescribed by the Brazilian College of Animal Experimentation (COBEA).
Fifteen male Wistar rats (Rattus norvegicus), weighing about 300 g, were used. The rats were divided into three groups: untreated control at 90 days of age (C), untreated control at 210 days (C2) and ascorbic acid-treated rats at 210 days (CA). Ascorbic acid was given for 16 weeks from the 90th day of age by adding it to the drinking water (1 g/L, prepared fresh each day) (24). The rats were kept in individual metabolic cages in a room with a controlled photoperiod (6:00 a.m.-6:00 p.m.) at room temperature (RT) (24 ± 2°C), with water and food (Nuvital lab chow) ad libitum.
On the day of sacrifice, all rats were anesthetized intraperitoneally with thiopental (40 mg/kg body weight). Blood from groups C2 and CA was collected by cardiac puncture to measure blood levels of ascorbic acid (11). Rats from groups C2 and CA were monitored during the four months of the experiment.
Immunohistochemistry and morphological analysis
After an abdominal incision, ileum segments were collected, rinsed in 0.01 M phosphate-buffered saline (PBS), pH 7.4, and fixed in Zamboni's fixative for 18 hours at 4°C (23). The segments were processed according to the immunohistochemistry technique for whole-mount preparations (4) in order to detect VIP immunoreactivity (VIP-IR) in the submucous plexus.
Soon after, the segments were opened along the mesenteric border, washed, dehydrated, diaphanized in xylene and rehydrated. Afterwards, they were placed in 0.01 M PBS, pH 7.4. Samples were reduced with the aid of a circular sectioner, and the mucosa and muscle layers were dissected away under a stereomicroscope. The isolated submucous layer was incubated with polyclonal rabbit anti-VIP (Peninsula Labs, USA) overnight at RT at 1:200 under shaking. The samples were washed in PBS and then incubated with the secondary FITC-conjugated antibody (Peninsula Labs, USA) for 1 h at 1:100 (RT) under shaking. In the control samples, the primary antibody was replaced by goat serum. The whole-mounts were placed on glycerol-coated slides.
Immunofluorescence was analyzed on a trinocular optical microscope with a 40X objective, equipped with fluorescence filters (FITC) and an image-capture kit (IPPWIN-DCAM). The images were taken with a high-resolution camera, transferred to a personal computer and recorded on a compact disc.
The area (µm²) of 80 cell bodies of VIP-immunoreactive (VIP-IR) neurons from each studied group was measured.
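The morphometry software used for these measurements is not specified in the text. Purely as an illustrative sketch of this kind of cell-body area measurement (the thresholding approach, the scikit-image toolchain and the pixel size below are assumptions, not the authors' pipeline):

```python
# Hypothetical sketch of cell-body area measurement; not the authors' actual pipeline.
# Assumes a fluorescence image in which VIP-IR cell bodies are brighter than the
# background, and an assumed pixel size of 0.25 um/pixel (placeholder value).
import numpy as np
from skimage import filters, measure, morphology

def cell_body_areas(image, um_per_pixel=0.25):
    """Return the area (in square micrometres) of each segmented cell body."""
    threshold = filters.threshold_otsu(image)                     # global threshold
    mask = morphology.remove_small_objects(image > threshold, min_size=50)
    labels = measure.label(mask)                                  # connected components
    pixel_area = um_per_pixel ** 2
    return [region.area * pixel_area for region in measure.regionprops(labels)]

# Synthetic example: a blank field with one bright 20 x 20-pixel "cell body".
img = np.zeros((200, 200))
img[90:110, 90:110] = 1.0
print(cell_body_areas(img))   # -> [25.0] with 0.25 um/pixel
```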
RESULTS
The plasma concentration of ascorbic acid was 26.7 ± 2.7 in group C2 and 48.8 ± 3.8 in group CA, a significant difference (P < 0.05). There was no difference in water intake or food consumption among the three studied groups (P > 0.05). The results are shown in Figure 1.
Cell bodies and fibers of VIP-IR neurons of the submucous plexus were found in all studied groups (Figure 2). The lowest immunoreactivity was observed in neurons from group C. Group C also presented the neurons with the smallest cell profile (108 µm²), whereas group CA presented the neurons with the largest cell profile (1171.9 µm²).
Figure 3 shows the areas of VIP-IR neurons in the three studied groups. The 4-month aging period (group C2) caused an increase of 50.8% in the area of VIP-IR neurons when compared with that observed at 90 days of age (group C) (P < 0.001). The cellular profile of VIP-IR neurons in group CA was 59.2% and 17% larger than in groups C and C2, respectively (P < 0.001).
Most VIP-IR neurons in group C had cell body areas distributed in the following ranges: 101-400 µm² (86.2%), 401-600 µm² (12.5%) and larger than 601 µm² (1.3%) (Figure 4). We found a reduction of 38.8% in the proportion of VIP-IR neurons of group C2 in the 101-400 µm² range when compared with group C (Figure 4). Increases of 18.8% and 20% in the proportions of VIP-IR neurons in the 401-600 µm² and >601 µm² ranges, respectively, were also found when group C2 was compared with group C; 62.6% of neurons from group CA had a cell profile above 601 µm² (Figure 4).

DISCUSSION

Supplementation with antioxidant agents is very important, since the aging process compromises the enzymes and antioxidant agents that fight off free radicals, causing them to accumulate. Our research showed an increase in the area of the cell bodies of submucous VIP-ergic neurons over the 4-month period. There was no other work reporting the decrease in the ...
CHA et al. (3) studied the changes that take place in neuropeptide-containing neurons in the cortex of old rats and noticed a reduction in the number of VIP-IR neurons and of those expressing neuropeptide Y. The increase in cell profile observed in age-related VIP-ergic neurons might be a compensatory effect caused by an eventual reduction in the number of these neurons. The increase in cell profile was accompanied by an increase in immunoreactivity, indicating that the cells increased their synthesis of the neurotransmitter VIP. Another hypothesis for the increase in the profile of submucous VIP-ergic neurons is that it might be related to the loss of myenteric neurons that takes place during the aging process. Although the enteric plexuses are spatially separated, the connections between them suggest that they form an integrative unit (7). SEE et al. (21) observed an increase in the cell body volume of submucous VIP-ergic neurons after myenteric denervation. The authors speculated that, under normal conditions, the myenteric plexus would have an indirect inhibitory action over the submucous plexus to which it is connected. The removal of this inhibitory input to the submucous neurons could result in increased VIP production, leading to an increase in the area of the cell bodies of submucous VIP-ergic neurons.
Our study showed an increase of 17.02% in the area of submucous VIP-ergic neurons of rats supplemented with ascorbic acid when compared with controls of the same age without supplementation (group C2), and an increase of 59.20% when compared with controls at 90 days of age (group C). Ascorbic acid may have had a neurotrophic effect, generating an increase in the cellular profile. This increase may be beneficial, since this neurotransmitter is believed to act directly on the intestinal mucosa, increasing intestinal secretion (5). As for the absorptive function of the rat intestine, there is considerable evidence of an age-related reduction in the transport of water, sugars and amino acids (1).
Summing up, a 120-day aging period caused an increase in the area of VIP-ergic neurons of the submucous plexus of the rat ileum. Ascorbic acid supplementation had a neurotrophic effect on these neurons.
FIGURE 2 - Immunostaining of VIP-positive neurons in the submucous plexus from untreated control at 90 days (A), untreated control at 210 days (B) and ascorbic acid-treated rats (C). Calibration bars: 10 µm
FIGURE 3 - Means and standard errors of cell body areas of VIP-IR neurons from: untreated control at 90 days (C), untreated control at 210 days (C2) and ascorbic acid-treated rats (CA). All groups are significantly different from one another (P < 0.001). n = 5 rats per group
added: 2018-02-26T18:12:10.522Z | created: 2005-07-01T00:00:00.000
{
"year": 2005,
"sha1": "ed8044918a58c7a12fafe14880467a9828866025",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ag/a/SCcfB894pq6MZVfjDyLjcTg/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ed8044918a58c7a12fafe14880467a9828866025",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
214152556 | pes2o/s2orc | v3-fos-license
Biomass Assessment of Floating Aquatic Plant Eichhornia crassipes - A Study in Batticaloa Lagoon, Sri Lanka using Sentinel 2A Satellite Images
Batticaloa Lagoon is one of the estuaries in the country that is frequently affected by floating aquatic plants, mainly Eichhornia crassipes. The present study aimed to develop a relationship between field-measured and satellite-derived biomass that can satisfactorily estimate the spatial distribution of green and dry biomass of the floating aquatic plants in Batticaloa Lagoon. Six cloud-free Sentinel-2A images were acquired for the period of March 2017 to February 2018. Real-time field measurements of the biomass of floating aquatic plants were obtained at 12 locations at two-week intervals. A buffer zone of 3 km was created around the lagoon to obtain the Land Use/Land Cover (LULC) distribution and to study the influence of the surrounding LULC on floating aquatic plants. A number of band ratios and indices were developed using Sentinel-2A images to establish relationships with the field-estimated biomass. The LULC analysis revealed that paddy was the most abundant land use in the study area and that the cultivation is highly seasonal, which affects the distribution of floating aquatic plants in the dry and wet seasons. Among 21 tested band ratios and indices, the normalized difference red edge index (NDREI, r² = 0.78) and the band ratio B8/B4 (r² = 0.67) for the green biomass, and the band ratio B3/B4 (r² = 0.73) and NDREI-Narrow (r² = 0.61) for the dry biomass, showed strong positive correlations with the field biomass in the dry and wet seasons. The temporal distribution of the estimated biomass also confirmed the potential of Sentinel-2A images as a source of data for monitoring floating aquatic plants in the lagoon, owing to the high spatial and spectral resolution of the NIR and red edge bands. These estimated biomass maps can be used to identify the locations affected by aquatic plants in order to take proper control measures.
INTRODUCTION
Coastal and inland shallow aquatic ecosystems provide habitats for a wide range of macrophytes, including floating, submerged and emergent aquatic plants. Recent studies and reports have revealed that both natural and artificial water bodies in Sri Lanka are infested with aquatic plants (Sobadini, 2006). Around 15 invasive plant species have been identified over the past few decades as responsible for the filling of wetlands in Sri Lanka. Among them, Eichhornia crassipes, Pistia stratiotes and Salvinia molesta are widely distributed and cause more damage to wetlands than other invasive species. Batticaloa lagoon is one of the estuaries subjected to significant pollution through the spread of the invasive alien floating aquatic plant, the water hyacinth (Eichhornia crassipes). Spreading of this invasive alien species is an indication of sedimentation and eutrophication of the lagoon as a result of degradation of the ecosystem due to unplanned development works in the lagoon area (IUCN and CEA, 2006). Vegetation in shallow aquatic systems can be determined and monitored by field surveys and in situ measurements in terms of biomass (BM), which is defined as "the mass per unit area of living plant material" (Downing and Andreson, 1985; Madsen, 1993). Several techniques have been used for biomass measurement, including fresh weight, dry weight and carbon weight per unit area (Madsen, 1993). Although field estimation of aquatic plants provides reliable results, the methods are time consuming, labour intensive and cost inefficient, especially when the water bodies have large surface areas. Remote sensing, however, can provide reliable information about the spatial distribution of floating aquatic plants with good spatial and temporal coverage, which can be used effectively for mapping based on canopy spectral response (Villa et al., 2017). The launch of the Sentinel-2A Multispectral Imager (Sentinel 2A-MSI) in 2015 opened up great potential in lake remote sensing (Cho et al., 2008). The imagery, with 10 m, 20 m and 60 m spatial resolution, gives an opportunity to study even small water bodies. The NIR and NIR narrow bands (8 and 8a), the red band (4) and the red edge bands (5, 6 and 7) provide the potential for estimating the chlorophyll content of vegetation (Cho et al., 2008; Ha et al., 2017; ESA, 2017). Table 1 presents the specific band designations of Sentinel 2A-MSI. Simple band ratios and vegetation indices are remote sensing approaches that use spectral bands sensitive to plants to estimate biomass (Jensen, 2004; Ayanlade, 2017). Chlorophyll a and b in plants show high absorption in the blue and red regions and high reflectance in the NIR region. The red edge band shows the maximum vegetation reflectance between the red and NIR spectrum (Jensen, 2004). Studies show that these visible and NIR regions can be related to various biomass measurements of plants. In the study of Toming et al. (2016), ratios developed from band 3 (Green, 560 nm), band 4 (Red, 665 nm) and band 6 (Red edge, 740 nm) of S2A were used to estimate chlorophyll in lakes. The findings of Villa et al. (2017) show that healthy water hyacinth has a typical vegetation spectral reflectance pattern, with a green reflectance peak at around 557 nm, red absorption near 660-670 nm and an NIR peak above 730 nm.
Plants in the flowering stage showed a very distinct spectral reflectance pattern in the blue and red regions compared with the green region, and their NIR reflectance was comparatively lower than that of the vegetative stage. These ratios and indices can be used to correlate diverse plant characteristics such as vegetation cover, vegetation type, water content, field biomass and chlorophyll amount (Cho et al., 2008; Villa et al., 2017) in both aquatic and terrestrial vegetation.
Therefore, the objectives of the study were to develop relationships between field-measured biomass and band ratios and indices derived from S2A satellite images to estimate the biomass of Eichhornia crassipes present in Batticaloa lagoon, and to develop maps of green and dry biomass distribution in the dry and wet seasons.
Study area
Batticaloa lagoon is located in the Eastern Province of Sri Lanka between 7°24'-7°46' N and 81°35'-81°49' E. The lagoon is about 56.8 km long along its meridian axis, and its width varies widely from 0.5 km to 4 km. The Batticaloa estuary is the second largest brackish water system in Sri Lanka (Figure 1). Three distinct longitudinal zones can be recognized in this lagoon: the upper saline zone, the middle transitional zone with salinity fluctuations, and the lower zone, which is predominantly fresh water (JUGAS, 2010). The climate of the study area comprises a wet season during the North-East monsoon (October to February), characterized by high mean precipitation (1250 ± 230 mm), and a prolonged dry season during the South-West monsoon (March to September), marked by low mean precipitation (300 ± 23 mm) (Punyawardena, 2010).
Sampling and field measurements
Green biomass (GBM) can be used as an index to measure the activity of vegetation (Jensen, 2004). Standing macrophyte biomass can be estimated by harvesting randomly placed samples of a known area (Downing and Andreson, 1985). A number of field visits were conducted initially to select the sampling points. Twelve sampling sites (Figure 1) were chosen from locations known for the occurrence of floating aquatic plants (FAP) (Eichhornia crassipes) in the lagoon. A sampling plot of 30 x 30 m was selected at each site. Measurements were made in triplicate at each point by randomly placing a 1 m² wooden frame in the immediate vicinity of the sampling point. The coordinates of the field points were obtained using a handheld GPS (Garmin eTrex 30) and later converted into a point vector data layer for further processing. The sampling points were selected along the lagoon shore considering the presence of FAPs. Field data were collected in real time to coincide with the S2A images during the period of March 2017 to February 2018, at two-week intervals. The fresh weight of the floating aquatic plants in each quadrat was measured to obtain the GBM of the FAPs. Following Madsen (1993), all the plants in the quadrat were dug out manually together with their roots, washed free of debris, drained well, and their fresh weight was recorded. Samples were sun-dried to a constant weight to determine dry biomass (DBM), to the nearest 0.1 g. According to Madsen (1993), the fresh and dry weights of whole plants are considered the GBM and DBM of the aquatic plants. The weight of all plants within a quadrat was converted to fresh weight (kg/m²) and dry weight (g/m²).
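As a minimal sketch of the biomass arithmetic described above (the 1 m² quadrat and triplicate sampling follow the field protocol in the text; the sample weights below are invented for illustration):

```python
# Minimal sketch of the quadrat biomass arithmetic described in the text.
# The triplicate weights below are invented placeholder values.
from statistics import mean

QUADRAT_AREA_M2 = 1.0  # 1 m x 1 m wooden frame, as in the field protocol

def biomass_per_m2(weights_kg, quadrat_area_m2=QUADRAT_AREA_M2):
    """Average of triplicate quadrat weights, expressed per square metre."""
    return mean(weights_kg) / quadrat_area_m2

fresh_kg = [7.8, 8.4, 8.1]      # hypothetical triplicate fresh weights (kg)
dry_kg = [0.62, 0.70, 0.66]     # hypothetical triplicate dry weights (kg)

gbm = biomass_per_m2(fresh_kg)            # green biomass, kg/m2
dbm = biomass_per_m2(dry_kg) * 1000.0     # dry biomass, g/m2
print(f"GBM = {gbm:.1f} kg/m2, DBM = {dbm:.0f} g/m2")
```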
Image acquisition, pre-processing and classification
Cloud-free S2A (MSI) images with 10, 20 and 60 m resolution in UTM projection over Batticaloa Lagoon were acquired from the USGS website (http://earthexplorer.usgs.gov/) (Table 2). The multispectral images were used along with a vector layer of Batticaloa Lagoon, obtained from 1:50,000-scale topographic maps of the Survey Department, Sri Lanka, to subset the study area. A buffer zone of 3 km was created around the lagoon to obtain the Land Use/Land Cover (LULC) distribution around the lagoon area and to study its influence on the fluctuation of FAPs. Unsupervised classification is one of the techniques that has given good results in identifying invasive alien plants in water bodies (Toming et al., 2016) along with the land use pattern. The images were classified using unsupervised classification into 36 classes. Subsequently, with the aid of field observations and high-resolution Google Earth images, LULC classes were identified and similar classes were merged to obtain the 10 LULC classes dominant in the area.
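The software used for the unsupervised classification is not named in the text. As an illustrative sketch only (the band file names, the rasterio/scikit-learn toolchain and the use of k-means as a stand-in for the unspecified clustering algorithm are assumptions), a 36-cluster classification of a Sentinel-2A subset could look like this:

```python
# Illustrative sketch: 36-cluster unsupervised classification of a Sentinel-2A subset
# with k-means. File names, band choice and toolchain are assumptions, not the
# authors' workflow.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

BAND_FILES = ["B02.tif", "B03.tif", "B04.tif", "B08.tif"]  # hypothetical 10 m bands

def read_stack(files):
    """Read single-band rasters and stack them into an array of shape (rows, cols, bands)."""
    bands = []
    for path in files:
        with rasterio.open(path) as src:
            bands.append(src.read(1).astype("float32"))
    return np.stack(bands, axis=-1)

stack = read_stack(BAND_FILES)
rows, cols, n_bands = stack.shape
pixels = stack.reshape(-1, n_bands)

# 36 spectral clusters, later merged into 10 LULC classes with field observations
# and Google Earth imagery, as described in the text.
labels = KMeans(n_clusters=36, n_init=10, random_state=0).fit_predict(pixels)
classified = labels.reshape(rows, cols)
```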
Development of remote sensing based indices and regression models
All possible band ratios and vegetation indices based on the spectral regions that are sensitive to vegetation (blue, green, red, NIR and red edge) in the estimation of biomass and chlorophyll content were evaluated in this study. Band ratios and indices based on the studies of Cho et al. (2008), Villa et al. (2017) and Toming et al. (2016) were used. SPSS version 16.0 was used for the correlation and regression analysis between field measurements and remote sensing based indices. The most suitable models for predicting the biomass of Eichhornia crassipes were used to develop biomass maps of aquatic plants in Batticaloa Lagoon.
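As a hedged sketch of how a band ratio or red-edge index can be related to field biomass by simple regression (the NDREI form used below, (B8 - B5)/(B8 + B5), follows common red-edge usage and is an assumption, since the paper does not print the exact expression; the reflectance and biomass values are invented, and the actual analysis was run in SPSS):

```python
# Hedged sketch: relating a red-edge index to field biomass with linear regression.
# The NDREI form (B8 - B5)/(B8 + B5) is assumed; all sample values are invented.
import numpy as np
from scipy import stats

def ndrei(nir_b8, red_edge_b5):
    """Normalized difference red-edge index, assumed form (B8 - B5) / (B8 + B5)."""
    nir_b8 = np.asarray(nir_b8, dtype=float)
    red_edge_b5 = np.asarray(red_edge_b5, dtype=float)
    return (nir_b8 - red_edge_b5) / (nir_b8 + red_edge_b5)

# Hypothetical reflectances at the 12 sampling points and their field GBM (kg/m2).
b8 = np.array([0.42, 0.38, 0.45, 0.31, 0.36, 0.40, 0.48, 0.33, 0.37, 0.44, 0.39, 0.35])
b5 = np.array([0.18, 0.19, 0.16, 0.22, 0.20, 0.18, 0.15, 0.21, 0.19, 0.17, 0.18, 0.20])
gbm = np.array([8.9, 7.6, 9.8, 5.4, 6.9, 8.2, 10.1, 5.9, 7.1, 9.4, 7.8, 6.4])

index = ndrei(b8, b5)
fit = stats.linregress(index, gbm)
predicted = fit.intercept + fit.slope * index

# r2 of the fit and the root-mean-square error expressed as a percentage of the mean
# field biomass, a rough analogue of the error criterion described in the text.
error_percent = 100 * np.sqrt(np.mean((gbm - predicted) ** 2)) / gbm.mean()
print(f"r2 = {fit.rvalue ** 2:.2f}, error = {error_percent:.1f}% of mean biomass")
```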
Seasonal distribution pattern of aquatic plants and LULC around the lagoon
The Sentinel-2A MSI sensor with 10 m spatial resolution provides a great benefit for LULC classification (Ha et al., 2017). It can be used to derive accurate and current spatial land cover information and to monitor land cover changes over the seasons on land and in water bodies. Sentinel-2A satellite images at 10 m spatial resolution for the period 2017/2018 were used to identify the distribution of FAPs and the LULC pattern around Batticaloa Lagoon. The classification of aquatic plants in the S2A images for 2017/2018 revealed (Figure 2) that the distribution varies strongly with the seasons (5 km² and 14 km² during the dry and wet periods, respectively). The classified images showed that the aquatic plants are abundant in the southern part of the lagoon, where the lagoon water is fresh (IUCN and CEA, 2006). The LULC pattern of the buffer zone for 2017/2018 was classified into ten classes (Figure 3) according to the land cover distribution in the study area. The LULC classification revealed that most of the area surrounding the lagoon is covered by paddy lands and built-up areas (including landfill and prawn farming), comprising 53% and 39% of the total, respectively. Inland water bodies, natural vegetation, sand dunes, sea shore and mangrove/marsh areas cover only a small portion of the land area. Further, Figure 3 shows that the cultivation of paddy lands around the lagoon area is seasonal. The paddy lands in the southern part are cultivated during the Yala season using fresh water from the lagoon, while the paddy lands in the northern part are not cultivated due to the high salinity of the lagoon (Sugirtharan et al., 2017). The paddy lands in the southern part are left fallow in the Maha season during the North-East monsoon due to flooding, whereas the paddy lands in the northern part of the lagoon are cultivated in the Maha season as rainfed cultivation, where the chance of flooding is lower.
Field observations and estimation of biomass
The growth of the aquatic plants showed a cyclic pattern from the vegetative to the flowering stage throughout the year. The plants are in the vegetative stage during the wet season (NE monsoon), reach the flowering stage in the middle of the dry season and return to the vegetative stage with the onset of the NE monsoon rains. Plants are subject to decay due to salinity intrusion and water level reduction in places such as Manmuani, Kurukalmadam, Onthachimadam and Kothiyapulai during the dry season. The collected biomass of FAPs relates mainly to GBM and DBM in the dry and wet seasons. The field GBM of the aquatic plants varied from 4.9 to 10.3 kg/m² and from 5.3 to 13.7 kg/m² during the dry and wet seasons, respectively. Likewise, the DBM of the aquatic plants varied from 0.365 to 0.935 kg/m² and from 0.447 to 1.107 kg/m² during the dry and wet seasons, respectively. Table 3 shows the average field biomass of the aquatic plants in the dry and wet seasons. The highest values of GBM were obtained at Kittanki and the lowest at Kurukalmadam, while the highest values of DBM were obtained at Kittanki and the lowest at Kothiyapulai, in both seasons. The LULC in the buffer zone of Batticaloa lagoon (Figure 3) shows that Kittanki is surrounded by paddy lands whose cultivation depends solely on lagoon water in the dry season. Therefore, there is a high possibility of nitrate and phosphate enrichment of the lagoon water due to the application of organic and inorganic fertilizers on farm lands (Sugirtharan et al., 2017). Nutrient enrichment of the water enhances the growth and biomass of aquatic plants along the lagoon shore. The location is also highly exposed to lagoon flooding in the wet season, when the possibility of nutrient accumulation is high due to surface runoff from adjacent land uses (paddy and built-up areas). In contrast, Kurukalmadam and Kothiyapulai are less affected by human interventions and are surrounded by natural mangrove systems. These points also act as a salinity transition zone; therefore, nutrient availability for plant growth is limited and the aquatic plants do not reach the maturity stage of their growth cycle due to increasing salinity.
Among the tested remote sensing algorithms, two single band ratios (B3/B2 and B3/B4) and one vegetation index (NDREI) showed strong positive correlations with the in situ data for GBM (r² = 0.77, 0.76 and 0.78, respectively), while B3/B4 and NDREI did so for DBM (r² = 0.73 and 0.71, respectively) in the dry season. Studies show that the MSI on S2A has a high potential for monitoring chlorophyll content in coastal and inland waters owing to its red-edge band (band 5: 705 nm) and red band (band 4: 665 nm) in the Chl-a absorption region (Toming et al., 2016), and Cho et al. (2017) obtained better results using empirical relationships between chlorophyll a and reflectance in the "red edge" of the visible spectrum. Ha et al. (2017) reported that the narrow NIR band gives more significant values, compared with the spectral reflectance of the visible and NIR regions, for estimating the chlorophyll of aquatic vegetation in the wet season. The standard error of estimation for the strongly correlated band ratios and indices above was used to select the best ratio for estimating biomass and producing maps of the aquatic plants in the study area. According to the mean standard error (MSE) values, the NDREI and the B8/B4 ratio showed the lowest values, 13% and 6%, for GBM in the dry and wet seasons, respectively, while the B3/B4 ratio and NDREI_narrow showed the lowest values, 12% and 6.5%, for DBM in the dry and wet seasons, respectively. Studies show that ratios with a small MSE, in the range of 7.5% to 15% of the in situ data, can be used for remote sensing based estimation in water bodies (Ha et al., 2017). The GBM showed high values at the fully matured vegetative stage (February) and flowering stage (August to September) and lower values at the emergent stage (May). A similar pattern was observed in the field-measured GBM, which was low in the pre-monsoon months (March to April), increased to a peak in the mid to late dry period (June to September), gradually decreased during the second inter-monsoon period (October to November) and then increased again in the NE monsoon season, lasting up to the end of the wet season (December to February).
The spectral reflectance of aquatic plants greatly depends on morphological features and growth stage (Cho et al., 2008). According to the studies of Cho et al. (2008) and Villa et al. (2017), spectral reflectance varies with the vegetation stage in water hyacinth (Eichhornia crassipes). Healthy water hyacinth shows a green reflectance peak at around 557 nm, red absorption near 640-670 nm and an NIR peak near 840-880 nm. However, when the plants are at their flowering stage, the spectral reflectance pattern shows a transition from NIR to red edge reflectance. It is therefore expected that the NDREI derived from the NIR and red edge bands to estimate GBM was highly correlated with in situ BM during the dry season, when the plants are at their flowering stage. In contrast, the red and NIR band ratio shows a high correlation with in situ BM in the wet season (Figure 6), when the plants are in the vegetative stage (Villa et al., 2017). The DBM also varies according to the seasonal pattern, since the discharge of nutrients from the paddy lands in the buffer zone and the effluent of rice mills affect the growth and distribution of plants in the lagoon system. The LULC distribution in the buffer zone of the lagoon (Figure 3) revealed that the lagoon shore area is largely occupied by paddy cultivation. Rice mills can also be noticed in the buffer zone (Thuraineelavanai), where the lagoon water can be polluted by agricultural wastes (nitrate and phosphate) and rice mill effluents (Pradhan and Sahu, 2004). According to Figure 6, the distribution of biomass is high in places such as Kittanki, Mandoor, Annamalai and Palugamam, which are exposed to agricultural pollution, and Thuraineelavanai, which is threatened by contamination with rice mill effluent.
The seasonal distribution of biomass reveals that there is a significant difference between the average GBM in the dry and wet seasons and between the average DBM in the dry and wet seasons (p < 0.05). This shows that the biomass content of the aquatic plants depends on the seasonal availability of nutrients in the lagoon water. This distribution can be seen clearly in the BM maps estimated from the S2A scenes (Figures 6 and 7) for the dry season (May to September) and the wet season (January and February). High BM values in the dry season occur in areas in close proximity to paddy lands, such as Mandoor, Thuraineelavanai, Annamalai and Palugamam, which are exposed to agricultural pollution from cultivation practices. High values of estimated biomass can be seen in places such as Ampalanthurai, Onthachimadam and Paddiruppu in the wet season. The strong exchange of water and nutrients with the northern part of the lagoon due to flooding during the NE monsoon increases GBM and DBM in the upper part of the lagoon. Similar observations were made in the study of Toming et al. (2016), where the chlorophyll content changed over the seasons due to water exchange in the lake and was highly affected by nutrient availability.
CONCLUSIONS
The band ratios B8/B4 and B3/B4 and the vegetation indices NDREI and NDREI_Narrow were the most suitable for GBM and DBM assessment in the dry and wet seasons in Batticaloa lagoon. The estimated GBM of the aquatic plants ranged from 5.9 to 8.3 kg/m² in the dry period (using the NDREI index) and from 7.7 to 10.3 kg/m² in the wet period (using the B8/B4 ratio). The DBM of the aquatic plants in the lagoon ranged from 407.8 to 841.17 g/m² in the dry period and from 516.4 to 942.8 g/m² in the wet period. The study demonstrated the potential of S2A MSI images for monitoring the biomass of floating aquatic plants in this coastal aquatic system.
added: 2019-11-14T17:10:48.606Z | created: 2019-11-07T00:00:00.000
{
"year": 2019,
"sha1": "19b5683d89ae7fe2d4691df16f91cde3a3277f31",
"oa_license": "CCBY",
"oa_url": "http://tar.sljol.info/articles/10.4038/tar.v30i4.8327/galley/6441/download/",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7177b29eef267da7c455e0fba6e48049f21d10b1",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
245838993 | pes2o/s2orc | v3-fos-license
Association Between Sensory Loss and Falls Among Middle-Aged and Older Chinese Population: Cross-Sectional and Longitudinal Analyses
Introduction: Previous studies have suggested that sensory loss is linked to falls. However, most of these studies were cross-sectional in design, focused on a single sensory loss, and were conducted in developed countries, with mixed results. The current study aims to investigate the longitudinal relationship of hearing loss (HL), vision loss (VL) and dual sensory loss (DSL) with falls among the middle-aged and older Chinese population over 7 years. Methods: The data were obtained from the China Health and Retirement Longitudinal Survey (CHARLS). In total, 7,623 Chinese adults aged over 45 were included at the 2011 baseline. Self-reported falls and HL/VL/DSL were used. Other confounding variables included age, sex, BMI, educational level, marital status, various physical disorders and lifestyles. The impact of baseline sensory status on the baseline prevalence of falls and on incident falls over 7 years was assessed using logistic regression analyses. A logistic mixed model was used to assess the association between time-varying sensory loss and incident falls over 7 years after adjustment for multiple confounding factors. Results: Single and dual sensory loss groups had a significantly higher prevalence of falls than the no sensory loss (NSL) group (DSL: 22.4%, HL: 17.4%, VL: 15.7%, NSL: 12.3%). Baseline HL (OR: 1.503, 95% CI: 1.240–1.820), VL (OR: 1.330, 95% CI: 1.075–1.646) and DSL (OR: 2.061, 95% CI: 1.768–2.404) were significantly associated with the prevalence of falls. For the longitudinal observation over 7 years, baseline HL/DSL and persistence of all types of sensory loss were associated with the incidence of falls. Time-varying HL (OR: 1.203, 95% CI: 1.070–1.354) and DSL (OR: 1.479, 95% CI: 1.343–1.629) were associated with incident falls after adjustment for multiple confounders, while VL was not. Conclusion: HL and DSL are significantly associated with both the onset and the increased incidence of falls over 7 years of observation in the middle-aged and elderly Chinese population. Persistence or amelioration of sensory loss status could exert divergent influences on the incidence of falls, which should be considered in the development of fall-prevention public health policies for the aging population.
INTRODUCTION
Falls and fall-induced injuries are leading causes of morbidity and mortality among older people (1). Falls can cause moderate to severe events, such as bone fractures, head trauma, or even an increased risk of early death (1). Among elderly people, falls are the leading cause of death due to injury, and the frequency of falls increases with aging. Approximately 28-35% of people worldwide aged over 65 fall every year, and this number increases to 32-42% for those aged over 70 (2). With the rapid growth of the world's older population, falls have become a major public health concern worldwide.
As the world's most populous country, China has a rapidly aging population with increasing average life expectancy. It is estimated that the number of Chinese people aged 80 years or older will quadruple over the next two decades (3,4). According to WHO reports, the annual prevalence of falls among the older Chinese population has reached 6.5 to 30.6% (3). Thus, falls, fall-induced injuries and related events in the older Chinese population are of great significance. To date, numerous researchers have explored the incidence, risk factors and socio-economic burden of falls and related injuries in the Chinese population (4). As for risk factors, some have mentioned the association of sensory loss with falls (5-8).
Sensory loss, consisting of hearing loss (HL), vision loss (VL), and dual sensory loss (DSL, the co-occurrence of vision and hearing loss), is one of the most common problems experienced by older people. Although consensus has yet to be reached, the relationship between sensory loss and falls in the older population has aroused great concern in both developed and developing countries (6,9-13). Among older Chinese, the prevalence of self-reported HL, VL, and DSL is relatively higher than that reported in many developed countries (14). Because of traditional attitudes regarding sensory loss as a normal part of aging, older Chinese people are likely to ignore problems related to sensory loss, which may further contribute to the higher prevalence of sensory loss and the incidence of related problems in our population (14).
Very recently, a small number of cross-sectional studies have reported a potential relationship between vision impairment and falls/fall-related injuries in the Chinese population (5,6,8), so longitudinal studies are needed. Research on hearing loss and falls has yielded mixed results (6,10,15,16), and the impact of dual sensory loss (DSL) on falls has barely been explored in our population. Thus, allowing for the specific cultural background, attitudes toward sensory loss, and public health system in mainland China, it is necessary to investigate the longitudinal correlation between sensory loss (vision, hearing or both) and falls among the older Chinese population.
The China Health and Retirement Longitudinal Study (CHARLS) is the first nationally representative survey of the health status and well-being of the middle-aged and older population in China, providing high-quality longitudinal data on massive amounts of personal health-related information, including sensory loss and self-reported falls. The purpose of this study is to verify single/dual sensory loss as risk factors for falls among the older Chinese population through a cross-sectional study and a longitudinal observation spanning 7 years of follow-up.
Participants and Public Involvement
We obtained data from the China Health and Retirement Longitudinal Study (CHARLS), the first nationally representative longitudinal survey sampling residents (middle-aged and older adults, over 45 years old) from 450 villages/neighborhoods in 150 counties across 28 provinces in China. With a response rate of over 80%, CHARLS provides the most up-to-date longitudinal data sets for studying the health status and well-being of the middle-aged and older population in China. There were 17,708 participants interviewed in the 2011 baseline (Wave 1). According to the purpose of the current study, participants with missing data for any variables at the 2011 baseline and those lost to follow-up for falls and sensory loss were excluded, which led to a final sample size of 7,623 (Figure 1).
Outcome
The main outcome in this study is falls, which were determined by the question "Have you fallen down in the last 2 years?" Possible answers were "yes" or "no," so the outcome variable was treated as binary. In each follow-up survey, the participants were asked "Have you fallen down since the last interview?" The baseline assessment of falls was used for the cross-sectional analysis, and the incidence of falls reported during the 3 waves of follow-up was used for the longitudinal analysis.
Exposure
The main exposures in this study are self-reported vision loss and hearing loss. In CHARLS, vision loss (VL) includes distal vision impairment and near vision impairment. Distal vision impairment was assessed by asking the participants whether their eyesight was excellent, very good, good, fair, or poor when seeing things at a distance; a report of fair or poor eyesight was classified as distal vision impairment. Near vision impairment was assessed by asking participants whether their eyesight was excellent, very good, good, fair, or poor when seeing things up close; a report of fair or poor eyesight was classified as near vision impairment. To assess hearing loss (HL), the question was: "Is your hearing excellent, very good, good, fair, or poor (with a hearing aid if you normally use one and without it if you normally don't)?" A response of fair or poor hearing was classified as HL. This categorization of sensory loss has been used in previous studies (17-19). DSL referred to participants with both VL and HL.
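A minimal sketch of this categorization (assuming the four-level grouping used in the paper; the column names below are hypothetical and do not match the raw CHARLS variable codes):

```python
# Minimal sketch of the sensory-loss categorization described above. Column names
# are hypothetical; the raw CHARLS files use different variable codes.
import pandas as pd

POOR = {"fair", "poor"}

def classify_sensory_loss(row):
    """Return NSL / HL / VL / DSL from self-rated vision and hearing items."""
    vl = row["distance_vision"] in POOR or row["near_vision"] in POOR
    hl = row["hearing"] in POOR
    if vl and hl:
        return "DSL"
    if hl:
        return "HL"
    if vl:
        return "VL"
    return "NSL"

df = pd.DataFrame({
    "distance_vision": ["good", "poor", "good", "fair"],
    "near_vision":     ["good", "good", "fair", "poor"],
    "hearing":         ["good", "good", "poor", "poor"],
})
df["sensory_status"] = df.apply(classify_sensory_loss, axis=1)
print(df["sensory_status"].tolist())   # -> ['NSL', 'VL', 'DSL', 'DSL']
```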
Realizing that sensory loss status could change during the 7 years of observation, we considered it imprudent to rely only on baseline sensory loss status and its association with falls. Persistent exposure to a specific sensory loss and changes in sensory loss status during follow-up should also be taken into account in a longitudinal study. Thus, to assess the impact of persistent exposure to each type of sensory loss on the incidence of falls, the answer to the vision/hearing items had to be the same at every follow-up time point (e.g., participants with cataract-related vision impairment whose vision improved after cataract surgery would probably report different vision statuses at different time points; such participants were regarded as breaking the persistent vision-loss status and were excluded). On the other hand, participants without sensory loss at baseline could develop sensory impairment during the 7 years of follow-up, and vice versa. Thus, we additionally treated the sensory loss status of our participants as a time-varying variable to accommodate changes between surveys at different time points over 7 years. The impact of time-varying sensory loss on the incidence of falls during follow-up was also assessed.
Socio-Demographic Characteristics
Gender was a binary variable: male or female. Age was treated as a continuous variable. Marital status indicated whether the respondent lived alone or with a partner: participants who were separated, divorced, widowed or never married were coded as the "living alone" group, while those who were married or partnered were coded as the "living with partner" group. Educational attainment was used to represent socioeconomic status, which can affect people's access to health services and other social and economic resources. Educational status was categorized into five groups: illiterate, less than elementary school, elementary school, middle school, and high school or above, as previously reported (20,21).
Lifestyle
The lifestyle variables included smoking status, drinking status and physical activity status. Smoking status was categorized as current/former smoker or never smoked. Drinking was a three-category variable indicating the frequency of drinking: none, less than once a month, and more than once a month. Physical activity status was categorized as vigorous activity, moderate activity, light activity or insufficient activity.
Physical Disorders
In CHARLS, most health statuses and physical disorders were assessed from self-reports. Only a few diseases could be defined at a relatively precise level based on both self-reported medical history and reference definitions such as blood test results and physical examinations. Thus, we took only seven main physical disorders into account in the present study: hypertension (22,23), diabetes (24,25), dyslipidemia (22,26), kidney diseases (27), emotional disorders, memory-related diseases and stroke (28). The criteria for identifying physical disorders used in the current study have also been adopted by numerous researchers using the CHARLS datasets.
Statistical Analysis
Statistical analyses were performed using SAS, version 9.4 (SAS Institute, Cary, NC, US). In this study, the primary exposure of interest was sensory loss status (HL/VL/DSL), while the other independent variables served as control variables and were reported as means ± SD or numbers (%). Baseline characteristics were compared between participants with different sensory loss statuses (4 groups) using the Chi-square test, the Cochran-Mantel-Haenszel (CMH) test or analysis of variance, depending on the data type and distribution. Logistic regression analyses were conducted to assess the associations between the prevalence/incidence of falls and baseline/persistent sensory loss, while the longitudinal associations between time-varying sensory loss and incident falls were examined using mixed logistic models. The mixed logistic models took into account the within-subject correlation of time-varying sensory loss and falls over the 7 years of follow-up (3 waves). Multiple confounders, including age, sex, BMI, educational level, marital status, lifestyle factors and physical disorders, were adjusted for in the models.
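The analyses were run in SAS; purely as a hedged illustration of the cross-sectional step (an adjusted logistic regression of baseline falls on sensory status), a comparable model could be fit as follows. All variable and file names are hypothetical, and the time-varying mixed logistic model is not reproduced here.

```python
# Hedged illustration of the cross-sectional step only: an adjusted logistic
# regression of baseline falls (0/1) on sensory-loss status. The study itself was
# run in SAS; all variable and file names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("charls_baseline.csv")   # hypothetical per-participant extract

model = smf.logit(
    "fell ~ C(sensory_status, Treatment(reference='NSL')) + age + C(sex) + bmi"
    " + C(education) + C(marital_status) + C(smoking) + C(drinking)"
    " + C(physical_activity) + hypertension + diabetes + stroke",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals for each term, as reported in the tables.
summary = pd.concat(
    [np.exp(model.params).rename("OR"),
     np.exp(model.conf_int()).rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})],
    axis=1,
)
print(summary)
```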
RESULTS
In total, 7,623 Chinese older adults aged over 45 at the 2011 baseline were deemed eligible for the current study (Figure 1). Socio-demographic characteristics, physical conditions and lifestyles of the study sample, grouped by sensory status, are presented in Table 1. The sample sizes of the groups were 2,136 (NSL), 1,320 (HL), 1,004 (VL) and 3,163 (DSL). The DSL group had the highest prevalence of falls (22.4%, n = 3,163). Participants with single sensory loss had a relatively higher prevalence of falls than those without sensory loss (HL: 17.4%, VL: 15.7%, NSL: 12.3%, p < 0.001).
The univariate logistic regression analysis indicated the factors potentially associated with falls in our sample at the 2011 baseline. Sensory loss (VL, HL and DSL), along with other factors including gender, age, marital status, educational level, smoking status, physical activity status, diabetes, hypertension, kidney diseases, emotional problems, memory-related diseases and stroke, was found to be significantly associated with falls (all p < 0.05, Table 2). Compared with single sensory loss, DSL had a higher odds ratio, indicating a potentially greater impact on the prevalence of falls (OR DSL: 2.061; OR HL: 1.503; OR VL: 1.330).
The results of the univariate logistic regression indicated that certain covariables could confound the relationship between sensory loss and falls in multivariate regression models. Table 3 shows the impact of baseline sensory status on the prevalence of falls at baseline and the incidence of falls over 7 years according to adjusted multivariable logistic models. At baseline, HL and DSL remained significantly correlated with falls in all four models. VL was significantly correlated with falls in Models 1-3; however, after adjustment for various physical disorders, this correlation became less significant (p = 0.08). Similarly, baseline HL and DSL were found to be significantly correlated with, and predictive of, a higher incidence of falls over the 7-year follow-up after adjustment for multiple confounding factors, but this was not the case for VL (Table 3).
In Table 4, we noticed that, compared with baseline sensory loss alone, all types of persistent sensory loss status (HL/VL/DSL) were significantly and more strongly correlated with the incidence of falls over the 7 years of follow-up, even after adjusting for multiple confounding factors. The mixed logistic models showed that time-varying HL and DSL, but not VL, remained associated with incident falls after adjustment for multiple confounders.
DISCUSSION
This study contributes to the current literature examining the relationship between sensory loss and falls in the Chinese population. We performed a cross-sectional study and a 7-year follow-up longitudinal observation to verify sensory loss, including vision loss, hearing loss and dual sensory loss, as risk factors for falls among the older Chinese population for the first time.
The overall prevalence of falls in our sample was around 17.85%, which is similar to that found in other studies of Asian community-dwelling older adults (5,29-31). Since falls have become one of the most common causes of injury among older people, which can lead to long-term disability or even death, exploration of the risk factors associated with falls in older people is warranted. Among the various potential risk factors for falls, sensory loss, including vision loss and hearing loss, has raised great concern in recent years. There were 1,220 participants whose sensory loss status was consistent over the 7 years. The analytic sample size was 7,623 in the mixed logistic regression models of the association between time-varying sensory loss and the incidence of falls.
Single Vision Loss
A decrease in visual acuity can lead to inaccurate assessment of environmental obstacles and deficits in daily activities, which eventually prevent older people from avoiding falls and fall-related injuries (5). In our cross-sectional analysis, we found a significant correlation between VL and the prevalence of falls according to univariate logistic regression (OR: 1.330, 95% CI: 1.075-1.646). After adjusting for various confounders including age, sex, BMI, marital status, educational level, smoking status, drinking status and physical activity, single VL remained significantly correlated with falls. To our surprise, with physical disorders added to the model, this correlation became less significant (p = 0.08, Model 4, Table 3).
A decline in vision may also contribute to the development of fear of falling, which is related to increased fall risk in older adults (32). However, we found a relatively weak correlation between baseline single VL and the incidence of falls over the 7 years of follow-up (Table 3). This finding indicates that baseline single VL may not be an appropriate indicator of a higher incidence of falls. Similarly, time-varying VL was not significantly correlated with the incidence of falls during longitudinal observation (Table 4). This may be explained by the fact that amelioration of poor vision is relatively accessible and effective for older people in our country. For example, patients who had cataracts at baseline and later underwent successful cataract surgery for vision improvement could report a different vision status in the following surveys. Therefore, to preserve the same exposure status, we filtered our participants according to the criterion of consistent VL status to further verify the impact of persistent VL on the incidence of falls over 7 years. Persistent single VL was strongly and significantly correlated with the incidence of falls over the 7 years of follow-up, even after adjustment for multiple confounding factors in all models (Table 4). These findings indicate that baseline VL may not be appropriate for predicting a higher incidence of falls in the older Chinese population, but the persistence of poor vision status could lead to more falls in older Chinese people. Alteration or amelioration of poor vision status would possibly lower the occurrence of falls in older adults.
Single Hearing Loss
HL has also been regarded as a risk factor for falls. HL contributes to balance difficulties, greater stride-length variability and poorer postural control related to fall occurrence in older people (33-35). Numerous studies carried out across various ethnic groups and populations in different countries have reached the consistent conclusion of a potential correlation between HL and falls (36-40). Some researchers have also suggested that HL could be a clinical indicator of increased fall risk (11,12). However, other researchers did not find a significant correlation between HL and falls (13); potential reasons may lie in variability in how HL and falls were assessed and in cohort characteristics. Similarly, several cross-sectional studies performed in our population have yielded mixed results as well (6,10,15,16).
Thus, the present study provides important evidence on the correlation between single HL and falls in our older population at a nationwide level, based on both cross-sectional and longitudinal analyses, for the first time. Baseline HL, persistent HL status and longitudinal time-varying HL were all found to be significantly associated with the prevalence and incidence of falls in our sample, even after adjustment for all other confounders (all models, Tables 3, 4). These findings further indicate that single HL can be regarded as a risk factor for falls in the middle-aged and older Chinese population. Our findings are consistent with several population-based nationwide surveys performed in other countries (11,12,36,37,40).
HL, HL-related falls and HL interventions among older adults in our country should receive sufficient attention. Interventions such as wearing hearing aids to improve hearing status have been shown to improve postural stability and to offer a significant public health benefit for avoiding falls, particularly in older people (41,42). However, according to previous research on over 15 million older Chinese people with HL from the China National Sample Survey on Disability, there is less uptake of hearing aid use than expected (43). Reasons for the poor uptake of hearing aids include financial constraints, unfamiliarity with hearing aids, difficulties in manipulating hearing aids, and traditional attitudes that regard HL in older people as a normal part of aging (43). Furthermore, besides amplifying desired sounds, hearing aids amplify noise as well, making users perceive sounds as too loud and noisy; such effects also jeopardize their confidence in hearing aids (43). Thus, we need to realize that hearing healthcare services for older people in China are still under-developed and worthy of further improvement in the future.
Dual Sensory Loss
In old age, sensory impairments often coexist. Thus far, the combined effects of HL and VL on falls have been barely explored in our population. According to the present study, the DSL group had the highest prevalence of falls among these sensory loss groups (22.4%, n = 3,163). The correlation between DSL and falls was apparently stronger than that between single sensory loss and falls (DSL: OR: 2.061, 95% CI: 1.768-2.404; HL: OR: 1.503, 95% CI: 1.240-1.820; VL: OR: 1.330, 95% CI: 1.075-1.646). Baseline DSL, persistent DSL status and time-varying DSL were all found to be significantly associated with the prevalence and incidence of falls in our samples in both cross-sectional and longitudinal analyses, even after adjustment for all other confounders (all models, Tables 3, 4). These consistent results indicated, for the first time, a strong correlation between DSL and falls in middle-aged and older Chinese people.
Some researchers have also noted the relatively higher fall risk associated with the combined effect of DSL in older people, which is consistent with our results (44)(45)(46). Older people with DSL may be exposed to the joint negative influences of HL and VL. Concomitant dysfunction of both the cochlear and vestibular sense organs is ubiquitous in older people with HL (36). On the other hand, a weakened vestibulo-ocular reflex and worse balance maintenance can also be found in older people with decreased visual acuity (5,47). In addition, older people with DSL may develop a greater fear of falling, reduced mobility, restricted activity and a decline in social interactions, which could further lead to sarcopenia, depression, poorer cognitive status, and reduced attentional resources. All of these factors could contribute to an increased incidence of falls (17,18,32,36,46,48).
Strengths and Limitations
There are several strengths in our study. First, CHARLS is a national study with a large sample size, and the national representativeness of CHARLS has been widely recognized and acknowledged. Thus, our findings can be generalized to the entire country. Second, to our knowledge, the current study is the first nation-wide Chinese population-based study to verify the sensory-fall association among the middle-aged and older population using both a cross-sectional study and a longitudinal observation over 7 years. Our results can serve not only as evidence for fall prevention among the older population in China but also as a reference for future studies in other countries (especially developing countries). Lastly, multiple fall-associated factors, which could otherwise confound the relationship between sensory loss and falls, were included and adjusted for in the analyses.
Meanwhile, we acknowledge some limitations. First, data on sensory loss and falls were collected by self-report, and the dates and frequency of falls were unavailable. Although this method has been used in previous studies (49)(50)(51)(52)(53), possible misclassification of sensory loss status or inaccurate reports may lead to bias. Also, causal effects of sensory loss on falls cannot be established from the present study. Second, some previously reported confounding factors of incident falls, such as the physical environment, sarcopenia and nutrient intake, were not available in CHARLS and were not adjusted for in our study. Third, in a longitudinal observation over 7 years in an older population, it is inevitable that attrition in the panel over time is not completely random. Loss to follow-up in subsequent visits and the exclusion criteria of the present study could also lead to sampling bias. Nevertheless, CHARLS is the first nationally representative survey of the health status and well-being of the middle-aged and older population in China, and it provides high-quality data on massive amounts of personal health-related information.
CONCLUSION
Our work is the first to verify sensory loss, including vision loss, hearing loss and dual sensory loss, as risk factors for falls among the older Chinese population based on a cross-sectional study and a 7-year longitudinal follow-up. Hearing loss and dual sensory loss are significantly associated with both the prevalence and the increased incidence of falls over 7 years of observation in the middle-aged and older Chinese population. Persistent or altered vision loss status could exert divergent influences on the incidence of falls. These findings deserve further consideration in the development of fall-prevention public health policies for the older population in China.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The current study is a secondary analysis of the public CHARLS data; the original CHARLS datasets are accessible at http://charls.pku.edu.cn/.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Biomedical Ethics Review Committee of Peking University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
MZ, YZ, HL, and XS designed the research. JL, YH, and YL analyzed the data. YZ drafted the manuscript. YZ and YH contributed equally to this research and should be considered as equivalent authors. All authors read and approved the final manuscript.
Improving Frequency Stability of Low-Inertia Systems using Virtual Induction Machine
This paper presents a novel strategy for synchronization of grid-connected Voltage Source Converters (VSCs) in power systems with low rotational inertia. The proposed model is based on emulating the physical properties of an induction machine and capitalizes on its inherent grid-friendly properties such as self-synchronization, oscillation damping and standalone capabilities. A detailed mathematical model of an induction machine is derived, which includes the possibility of obtaining the unknown grid frequency by processing the voltage and current measurements at the converter output. This eliminates the need for the phase-locked loop unit, traditionally employed in grid-following VSC control schemes, while simultaneously preserving the applied system-level and device-level control. Furthermore, the appropriate steps for obtaining an index-1 DAE representation of the induction-machine-based synchronization unit within the VSC control scheme are provided. The EMT simulations validate the mathematical principles of the proposed model, whereas a small-signal analysis provides guidelines for appropriate control tuning and reveals interesting properties pertaining to the nature of the underlying operation mode.
Index Terms-voltage source converter, induction machine, phase-locked loop, self-synchronization, virtual inertia emulation

I. INTRODUCTION

The primary frequency control of Synchronous Machines (SMs) is naturally based on the measure of the rotor angular speed of the machine itself. Since the dynamic response of synchronous machines imposes frequency variations, the rotor speeds are clearly the correct measurements to use for frequency control [1]. However, the situation changes substantially when it comes to defining the frequency signal for converter-interfaced generation. Unlike synchronous machines, and depending on the respective mode of operation, power electronics-based distributed generators do not necessarily impose the frequency at the point of connection. In particular, for the purpose of grid-following power converter control [2], a brittle local bus frequency signal has to be estimated by using available measurements, usually AC voltages and/or currents at the point of connection.
In order to regulate a grid-connected Voltage Source Converter (VSC) in the grid-following mode, a control sequence comprising system-level control with outer control loops and a synchronization unit, and device-level controls has become prevalent in industry for providing adequate VSC voltage, active and reactive power outputs [3], [4]. Additionally, the standard of having a Phase-Locked Loop (PLL) as a synchronization unit has been established [5], together with its numerous variants [6]- [9]. However, despite being widely utilized for frequency estimation, this additional, inherently nonlinear estimator introduces additional complexity to the system. As its input signal undergoes fast electromagnetic transients, the PLL can experience numerical issues and be affected by jumps and discontinuities following discrete events in the system such as faults or line outages [1]. Moreover, by the nature of its design, it introduces a non-negligible delay which can limit the performance of controls depending on frequency estimation, and in addition may be extremely difficult to tune [10], [11]. Several publications have recognized the impact of PLLs on the operation of non-synchronous generation [12], [13], but also highlighted the potential instabilities arising from high penetration of power converters employing such synchronization devices [4], [14], [15].
Recent studies have addressed some of the aforementioned issues by developing PLL-less converter regulation in the form of power-synchronization [16], virtual oscillator control [17], [18] and self-synchronizing synchronverters [11]. Although the proposed methods demonstrate numerous advantages, drawbacks have also been observed. Namely, power-synchronization is mostly focused on VSC-HVDC applications and faces challenges with weak AC system connections. Virtual oscillator control faces obstacles in terms of reference tracking, whereas the synchronverter concept still requires a back-up PLL and improvements in operation under unbalanced and distorted grid voltages. Furthermore, all aforementioned controls apply to the grid-forming operation mode. While grid-forming and grid-supporting VSCs are an integral part of a future low-inertia power grid, the existing systems are primarily composed of renewable generation interfaced to the network via grid-following inverters.
Contrarily, a recently proposed VSC control method under the name of inducverter introduces the notion of a grid-connected converter operating under Induction Machine (IM) working principles and without a dedicated PLL unit [19]. Despite the concept still being at its early stages, it can potentially resolve the issues associated with the conventional PLL-based synchronization loop. However, [19] proposes a control design that integrates the system-level controller and synchronization unit into one compact structure. As a consequence, the frequency regulation and stabilization properties are attributed to the inducverter, whereas these functionalities are actually a sole consequence of the implemented PI-based droop power control. Additionally, the controller is implemented in a hybrid abc-dq frame, where the dq-axis current references are obtained according to the real and reactive power errors and translated to abc voltage references through an adaptive virtual impedance in the abc frame. This significantly complicates the analytical representation of the model and its analysis. As a consequence, for the controller tuning it is suggested to revert back to adopting the parameters of an existing induction machine. Finally, the fact that the cascaded inner control loop is replaced by a simple adaptive lead or lag compensator raises concerns in terms of fast voltage reference tracking and overcurrents during transients.
We improve the aforementioned design by incorporating the Virtual Induction Machine (VIM) framework in [20] as an independent synchronization unit into a detailed state-of-the-art converter model and analyze the time-domain performance of a grid-connected VSC through EMT simulations. More precisely, this paper reformulates the mathematical principles of an emulated induction machine from [19], [20] and extends them by deriving an appropriate index-1 DAE representation of such a grid-connected VSC control scheme. This allows for a detailed small-signal analysis, which in turn helps to improve the control tuning and reveals that replacing the PLL by a VIM results in a hybrid operation mode with both grid-forming and grid-following properties. It also enables us to draw accurate conclusions regarding the overall emulation properties and system response.
The remainder of the paper is structured as follows. In Section II, the mathematical model of an induction machine and the VIM control principles are presented. Section III describes the VIM control design and its implementation into a state-of-the-art VSC controller, as well as the model formulation as an index-1 DAE system. Section IV showcases the EMT simulation results and model validation, whereas Section V provides an insightful small-signal stability analysis. Finally, Section VI discusses the outlook of the study and concludes the paper.
II. VIRTUAL INDUCTION MACHINE STRATEGY

A. Induction vs. Synchronous Machine: Operating Principles
The physical mechanism behind the machine rotor movement and the subsequent synchronization to the grid are the most notable differences between the synchronous and induction machine. While the SM always operates at synchronous speed, the IM relies on a mismatch between the synchronous speed ω_s ∈ ℝ_{≥0} and the machine's rotor speed ω_r ∈ ℝ_{≥0} to operate, i.e., the slip ν ∈ ℝ_{>0} in (1), with ω_ν := ω_s − ω_r denoting the slip frequency. Furthermore, contrary to synchronous generators, induction machines do not have an excitation system in the rotor. Thus, the ElectroMagnetic Force (EMF) induced in the rotor of an IM is a consequence of its rotation and the subsequent change of the magnetic flux linkage through the circuit. Given that the rotor is closed through an external resistance or a short-circuit ring, the induced EMF generates a current flow in the rotor conductor, which finally produces the synchronizing torque that drives the movement of the rotor. Hence, the IM can never reach the synchronous speed, since there would be no EMF in the rotor to continue its movement.
Considering the previously described properties, it can be observed that the IM with an arbitrary initial rotor speed close to the synchronous speed has a self-start capability, i.e., it has the potential to synchronize with a grid of unknown frequency and voltage magnitude. This implies that a VIM-based synchronization unit has the potential to replace the traditional PLL in the converter control design and eliminate its inherent drawbacks pertaining to time delay and stability margins. Nonetheless, such an implementation should not be confused with system-level regulation, e.g., droop control, virtual synchronous machine or virtual oscillator control, as it does not inherently yield an emulation of inertia or frequency oscillation damping. Nevertheless, such services could easily be provided by an appropriate outer control loop, as will be shown later.
B. Induction Machine Emulation Strategy
For the purpose of emulating the operating principles of an IM through VSC control, let us observe a dynamical model of an IM [21] in a synchronously-rotating dq-frame and SI units, (2), where v_s, v_r ∈ ℝ^2 and ψ_s = (ψ_s^d, ψ_s^q) ∈ ℝ^2, ψ_r = (ψ_r^d, ψ_r^q) ∈ ℝ^2 are the vectors of stator and rotor voltages and flux linkages, respectively, and R_s ∈ ℝ_{>0} and R_r ∈ ℝ_{>0} are the stator and rotor circuit resistances. The superscripts d and q refer to the vector component in the corresponding axis of the dq-reference frame, rotating at the synchronous speed ω_s. The first two expressions in (2) describe the stator voltage, whereas the latter two reflect the voltage circuit balance of a short-circuited rotor, hence v_r = 0. Note that the slip frequency ω_ν is involved in the last terms of (2c)-(2d). Moreover, the stator and rotor flux linkages can be described as in (3), i.e., ψ_s = L_s i_s + L_m i_r and ψ_r = L_m i_s + L_r i_r, with i_s = (i_s^d, i_s^q) ∈ ℝ^2 and i_r = (i_r^d, i_r^q) ∈ ℝ^2 denoting the vectors comprising stator and rotor current components in the different axes, and L_s, L_r, L_m ∈ ℝ_{>0} being the stator, rotor and mutual inductance, respectively.
The electric power p_e ∈ ℝ transferred between the stator and rotor can now be expressed in terms of currents and flux linkages, either on the stator or on the rotor side, as in (4), which yields the virtual electrical torque τ_e in (5). It is worth emphasizing that the expression for τ_e ∈ ℝ in (5) is the same as for a synchronous machine [21]. Considering the fact that the converter model will not involve a PLL (and hence the synchronous speed and slip are unknown to the controller), the presence of the ω_s and ω_ν terms in (2a)-(2b) and (2c)-(2d), respectively, poses an obstacle for the control design. In other words, ω_s and ω_ν are unknown variables and need to be computed internally based on available measurements. For that purpose, a field-oriented IM control first presented in [22] is employed. Considering that the direction of the dq-frame can be selected arbitrarily, we assume that in steady state the rotor flux is aligned with the d-axis, resulting in a simplified model with ψ_r^q = 0. The above assumption resembles the ones used in conventional PLLs, where the calculation of the voltage angle is based on aligning the voltage vector with the d-axis of a synchronously-rotating reference frame [23].
According to the proposed approximation, (3b) is reformulated as (6), and the expressions for the rotor voltage components in (2c) and (2d) can now be rewritten as (7). Substituting (6a) into (7a) yields (8), which in the frequency domain can be expressed as (9). Similarly, the slip frequency of the induction machine is computed by combining (6b) and (7b) into (10) and subsequently substituting ψ_r^d from (9), which gives (11). The expression (11) describes the dynamics of the slip frequency as a PD controller K_ν(s) applied to the ratio of the dq-components of the stator current. As such, this term is clearly sensitive to variations in grid frequency and machine power output. Nevertheless, in order to complete the PLL-less control design, one needs to determine the synchronous speed. Having in mind that ω_s = ω_r + ω_ν, an exact estimation of the rotor's angular velocity is sufficient for achieving the targeted objective. Let us observe the power balance of an induction machine via the swing equation and the mechanical dynamics of the rotor, given by (12). Here, J ∈ ℝ_{>0} is the rotor's moment of inertia, and τ_m, τ_e, τ_d ∈ ℝ correspond to the mechanical, electrical and damping torque. By declaring Δω_r ∈ ℝ as a deviation of ω_r from an initial (nominal) rotor speed ω_0^* ∈ ℝ_{≥0} (we denote the initial rotor speed by (·)^* as it will later serve as an input setpoint to the VIM controller), the expression (12) becomes (13). By expressing all three torque components in (13) as functions of converter input signals, one can finalize the closed-form VIM formulation. We elaborate on the mathematical details and derivations in the remainder of this section. The electrical torque component is defined in (5), but can be further simplified by substituting the expressions for the stator flux linkage components in (14), previously obtained from (3). The electrical torque in the time domain is therefore of the form (15), whereas in the frequency domain it yields (16). In (16), K_e(s) represents a first-order transfer function
On the other hand, the mechanical torque is determined by the machine's mechanical power output and the angular speed of the rotor. Assuming a lossless converter, the mechanical input power of an IM can be approximated by the output power measured at the converter terminal (denoted by p c P R), and given by Since the converter's terminal current and voltage measurements, corresponding to stator current and voltage of a virtual induction machine 2 , are available and actively employed in device-level control (i.e., inner loop control presented in the following section), the output active power can be expressed as Finally, the damping torque τ d " D∆ω r is proportional to the rotor frequency deviation, which yields the following lowpass filter characteristic of the induction machine in frequency domain: where D P R ě0 denotes the damping constant. Substituting (16) and (19) into (20) results in a closed-form expression for ∆ω r of the form: which corresponds to ∆ω r " F r pu, pq, with the measurement input vector u P R 4 and parameter vector p P R 6 ą0 defined as Having obtained the desired analytical expressions for all frequency components, the synchronous speed can now be computed from the frequency slip ω ν in (11) and ∆ω r in (21), as follows: Similarly to any PLL, the angle reference is determined by integrating the frequency signal, i.e., 9 θ s " ω s . The resulting expression in (23) clearly shows that the closed-loop estimator F s pu, pq emulates the synchronous speed and thus the synchronization properties of an IM based solely on the voltage and current measurements v f and i g at the converter terminal, therefore entirely replacing the conventional PLL-based synchronization. On the downside, the difference between the true synchronous speed and initial rotor speed setpoint can have an impact on synchronization accuracy. In particular, a proper selection of ω ‹ 0 prior to the grid connection of the VSC reduces ∆ω r and the subsequent transients. Nevertheless, it is reasonable to assume that the VSC is connected to the grid during steady-state operation. Thus, a very basic PLL can be introduced only to estimate ω ‹ 0 . However, even if this functionality is not available, any reasonable ω ‹ 0 guess will still allow the VIM to synchronize at the cost of some minor transients. A sensitivity analysis addressing the underlying phenomena is provided in Section IV-B. Another potential drawback of the frequency estimator (23) is the fact that the slip frequency ω ν is computed using a PD controller imposed on the quotient of the current components in dqframe. On the one hand, the derivative control is sensitive to fast signal changes. As the quotient i q s {i d s can experience high oscillations during transients, the PD controller might be prone to overshoots and even instability. On the other hand, the given input-output structure of the PD controller might lead to an index-2 DAE form 3 , which in turn increases the computational burden and imposes restrictions on the selection of the DAE solver. The aspects of DAE formulation will be discussed in detail in Section III-E.
III. VSC MODELING & CONTROL DESIGN
An overview of the prevalent control architecture for two-level power converters is shown in Fig. 1. In this configuration, an outer system-level control provides a reference for the converter's terminal current that is subsequently tracked by a device-level controller. In the following, the model of a two-level voltage source converter is first presented and, subsequently, the individual control blocks depicted in Fig. 1 are discussed.
A. Power Converter Model
The power converter model considered in this study is composed of a DC-link capacitor, a lossless switching block which modulates the DC-capacitor voltage v_dc ∈ ℝ_{>0} into an AC voltage v_sw ∈ ℝ^2, and an output filter. Furthermore, we assume that the DC-source current i_dc ∈ ℝ_{>0} is supplied by a controllable source, in the form of energy storage or curtailed renewable generation, and can be used as a control input. Averaging the dynamics over a single switching period and expressing them in per-unit and the dq-domain [4] yields the converter model, where c_dc ∈ ℝ_{>0} and g_dc ∈ ℝ_{>0} denote the DC capacitance and conductance, and c_f, ℓ_f, r_f ∈ ℝ_{>0} represent the AC filter capacitance, inductance and resistance, respectively. The DC-side current is represented by i_sw ∈ ℝ, whereas i_f ∈ ℝ^2, v_f ∈ ℝ^2, and i_g ∈ ℝ^2 denote the filter current, converter voltage and current injection into the system. The system base frequency is represented by ω_b and equals the nominal frequency, while ω_g ∈ ℝ_{>0} is the normalized reference for the angular velocity of the dq-frame.
The converter is typically interfaced to the network through a transformer, with the transformer dynamics described by (25), where r_t ∈ ℝ_{>0} and ℓ_t ∈ ℝ_{>0} denote the transformer's resistance and inductance, and v_t ∈ ℝ^2 is the voltage at the connection terminal.
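As an illustration of how the averaged per-unit model can be simulated, the sketch below advances the filter and transformer states by one explicit-Euler step. Since the display equations (24)-(25) are not reproduced in the extracted text, the sign conventions and the per-unit scaling by ω_b are a common textbook form assumed here, and all function and key names are placeholders rather than the authors' formulation.

```python
import numpy as np

J2 = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation matrix in the dq plane

def filter_and_transformer_step(x, u, p, dt):
    """One assumed explicit-Euler step of a per-unit dq-frame RLC filter and
    transformer model: x holds the 2-vectors i_f, v_f, i_g; u holds v_sw, v_t
    and the dq-frame speed omega_g; p holds the per-unit parameters."""
    i_f, v_f, i_g = x["i_f"], x["v_f"], x["i_g"]
    v_sw, v_t, omega_g = u["v_sw"], u["v_t"], u["omega_g"]

    di_f = p["w_b"] / p["l_f"] * (v_sw - v_f - p["r_f"] * i_f) - omega_g * p["w_b"] * (J2 @ i_f)
    dv_f = p["w_b"] / p["c_f"] * (i_f - i_g) - omega_g * p["w_b"] * (J2 @ v_f)
    di_g = p["w_b"] / p["l_t"] * (v_f - v_t - p["r_t"] * i_g) - omega_g * p["w_b"] * (J2 @ i_g)

    return {"i_f": i_f + dt * di_f, "v_f": v_f + dt * dv_f, "i_g": i_g + dt * di_g}
```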
B. System-Level Control
In the system-level control of grid-following VSCs, the measurements y_s = (v_f, p_c, q_c) ∈ ℝ^4 are commonly the output voltage and the instantaneous active and reactive power, where j ∈ ℝ^{2×2} is the 90° rotation matrix. Moreover, a synchronization device (a PLL or a VIM) estimates the phase angle θ_s ∈ [−π, π) of the voltage v_f as well as the synchronous (grid) frequency ω_s ∈ ℝ_{>0} at the Point of Common Coupling (PCC), and provides them as references (θ_s, ω_s) to the device-level control. In addition, the outer control loop is used to calculate the current reference i_f^* ∈ ℝ^2 based on the mismatch between the measured signals y_s and the prescribed setpoints u_s = (p_c^*, q_c^*, ω_s^*, V_c^*) ∈ ℝ^4.

Fig. 1. General configuration of the implemented VSC control structure.

The most common PLL implementation is a so-called type-2 SRF PLL [25], which achieves synchronization by diminishing the q-component of the measured voltage v_f^q ∈ ℝ via PI control, thus aligning the d-axis of the Synchronously-rotating Reference Frame (SRF) with the output voltage vector v_f [26]. In this scheme, (K_P^s, K_I^s) ∈ ℝ^2_{≥0} are the proportional and integral control gains of the synchronization unit, ω_s^* = 1 p.u. is the nominal angular frequency, and ε ∈ ℝ is the integrator state.
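For reference, a minimal sketch of one update of such a type-2 SRF-PLL is given below. The amplitude-invariant Park transform convention, the function name and the way the gains are passed are assumptions made for illustration and are not taken from [25].

```python
import numpy as np

def srf_pll_step(v_abc, theta, eps, Kp, Ki, omega_nom, dt):
    """One assumed update of a type-2 SRF-PLL: drive v_f^q to zero with a PI
    controller and integrate the resulting frequency estimate."""
    # abc -> dq (amplitude-invariant Park transform, d-axis on phase a at theta = 0)
    c, s = np.cos(theta), np.sin(theta)
    T = (2.0 / 3.0) * np.array([
        [c, np.cos(theta - 2 * np.pi / 3), np.cos(theta + 2 * np.pi / 3)],
        [-s, -np.sin(theta - 2 * np.pi / 3), -np.sin(theta + 2 * np.pi / 3)],
    ])
    v_d, v_q = T @ v_abc

    eps = eps + Ki * v_q * dt                  # integrator state
    omega = omega_nom + Kp * v_q + eps         # estimated angular frequency
    theta = (theta + omega * dt) % (2 * np.pi) # estimated angle
    return theta, eps, omega
```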
Having determined the synchronous angle and frequency (θ_s, ω_s), the outer control loop subsequently computes the current reference i_f^* ∈ ℝ^2 by employing frequency and voltage droop control (R_c^p, R_c^q) ∈ ℝ^2_{≥0} in combination with integral controllers K_{I,f}^d ∈ ℝ_{>0} and K_{I,f}^q ∈ ℝ_{>0}. More precisely, the outer control loop, described by the internal state variables (p̃_c, q̃_c) ∈ ℝ^2, regulates the power output (p_c, q_c) to its respective setpoints (p_c^*, q_c^*). Due to the P-f and Q-V droop characteristics, the active power reference is adjusted in response to a deviation of the measured frequency ω_s with respect to the frequency setpoint ω_s^*, whereas the reactive power reference is modified according to the mismatch between the magnitude of the output voltage v_f and the converter voltage setpoint V_c^*. Hence, the internal state vector of the system-level controller is x_s = (θ_s, ε, p̃_c, q̃_c).
The computed active and reactive power references are then transformed into the corresponding current reference signal i_f^*, with two commonly used implementations for balanced systems: a constant current and a constant power mode. The first approach directly feeds the power references to the device-level control, i_f^* = (p̃_c, q̃_c), while the second mode adjusts them based on the output voltage measurement such that the converter's power output is kept constant.
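A minimal sketch of this outer loop is given below, combining the P-f/Q-V droop corrections, the integral action and a constant-power-mode conversion to a current reference. The instantaneous-power convention p = v_d i_d + v_q i_q, q = v_q i_d − v_d i_q and all function, variable and gain names are assumptions, since the corresponding display equations are not reproduced in the extracted text.

```python
import numpy as np

def outer_loop_step(p_meas, q_meas, v_f, setpoints, gains, state, omega_s, dt):
    """Assumed droop-adjusted power regulation followed by a constant-power-mode
    current reference (power convention p = v.i, q = (jv).i)."""
    p_ref, q_ref, omega_ref, V_ref = setpoints   # (p*, q*, w*, V*)
    Rp, Rq, Kid, Kiq = gains                     # droop and integral gains
    p_tilde, q_tilde = state                     # integrator states

    # P-f and Q-V droop corrections plus integral action on the power error
    p_cmd = p_ref + Rp * (omega_ref - omega_s)
    q_cmd = q_ref + Rq * (V_ref - np.linalg.norm(v_f))
    p_tilde += Kid * (p_cmd - p_meas) * dt
    q_tilde += Kiq * (q_cmd - q_meas) * dt

    # constant power mode: solve  [vd  vq; vq -vd] i_ref = [p_tilde; q_tilde]
    vd, vq = v_f
    A = np.array([[vd, vq], [vq, -vd]])          # singular only if |v_f| = 0
    i_ref = np.linalg.solve(A, np.array([p_tilde, q_tilde]))
    return i_ref, (p_tilde, q_tilde)
```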
C. Device-Level Control
This control layer provides both AC and DC-side reference signals for the VSC device. The AC-side controller operates in a synchronously-rotating dq-frame, with the reference angle θ_s and velocity ω_s provided by the synchronization unit. In particular, given a current reference i_f^* ∈ ℝ^2 in dq-coordinates defined by (θ_s, ω_s), the device-level control is described by a current controller representing the inner control loop and computing a switching voltage reference v_sw^* ∈ ℝ^2, as in (30) [27]. Here, K_P^i ∈ ℝ_{>0}, K_I^i ∈ ℝ_{≥0} and K_F^i ∈ {0, 1} are the respective proportional, integral, and feed-forward gains, γ ∈ ℝ^2 represents the integrator state, and the superscript i denotes the current controller. Note that the angular velocity ω_s of the SRF is reflected in the last term of (30b).
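The following sketch illustrates one update of such a dq-frame current controller with integral state γ, voltage feed-forward and an ω_s-dependent cross-coupling term. The exact placement and sign of the decoupling term in (30) are not reproduced in the extracted text, so the form below is a common textbook choice used only for illustration, and the function signature is an assumption.

```python
import numpy as np

def current_controller_step(i_ref, i_f, v_f, gamma, omega_s, gains, lf, dt):
    """Assumed dq-frame current PI controller with decoupling and feed-forward."""
    Kp, Ki, Kf = gains                        # proportional, integral, feed-forward
    err = i_ref - i_f
    gamma = gamma + err * dt                  # integrator state (2-vector)

    J2 = np.array([[0.0, -1.0], [1.0, 0.0]])
    v_sw_ref = (Kp * err + Ki * gamma         # PI action
                + Kf * v_f                    # output-voltage feed-forward
                + omega_s * lf * (J2 @ i_f))  # assumed cross-coupling compensation
    return v_sw_ref, gamma
```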
Finally, the DC voltage v_dc is controlled through the DC current source and a PI controller (31), with the DC voltage setpoint u_d = v_dc^* ∈ ℝ_{>0} being an external control input, χ ∈ ℝ the internal state variable, and proportional, integral, and feed-forward gains denoted by K_P^dc ∈ ℝ_{>0}, K_I^dc ∈ ℝ_{≥0}, and K_F^dc ∈ {0, 1}, respectively. The DC current reference i_dc^* ∈ ℝ_{>0} is determined by the operating point (V_c^*, p_c^*, q_c^*) and the converter losses, which indicates that for v_dc = v_dc^* the DC-side current will be i_dc = i_dc^*. Having computed the AC voltage v_sw^* and DC current i_dc^* reference signals, the device-level control output r_d = (θ_s, ω_s, v_sw^*, i_dc^*) is sent to the power converter, with (θ_s, ω_s) being the angle and frequency references obtained by a synchronization unit within the system-level control. The state vector of the controller comprises the AC and DC current computations in (30)-(31) and is described by x_d = (γ, χ) ∈ ℝ^3.
Unlike the widely used virtual synchronous machine control [28], [29], and in contrast to the claims raised in [19], the inertia and damping in (21) will not be reflected in the converter's frequency response and oscillation damping performance. They do, however, contribute to a more robust and resilient frequency estimation technique, as will be later shown in Section VI.
D. Grid Synchronization: PLL vs. VIM
The traditional PLL unit presented previously is now replaced by the proposed VIM synchronization technique, with the converter model, system-level and device-level control structure preserved. For comparison and clarity, the control-block implementation of both synchronization units (i.e., a type-2 SRF-PLL and a VIM) is depicted in Fig. 2.
The basic structure and operating principles of a type-2 SRF-PLL [25] have already been presented in Section III-B. This synchronization device acts as an observer and tracks the synchronous speed by measuring the stationary output voltage v_f, transforming it into an internal dq-SRF, and passing it through a PI controller (K_P^s, K_I^s) that acts on the phase angle difference, cf. (32), with ω_0 ∈ ℝ_{>0} denoting the nominal angular velocity. The synchronization is achieved by aligning the d-axis of the internal SRF with the stationary abc-frame and diminishing the q-component, as described in [2], [25] and illustrated in Fig. 2. It should be pointed out that the combined Clarke and Park transformation within the PLL is completely independent of the transformation defined by the angle and frequency of the active power controller used for the rest of the VSC control scheme and therefore introduces a second SRF into the system. Hence, the internally computed filter voltage v_f must be aligned with the corresponding voltage vector in the main SRF. The VIM control design illustrated in Fig. 2 is based on (23). Similarly to other synchronization methods, including the PLL, the unknown grid frequency can be obtained by simply measuring the three-phase current and voltage at the filter output (i_g^abc, v_f^abc). Note that the VIM also operates in a separate SRF defined by the internally computed synchronization angle θ_s, and requires alignment of the transformed dq-quantities with the main SRF. As previously explained in Section II, another necessary input for the controller is the initial rotor frequency ω_0^*, which determines the VIM's oscillation level at start-up. However, the requirement for the initial guess of ω_0^* is not very strict, as it should only be "close enough" to the synchronous speed; the emulated physical machine then brings the VSC to synchronism. Moreover, in order to cope with potential frequency slip spikes during transients, induced by the PD control K_ν(s) acting on the quotient i_g^q/i_g^d in (11), the frequency slip is constrained by the saturation limits ω_ν ∈ [ω_ν,min, ω_ν,max].
E. DAE Formulation
The DAE formulation of a conventional converter control scheme has previously been presented in detail. As shown in Section III-B, the PLL controller (32) can be expressed as a second-order dynamic system with state vector x_pll := (ε, θ_s) ∈ ℝ^2 and algebraic output vector y_pll := ω_s.
In contrast, obtaining an appropriate (i.e., index-1) DAE representation of the VIM controller is not straightforward due to the aforementioned issues pertaining to the computation of the slip frequency in (11), as well as the fact that the electrical torque is described by a first-order transfer function K_e(s) applied to the product of the state variables i_g^d and i_g^q in (16). As such, the existing DAE model is incomplete, since the number of algebraic equations does not correspond to the number of algebraic variables. More precisely, the algebraic equation describing the slip frequency is missing and must be derived by transforming (11) accordingly.
We tackle this issue by introducing a new variable ϕ ∈ ℝ such that (33) holds. Moreover, we define ϕ_d := di_g^d/dt and ϕ_q := di_g^q/dt, and apply the quotient rule to (33), which yields the expression (34) for ϕ. Note that the terms (ϕ_d, ϕ_q) ∈ ℝ^2 correspond to the differential equation (25), describing the dynamics of the current flowing through the transformer. Redefining (25) in SI units yields (35). Here, ω_b ∈ ℝ_{>0} and I_b ∈ ℝ_{>0} denote the base angular velocity and current used for the conversion between per unit and SI; the rest of the notation is adopted from (25). By rewriting (16) and (21) as (36) and transforming (11) and (23) respectively into (37), we obtain an index-1 DAE form comprising (36) as differential equations and (34)-(35) and (37) as algebraic equations. The state vector representing the VIM dynamics is thus x_vim = (τ_e, Δω_r) ∈ ℝ^2, and is of the same order as the PLL controller, whereas the vector of algebraic variables consists of y_vim = (ϕ, ϕ_d, ϕ_q, ω̃_ν, ω_s) ∈ ℝ^5. The newly defined variable ω̃_ν ∈ ℝ represents the unsaturated frequency slip. In order to include the frequency slip saturation limits into the DAE model, we employ the well-known expressions for the minimum and maximum of two variables in (38), i.e., max{a, b} = (a + b + |a − b|)/2 and min{a, b} = (a + b − |a − b|)/2, and re-declare ω_ν ∈ [ω_ν,min, ω_ν,max] as the saturated slip counterpart of ω̃_ν, determined by the algebraic equation (39). The derivation of (39) is given in Appendix A. By including ω_ν into y_vim we complete the DAE formulation of the VIM controller. Combining it with the internal dynamics of the converter's system- and device-level control, as well as with the network-side dynamics pertaining to the device model representation, results in a 16th-order converter model for both the PLL- and VIM-based synchronization principles.
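To illustrate how the saturation in (38)-(39) can be expressed purely algebraically, the sketch below composes the two identities exactly in the order used in Appendix A; it is not the authors' code, and the function name and limit arguments are placeholders.

```python
def saturate_slip(omega_nu_tilde, lo, hi):
    """Saturation of the slip frequency written only with the min/max identities
    from (38): first clip from below with max, then from above with min."""
    def _max(a, b):
        return 0.5 * (a + b + abs(a - b))

    def _min(a, b):
        return 0.5 * (a + b - abs(a - b))

    omega_hat = _max(omega_nu_tilde, lo)   # intermediate signal, cf. (40a)
    return _min(omega_hat, hi)             # saturated slip, cf. (40b)
```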
A. System Setup and VIM Tuning
In this section, the performance of the proposed control scheme is studied for various transient scenarios using the detailed EMT model of a grid-following VSC connected to a stiff grid, developed in MATLAB Simulink and described in Section III. We focus on real-time operation events such as start-up and synchronization, response to setpoint variation (i.e., voltage and power reference tracking) and islanding (i.e., converter disconnection from the main grid). Finally, the impact of the initial rotor speed estimate ω_0^* on the converter's synchronization process with the grid is studied.
Understandably, the VIM's response is highly dependent on the selection of tuning parameters, in particular the parameters of the equivalent physical induction machine. This mainly refers to the rotor resistance and inductance, but also to the mutual inductance included in the transfer function K_e(s). Additionally, proper values for the moment of inertia and damping are crucial for the dynamics of the rotor frequency, which in turn affects the sinusoidal nature of the voltage and current at the converter terminal. The initial parameters used in this case study have been obtained from a physical induction generator of a 1.5 MW type-1 wind turbine, with the most relevant parameters for the VIM design listed in Table I. Note that the VIM input frequency is set to f_0^* = 50 Hz. The dynamics of the frequency slip are described by the PD controller K_ν(s) in (11), with the proportional gain K_P^ν = R_r/L_r given by the IM parameters and a unity derivative gain K_D^ν = 1. Nevertheless, such a high value of the derivative gain can destabilize the converter control during transients. This problem is overcome by re-tuning the PD controller via the Ziegler-Nichols method [30], i.e., determining the optimal K_D^ν component while keeping the existing proportional gain K_P^ν, which results in K_D^ν = 0.001. The parameters of the system-level control (i.e., the droop and integral gains included in the power control) and the device-level control (i.e., the PI gains of the current controllers) have been adopted from [4] in order to test the plug-and-play properties of the VIM.
B. Case Studies
First, the connection and synchronization of a VIM-based converter to the grid is studied. The VSC is connected to the grid at t = 0 s, with the initial input frequency f_0^* = 50 Hz set equal to the grid frequency. The voltage reference is initialized at V_c^* = 1 p.u., whereas the active and reactive power setpoints are set to p_c^* = 0.5 p.u. and q_c^* = 0 p.u., respectively. The transient response illustrated in Fig. 3 confirms the soft-start and self-synchronization capabilities of the VIM as well as an adequate oscillation damping characteristic. The setpoints are correctly tracked and the voltage and current overshoots during start-up are acceptable. Furthermore, the start-up overshoots can be avoided if the VSC is slowly ramped up from the zero power setpoint. Note that the initial overcurrents are in accordance with the characteristic response of an induction generator, but can also be attributed to the numerical initialization of the model. The initial transients are better understood by observing the estimated synchronous frequency f_s and its time-variant components f_ν and Δf_r. The frequency slip is very volatile during the first 300 ms, unlike the rotor's frequency dynamics term Δf_r, which can be associated with two aspects: (i) the frequency slip is proportional to the quotient i_g^q/i_g^d, which can reach very high values when i_g^d ≈ 0; (ii) K_ν(s) behaves as a PD controller, with its derivative action (K_D^ν) being mostly utilized throughout the first 300 ms of the start-up. After 500 ms, both frequency components stabilize and the synchronous and converter output frequencies reach a steady-state value of f_s ≈ 50.008 Hz. Another important aspect of control performance is reference tracking, i.e., the converter's ability to follow sudden changes in voltage and power setpoints. Both scenarios are simulated independently, with setpoint changes occurring at t = 0.5 s in each case. The voltage reference exhibits a step increase of 5 %, whereas the active power reference increases by 20 %. Both step changes last for 1.5 s before the setpoints return to their original values.
The results depicted in Fig. 4 indicate that power and voltage reference tracking is achieved within reasonable time, as both the output voltage (i.e., the voltage v f after the filter) and active power follow closely the respective setpoints. This is an expected outcome, as the reference tracking capability comes from the proper design of system-and device-level controls, which remain intact compared to the model presented in Sec. III. Nevertheless, an inefficient synchronization device would have deteriorated the performance, which is clearly not the case for the VIM. A somewhat delayed response and excessive overshoot in the voltage output is solely an artifact of the employed PI tuning of the inner control loops, since the selected tuning favors the power tracking over the voltage tracking, and can easily be addressed with a different set of PI control gains.
Let us recall that one of the control inputs to the VIM is the so-called "estimated" initial rotor frequency f_0^*. Previous examples have shown that under the condition f_s ≈ f_0^* the system experiences satisfactory performance with good synchronization and damping properties. However, having knowledge of the exact grid frequency prior to the connection of the VSC might not be achievable in real-world applications. Thus, the impact of an inaccurate f_0^* guess on converter synchronization is investigated by studying the frequency and voltage response for f_0^* = 49.9 Hz and f_0^* = 50.1 Hz, and comparing it against the results presented in Fig. 3 for the "ideal" case where f_0^* = 50 Hz. The analysis is focused on the first second of the response after start-up, with the frequency and voltage response presented in Fig. 5. We can conclude that the synchronization is successfully achieved within 500 ms for all three scenarios, with no distinctive differences between the three initialization points. Hence, as stated previously in Section II, the VIM will experience fast synchronization for any reasonable guess of the initial rotor frequency prior to the grid connection. Finally, we investigate the behavior of the VIM-controlled converter after the disconnection from the network. One of the main benefits of employing a VIM synchronization scheme is its standalone capability, i.e., the ability to operate even after being disconnected from the main grid if desirable, a characteristic of a physical induction machine. Such a property is not attainable by traditional grid-following VSCs employing a phase-locked loop. The islanding is simulated at t = 0.5 s (we assume t = 0 to be the time instant at which the initialization transients have decayed and the system is in synchronism) and the inverter response is showcased in Fig. 6. The PLL-based unit loses synchronism immediately after the disconnection, with the frequency plummeting below 49 Hz within 3 s of the disconnection. On the other hand, after some negligible initial transients, the VIM restores the converter to a new steady-state point and proceeds with normal operation.
V. STABILITY ANALYSIS

Having validated the theoretical concept of the VIM through EMT simulations, we dedicate this section to a small-signal analysis of the DAE model presented in Section III. Moreover, we consider three different converter operation modes: (i) a grid-following VSC with a PLL; (ii) a grid-following VSC with a VIM; and (iii) a grid-forming VSC from [4]. Some very insightful observations can be made by studying the stability maps in the R_c^p-R_c^q plane of a single inverter connected to a stiff grid. One such map is illustrated in Fig. 7 for a wide range of active and reactive power droop gains. Clearly, the two grid-following controllers have significantly different stability regions. However, for a reasonable tuning range considered in practice (i.e., R_c^p < 0.1 p.u. and R_c^q < 0.05 p.u.), the two regions are identical. Nevertheless, a very interesting observation can be made by comparing the aforementioned stability maps to the corresponding map of a grid-forming unit. Indeed, the stable region of a grid-forming VSC closely resembles the one of a VIM-based inverter for the whole parameter range, suggesting that replacing a PLL with a VIM-based synchronization device might provide some "forming-like" properties to grid-following VSCs, a notion that could be substantiated by the fact that the VIM has standalone capabilities.
We continue this line of investigation by comparing the stability margins of all three converter modes depending on the strength (i.e., the short-circuit ratio μ ∈ ℝ) of the grid at the connection terminal. As can be seen from Fig. 8, the critical SCR for a PLL-based inverter is μ = 1, whereas the grid-forming VSC does not impose any requirements on the minimum grid strength. In that sense, the VIM-controlled inverter again resembles the grid-forming one, since it can withstand any SCR level. Moreover, a similar analysis is done with respect to the maximum permissible penetration of inverter-interfaced generation. The movement of the most critical eigenvalue, i.e., the evolution of its real part with the increase in the VSC installation level η ∈ [0, 100] %, is depicted in Fig. 9 for the two-bus test case comprising one synchronous and one inverter-interfaced generator. While the maximum permissible installed capacity of PLL-based units is slightly below 70 %, this level can be increased by approximately 7 % by substituting the PLL with the VIM, therefore almost reaching the maximum permissible penetration of grid-forming VSCs of η_max = 78.5 %. Note that the eigenvalue movement for the scenarios SG-VSC_pll and SG-VSC_form is taken from [4]. This indicates that, despite not being able to provide black start and independently generate a stable frequency reference, VIM-based grid-following converters clearly share conceptual similarities with the grid-forming mode of operation. Furthermore, it should be pointed out that the instabilities arising at the maximum converter installation levels obtained from Fig. 9 are solely associated with the device-level control, with the highest state participation given in Table II. Hence, the instability is related to the inner voltage and current control and is not affected by the selection of the synchronization unit. Finally, we briefly address the topic of control tuning pertaining to the proposed VIM design. As previously pointed out in Section II, the tuning of a VIM can be based on the parameters of a physical induction machine. This particularly applies to the selection of the resistance and inductance values involved in the PD controller K_ν(s) in (11) and the transfer function K_e(s) in (17). However, emulating an existing machine has its drawbacks, as it does not guarantee an optimal control performance. This was already discussed in Section IV, as the derivative gain K_D^ν had to be re-tuned in order to achieve a satisfactory response during transients. Moreover, some induction generators might simply result in an unstable system. One such example is provided in Fig. 10, with the stability surface of a VIM-based converter illustrated in the R_r-L_r-L_m space. It reflects the possible obstacles one could face when applying heuristic methods for VIM tuning, as the surface in Fig. 10 has distinctively nonlinear segments. Nevertheless, since the VIM is an emulation of a physical machine, the tuning process does not have to incorporate exact physical parameters, but can rather use them as an initial starting point when designing the controller. Therefore, multidimensional stability mapping, such as the one illustrated in Fig. 10, can be of great importance when optimizing the VIM performance.
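A stability map such as the one in Fig. 7 can, in principle, be reproduced by linearizing the index-1 DAE at each droop-gain pair and checking the spectral abscissa of the reduced state matrix. The sketch below outlines such a sweep; the routine `linearize` is a hypothetical user-supplied function, since the authors' model code is not available, and the tolerance is an arbitrary choice.

```python
import numpy as np

def stability_map(linearize, Rp_grid, Rq_grid, tol=1e-9):
    """Mark droop-gain pairs whose linearized model has all eigenvalues in the
    open left half-plane.  `linearize(Rp, Rq)` must return the reduced state
    matrix A (algebraic variables eliminated) at that operating point."""
    stable = np.zeros((len(Rp_grid), len(Rq_grid)), dtype=bool)
    for i, Rp in enumerate(Rp_grid):
        for j, Rq in enumerate(Rq_grid):
            A = linearize(Rp, Rq)
            stable[i, j] = np.max(np.linalg.eigvals(A).real) < -tol
    return stable
```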
VI. CONCLUSION

This paper proposes a novel control strategy for synchronization of grid-connected VSCs based on the emulation of induction generator principles. For that purpose, a detailed mathematical model of an induction machine was derived and implemented within a detailed converter control scheme. It therefore eliminates the need for a dedicated PLL unit, while also providing additional services such as standalone operation after an islanding event. Moreover, similar to the PLL, the VIM also has plug-and-play properties and can be combined with any system-level and device-level control method. The EMT simulations showcase a smooth start-up and synchronization to the grid and an accurate computation of the grid frequency, independent of the initial rotor speed input. The proposed synchronization device does not hinder the performance of other converter controls and preserves accurate voltage and power setpoint tracking. Finally, it was shown that a VIM-based grid-following converter resembles a grid-forming unit in certain operational aspects, therefore allowing for a higher penetration of VIM-controlled DGs compared to PLL-based ones. A path for future work should go in a similar direction and further explore the dynamical properties of a VIM, as well as its interactions with conventional SGs and different converter control schemes.
APPENDIX A
DERIVATION OF SATURATED FREQUENCY SLIP

Let us first recall the expressions for the minimum and maximum of two variables from (38). The goal is to impose the lower and upper saturation limits (ω_ν,min, ω_ν,max) on an unsaturated frequency slip signal ω̃_ν, i.e., to obtain a new signal ω_ν such that ω_ν ∈ [ω_ν,min, ω_ν,max]. This is equivalent to first finding the maximum of the underlying signal and its lower bound (let us denote the result by ω̂_ν), and subsequently finding the minimum of that signal and the upper bound. In other words,

ω̂_ν = max{ω̃_ν, ω_ν,min},  (40a)
ω_ν = min{ω̂_ν, ω_ν,max}.  (40b)
Deficit in observational learning in experimental epilepsy
Abstract Individuals use the observation of a conspecific to learn new behaviors and skills in many species. Whether observational learning is affected in epilepsy is not known. Using the pilocarpine rat model of epilepsy, we assessed learning by observation in a spatial task. The task involves a naive animal observing a demonstrator animal seeking a reward at a specific spatial location. After five observational sessions, the observer is allowed to explore the rewarded space and look for the reward. Although control observer rats succeed in finding the reward when allowed to explore the rewarded space, epileptic animals fail. However, epileptic animals are able to successfully learn the location of the reward through their own experience after several trial sessions. Thus, epileptic animals show a clear deficit in learning by observation. This result may be clinically relevant, in particular in children who strongly rely on observational learning.
after an ip injection of N-methyl-scopolamine (2 mg/kg) developed status epilepticus (SE), which was stopped by diazepam (10 mg/kg) after 120 min. 13 The rats performed the behavior task 1 month after the SE. No behavioral seizure was observed during the observation and exploration tasks (confirmed when analyzing video recordings).
All experiments were done in accordance with Aix-Marseille University and Inserm institutional animal care and use committee guidelines. The protocol was approved by the French Ministry of National Education, Superior Teaching, and Research, approval numbers APAFIS #30588-2020121011005518v3 and #20325-2019041914138115v2.
| Experimental design
We used an experimental design and protocol previously reported. 12 Briefly, experiments were conducted in a behavioral apparatus consisting of two square boxes: a transparent plexiglass inner box within an opaque outer box. The space between the two boxes included 12 symmetrically distributed wells. An accessible but not visible reward (chocolate loops, Nestle) was placed in one of the wells.
| Behavioral testing
Twenty rats were familiarized with the experimental transparent inner box environment daily for at least three sessions of 30 min each (as shown in Figure 1). This allowed the inner box to be experienced directly, whereas the outer box could only be observed.
Subjects were divided into naive (n = 5) and observer animals (n = 15). Naive animals were tested for finding rewards without observational training. After at least 20 consecutive successful trials of finding the reward, the naive animals were considered to be demonstrator animals. Each observer animal was then allowed to observe a demonstrator animal as the latter performed the task of seeking and getting the reward (Figure 1A,B). OL consisted of five daily demonstrations for five consecutive days with the observer in the inner box (Figure 1C). After OL completion, the observer rat was allowed to explore the outside space to seek and find the reward. The outside space was entered through the opening of a plexiglass wall opposite the reward well.
A performance was considered successful if the observer animal made no mistakes. A mistake was counted as active digging in an unrewarded well. A separate cohort of observers was tested without reward available to the animals during direct outside exploration.
| Data analysis
All data were analyzed using the average time taken to find the reward from entering the outside space, the total number of mistakes made, and the percentage of successful animals for each trial. All values were expressed as mean ± SEM. All behavioral data were analyzed similarly to the way previously reported. 12 Effect sizes and confidence intervals (CIs) are reported as effect size (CI width lower bound, upper bound).
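For illustration, the unpaired mean difference and its confidence interval used above can be obtained with a simple percentile bootstrap, as sketched below. The function name, the number of resamples and the use of a bootstrap (rather than whatever estimation software the authors actually used) are assumptions made only to show the idea.

```python
import numpy as np

def unpaired_mean_difference(group_a, group_b, ci=0.95, n_boot=5000, seed=0):
    """Effect size as the unpaired mean difference (a - b) with a percentile
    bootstrap confidence interval; an illustrative re-implementation only."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    diff = a.mean() - b.mean()
    boot = np.array([
        rng.choice(a, a.size, replace=True).mean()
        - rng.choice(b, b.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100])
    return diff, (lo, hi)
```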
To investigate whether control and epileptic animals can learn the location of a hidden reward by observation, observer rats watched demonstrators as they were running straight to the reward location (five trials daily for five consecutive days). Then, observers were allowed to directly explore the observed space to find the reward (see Figure 1).
In the control observer group (n = 5), all animals successfully found the reward without error during their first direct exploration of the outside space (Figure 2). This result is also not statistically different from what has been reported previously 12 (unpaired mean difference: 7.1 [95.0% CI = .0, 14.3]). All subsequent direct explorations were also 100% successful (n = 15 trials, five animals).
In the epileptic observer group (n = 5), no rat successfully found the reward without error during the first direct exploration of the outside space (Figure 2). The difference in percentage of success between epileptic and control groups is 93% (99.9% CI = 100, 71), a very large effect size. The difference in percentage of success persisted for the second, third, and fourth attempts (mean difference = 80% [99.0% CI = 100, 40], 40% [95.0% CI = 100, 20], and 40% [95.0% CI = 100, 20], respectively). The number of errors for epileptic observers during the first exploration is statistically different from the control observer group (the unpaired mean difference between epileptic and control is 3.2 [99.9% CI = 2.0, 6.0]), with a very large effect size, and similarly in the second exploration (the unpaired mean difference is 2.6 [99.9% CI = .4, 7.0]). The time taken by epileptic observer rats to complete the first trial was 6124 ± 729 s, which is statistically different from the control observer group (1118 ± 747 s; the unpaired mean difference between epileptic and control is 5006 [99.9% CI = 500, 6800]).
Thus, unlike control observers, epileptic observers failed in finding the reward during the first exploration of the observed space.
| Epileptic animals can learn the spatial memory task
Although epileptic animals failed in the observational learning task, they may have retained some information when observing the demonstrator, which would make them learn the location of the reward faster. Alternatively, they may have a major cognitive impairment in spatial memory tasks, which would make them unable to learn the location of the reward.

FIGURE 1 Experimental design. (A) The experimental environment consisted of a transparent inner box and an opaque outer box. The gray areas indicate the regions explored by the tested rat. In Step 1, the observer (red) watches the demonstrator (blue), which has been trained to remember the location of a reward hidden in one of the 12 wells. (B) Image of the experimental apparatus with the lower wall of the transparent inner box open. The reward is covered with gravel. One of the four walls of the opaque outer box is white and provides a distal cue to the animals. (C) Schematic representation of the experiment. The familiarization phase, in which the experimental animal is confined to the inner box, is followed by the observational training phase, in which it can observe the demonstrator animal navigating the outer space. During direct exploration, the observer animal is allowed to navigate in the observed space. One session is held daily, for a total of nine sessions (three for familiarization, five for observational training, and one for direct exploration). The red and blue areas correspond respectively to the space that the observer and demonstrator animals can directly explore.
Seven trials are required before naive control rats consistently go directly to the reward (no mistake). 12 The percentage of animals finding the reward was comparable between naive control and epileptic observer rats during the first seven explorations. The time taken by naive control and epileptic observer rats to complete the first seven trials was also not statistically different. The number of errors in the first seven explorations was not statistically different between these two groups, except for the second trial (the unpaired mean difference is 2.0 [95.0% CI = .2, 4.6]). In this case, the epileptic rats made more errors than the naive ones (2.6 ± 1.2 and .6 ± .4, respectively). However, after seven trials, epileptic observer animals were able to find the reward as consistently as naive controls.
We conclude that observer epileptic animals can learn the location of the reward (spatial memory task) as well as nonobserver control rats, and that observing a demonstrator does not make them learn the task faster than control rats.
| Learning the location of the reward is independent of olfactory cues
To rule out reward localization by olfaction, the reward was removed after observational training before the first direct exploration in a second cohort of epileptic observer animals.
The difference between the two epilepsy groups was not significant. At the first exposure, the percentage of success was identical (0%), and the number of mistakes made was 3.8 ± .9 (n = 5) in the unrewarded epileptic animals and 3.2 ± .7 (n = 5) in the rewarded epileptic animals (the unpaired mean difference is .6 [95.0% CI = −1.8, 2.4]). The time required to find the reward during the first exposure was also not significantly different, 4601 ± 1410 s and 6124 ± 729 s, respectively (unpaired mean difference is −1523 [95.0% CI = −4620, 978]).
Rewarded and unrewarded epileptic observer animals showed similar performance, ruling out a possible olfactory influence on task success during the initial direct exploration.
| DISCUSSION
The main finding of the study is that epileptic rats fail to learn by observation (had they learned, they would not have made mistakes during their first exploration), although they succeed in the spatial memory task (they can learn where the baited well is located).
In humans, learning by observation requires, as a first step, being attentive to the demonstrator. 1,3,4 Failure to process social information (paying attention to a conspecific), a trait also found in some patients with epilepsy and in rodents, [14][15][16][17] may also be linked to the deficit in OL. The task presented here could be used to study spatial processes in a social context.

FIGURE 2 Epileptic animals fail to learn by observation. The percentage of animals successfully finding the reward is displayed as a function of trials. Naive control animals (blue, n = 5) learn the location of the reward, and do not make any mistakes after a certain time. They become demonstrators. After control observers (red, n = 5) are first exposed to the rewarded space, they go straight to the reward; they have learned by observation. Epileptic observers (black, n = 5) fail to find the reward during the first exposure, but learn the task as naive control rats do. Error bars are mean ± SEM. The black dashed line represents success by chance (8.3%). *p < .05, **p < .01, ***p < .001. PILO, pilocarpine.
In this observational task, the hypothesis is that the observer builds a cognitive spatial map of the outside space. The numerous alterations present in temporal regions involved in spatial information processing in epileptic animals 11,18 may underlie the deficit in OL. Nonetheless, epileptic observers can learn the task as well as naive controls. In experimental epilepsy, comorbidities including spatial memory deficits and depression 11 are model-, 19 animal-, 15 and housing condition-dependent. 20,21 Comorbidities are not apparent when social housing is used, as here. 20,21 Epileptic observers did not learn faster than naive animals. If they had learned the rule of the task (only one of 12 wells is baited), the learning curve would be S-shaped. 22,23 In contrast, the curve was logarithmic-like, as expected when learning a spatial task. 12,22,23 This suggests that epileptic observers learned neither the space (which well is baited) nor the rule of the task.
| CONCLUSIONS
We provide the first evidence that observational learning in a spatial task is deficient in an experimental model of epilepsy. Given the importance of observational learning for humans throughout life, it will be important to test observational learning in patients with epilepsy, in particular in children.
Cyclopropanation of 5-(1-Bromo-2-phenyl-vinyl)-3-methyl-4-nitro-isoxazoles under Phase Transfer Catalysis (PTC) Conditions
Heavily substituted cyclopropane esters were prepared in high yields, with complete diastereoselection and moderate (up to 58%) enantioselectivity. The reaction described herein entailed reacting 4-nitro-5-bromostyrylisoxazoles, a class of powerful Michael acceptors, with malonate esters under the catalysis of 5 mol% of a Cinchona-derived phase-transfer catalyst.
Interestingly, isoxazoles 2 (Figure 1), in which a halide is introduced on the exocyclic alkene, hold an additional electrophilic center E3 that increases the number of their possible synthetic applications [3].
The first type of MIRC (Michael-initiated ring closure) process involves formation of cyclopropanes such as 7 (Scheme 1) via Michael addition of a nucleophile containing a leaving group to an activated alkene. For example, our group has recently reported a highly enantioselective cyclopropanation of 3-methyl-4-nitro-5-styrylisoxazoles 1, which reacted with bromomalonate 6 under the catalysis of Cinchona-based phase transfer catalysts [43]. The second type of MIRC process involves the formation of cyclopropanes by nucleophilic addition to electrophilic substrates containing a leaving group, for example a bromide as in compound 2. Herein we report the results of our studies on the reaction of bromostyrylisoxazoles 2 and malonate 5 under the catalysis of Cinchona-based phase transfer catalysts.
Results and Discussion
We first investigated the addition of dimethyl malonate 5a to 2a in the presence of K3PO4 50% w/w as the inorganic base, toluene as the organic solvent, and quaternary ammonium salts derived from Cinchona alkaloids as catalysts (Table 1). The choice of toluene and phosphate arose from a preliminary screening which identified these as the most suitable conditions to obtain the desired cyclopropane 7 in high yields.
These experiments afforded cyclopropane 7a in high conversion even with only 0.05 equiv. of catalyst loading, but with enantioselectivity up to a maximum of 42%. Importantly, cyclopropane 7a was always obtained as a single diastereoisomer. The highest enantiomeric excess was obtained with the catalyst N-benzylquininium chloride (Table 1, entry 1). The second-generation catalyst O-allyl-N-9-anthracenylmethylcinchonidinium bromide provided high yields of the desired 7a, but in an almost racemic form (Table 2, entry 5). The reason for this may lie in the peculiar mode of action of these catalytic species (see Figure 2), in which a free OH is required to provide a key H-bond with the enolate [43]. Based on the results collected with the N-benzylquininium catalyst series, a series of N-benzylquininium salts containing various functional groups at the benzyl ortho-position was prepared and employed as catalysts to promote the Michael addition. These catalysts provided cyclopropane 7a in similar enantiopurity to commercially available N-benzylquininium chloride (Table 1, entries 6-9).
In order to increase the enantioselectivity of this reaction, compound 2a was reacted with malonates bearing alkyl groups of increasing steric hindrance. The scope of the reaction was shown by reacting styrylisoxazoles 2a-i with methyl malonate 5a under the catalysis of 14 (Table 3). The results collected point to the following facts: (i) compounds containing either electron-withdrawing or electron-donating groups were equally good substrates; (ii) substrates containing aromatic heterocycles were also good substrates, giving products 7e in excellent yields and similar enantiomeric excess (Table 3, entry 7); (iii) the alkyl-substituted isoxazole 1i reacted equally well, giving aliphatic cyclopropane 7i in comparable yield and ee. [a] Reaction conditions: bromostyrylisoxazole 2a-i (0.1 mmol), toluene (5.0 mL), cat. 14 (0.005 mmol), dimethyl malonate 5a (0.2 mmol), K3PO4 50% w/w (1 mmol). [b] Isolated yields after flash column chromatography.
[c] The enantiomeric excess (ee) of the product was determined by chiral stationary phase HPLC.
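For reference, the enantiomeric excess obtained from a chiral-phase HPLC trace is conventionally computed from the integrated peak areas of the two enantiomers; the peak labels below are generic and are not tied to any specific chromatogram in this work.

```latex
\[
  ee\,(\%) \;=\; \frac{A_{\mathrm{major}} - A_{\mathrm{minor}}}{A_{\mathrm{major}} + A_{\mathrm{minor}}} \times 100
\]
```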
We have compared the data collected for the reaction of 1 and 6 [14] with those for the reaction of 2 and malonate 5 and explain the observed difference in enantioselectivity as follows (Figure 2). Firstly, the requirement for a free -OH on the phase transfer catalyst indicates the interaction of this group with one of the two reagents involved, presumably the enolate. It is well known that +N-CαH groups behave as strong hydrogen bond donors [44]. Therefore, it is possible that in apolar media such as toluene an interaction could take place between the catalyst +N-CαH and the nitro group of the styrylisoxazole. According to this rationale, the bromine in compounds 2 shields the NO2 group, limiting its interaction with the PTC as may occur for compounds 1, hence justifying the lower enantioselectivity observed.
Experimental Section
General procedure for the preparation of compounds 7a-i: To a test tube equipped with a magnetic stirring bar were sequentially added the bromostyrylisoxazole 2a-i (0.1 mmol), toluene (1.0 mL), catalyst 14 (5 mol%) and malonate 5a-e (0.2 mmol). The test tube was placed at the stated temperature, then K3PO4 50% w/w was added in one portion (0.28 mL, 1.0 mmol). The mixture was then vigorously stirred at the same temperature, with no precautions to exclude moisture or air. After the stated reaction time, the reaction was quenched with sat. NH4Cl (4 mL) and the product extracted with toluene (3 × 1 mL). The combined organic phases were evaporated and the product was purified by chromatography on silica gel (petroleum ether/EtOAc mixtures).
Figure 2. Proposed transition states for the cyclopropanation of 1 and 2.
Associations of common polymorphisms in GCKR with type 2 diabetes and related traits in a Han Chinese population: a case-control study
Background Several studies have shown that variants in the glucokinase regulatory protein gene (GCKR) were associated with type 2 diabetes and dyslipidemia. The purpose of this study was to examine whether tag single nucleotide polymorphisms (SNPs) in the GCKR region were associated with type 2 diabetes and related traits in a Han Chinese population and to identify the potential mechanisms underlying these associations. Methods We investigated the association of polymorphisms in the GCKR gene with type 2 diabetes by employing a case-control study design (1118 cases and 1161 controls). Four tag SNPs (rs8179206, rs2293572, rs3817588 and rs780094) with pairwise r2 > 0.8 and minor allele frequency > 0.05 across the GCKR gene and its flanking regions were studied and haplotypes were constructed. Genotyping was performed by matrix-assisted laser desorption/ionization time-of-flight mass spectroscopy using a MassARRAY platform. Results The G alleles of GCKR rs3817588 and rs780094 were associated with an increased risk of type 2 diabetes after adjustment for year of birth, sex and BMI (OR = 1.24, 95% CI 1.08-1.43, p = 0.002 and OR = 1.22, 95% CI 1.07-1.38, p = 0.002, respectively). In the non-diabetic controls, the GG carriers of rs3817588 and rs780094 were nominally associated with a lower plasma triglyceride level compared to the AA carriers after adjustment for year of birth, sex and BMI (p for trend = 0.00004 and 0.03, respectively). Furthermore, the association of rs3817588 with plasma triglyceride level was still significant after correcting for multiple testing. Conclusions The rs3817588 A/G polymorphism of the GCKR gene was associated with type 2 diabetes and plasma triglyceride level in the Han Chinese population.
Background
Glucokinase (GCK) is the key glucose phosphorylation enzyme responsible for the first rate-limiting step in the glycolysis pathway. GCK regulates glucose metabolism in the liver and glucose-stimulated insulin secretion from pancreatic beta cells [1]. GCK activity is closely regulated by the glucokinase regulatory protein (GCKR), a process depending on fructose 6-phosphate and fructose 1-phosphate [2,3]. Gckr-deficient mice display reduced GCK protein levels and activity in the liver and exhibit impaired postprandial glycemic control [4,5]. In a previous study, adenoviral-mediated hepatic overexpression of GCKR significantly improved insulin sensitivity and glucose tolerance in mice and resulted in decreased leptin concentration and increased triglyceride levels [6].
In the Diabetes Genetics Initiative genome-wide association study, the GCKR rs780094 A allele was found to be strongly associated with hypertriglyceridemia in populations from Finland and Sweden [7]. Subsequently, a large study of Danish white participants confirmed that the rs780094 A allele was associated with increased fasting triglycerides, impaired fasting and OGTT-related insulin release, reduced homeostasis model assessment of insulin resistance (HOMA-IR), increased risk of dyslipidemia and a modestly decreased risk of type 2 diabetes [8]. The HapMap II CEU data http://www. hapmap.org showed that rs780094 was in strong linkage disequilibrium (LD) (r 2 = 0.932) with a non-synonymous GCKR variant (Pro446Leu, rs1260326). The DESIR prospective cohort study demonstrated that the GCKR variant rs1260326 T allele was strongly associated with increased triglyceride levels, lower fasting glucose and insulin levels, a lower HOMA-IR index, and a higher risk for dyslipidemia, but a lower risk for hyperglycemia and type 2 diabetes in a general French population [9]. Another study, combining data from 12 independent cohorts comprising more than 45,000 individuals with various ethnic backgrounds, confirmed that GCKR rs780094 and rs1260326 were strongly associated with opposite effects on fasting plasma triglyceride and glucose concentrations [10]. Recently, the MAGIC study conducted a large-scale meta-analysis and provided convincing evidence that the GCKR rs780094 A allele was associated with lower fasting glucose and insulin levels, a lower HOMA-IR index, a higher triglyceride level, and a lower risk for type 2 diabetes [11].
Several studies of the association of GCKR variants with type 2 diabetes or glucose homeostasis parameters in Chinese populations have been reported [12][13][14]. In a study of a population-based sample of Han Chinese individuals, the GCKR rs780094 A allele was found to be significantly associated with a reduced risk of impaired fasting glucose (IFG) and type 2 diabetes, decreased fasting glucose, increased homeostasis model assessment of beta cell function (HOMA-B), and fasting triglyceride levels; GCKR rs1260326 displayed similar associations [12]. A study of healthy Chinese adults and adolescents showed that the GCKR rs780094 A allele was associated with increased triglyceride levels, and GCKR rs780094 alone did not contribute to fasting glucose but interacted with GCK rs1799884 to increase fasting glucose [13]. However, another study in a Han Chinese cohort did not find any association between GCKR rs780094 and type 2 diabetes [14]. Therefore, the association of GCKR variants with fasting plasma glucose and type 2 diabetes is still not confirmed in a Chinese population. The aim of this study was to replicate the associations of GCKR variants with type 2 diabetes and related traits found in Caucasian populations in a Han Chinese population and to identify the potential mechanisms underlying these associations.
Study population
All participants were of Southern Han Chinese ancestry and resided in the Shanghai metropolitan area. We recruited 1118 unrelated type 2 diabetic inpatients from the Endocrinology and Metabolism Department of Zhongshan Hospital, Fudan University, Shanghai, China. All diabetic patients met the 1999 WHO criteria for diabetes [15], had been diagnosed after the age of 29 years, and were treated with oral hypoglycemic agents and/or insulin. The 1161 unrelated non-diabetic control participants were recruited from people undergoing health examinations in Zhongshan Hospital, were older than 40 years, and had a fasting plasma glucose < 5.6 mmol/l. Written informed consent was obtained from all participants and the study was approved by the ethics committee of Zhongshan Hospital, Fudan University, Shanghai, China.
Clinical measurements
Both the diabetic patients and the controls were extensively phenotyped for anthropometric and biochemical traits related to glucose metabolism. The phenotypes assessed in our study include height, weight, waist circumference, blood pressure, fasting glucose, total cholesterol, triglyceride, high density lipoprotein cholesterol (HDL-C), and low density lipoprotein cholesterol (LDL-C). BMI was calculated as weight (kg)/height 2 (m 2 ). In a subgroup of diabetic patients (n = 664), potential beta cell function was determined using intravenous arginine stimulation tests under fasting conditions. After taking a baseline blood sample, a 10% (wt/vol.) solution of arginine hydrochloride (5 g) was injected intravenously for 30-45 s. The end of the injection period was designated as time zero, after which samples were taken at 2, 4 and 6 min. The acute insulin response (AIR) to arginine was calculated as the mean of the insulin levels in the postinjection samples minus the insulin level in the prestimulus sample. The acute C-peptide response (ACPR) to arginine was calculated in the same way using sampled C-peptide levels.
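A minimal sketch of the AIR/ACPR arithmetic described above is given below; the numerical values are hypothetical and serve only to show the calculation, and units follow whatever the assay reports.

```python
import numpy as np

def acute_response(prestimulus, post_samples):
    """Acute response to arginine as defined in the text: mean of the
    post-injection samples (taken at 2, 4 and 6 min) minus the pre-stimulus value."""
    return float(np.mean(post_samples)) - float(prestimulus)

# Illustrative numbers only (not study data):
air = acute_response(prestimulus=8.0, post_samples=[35.0, 41.0, 33.0])   # insulin
acpr = acute_response(prestimulus=0.6, post_samples=[2.1, 2.4, 2.0])     # C-peptide
```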
Genotyping
We selected tag single nucleotide polymorphisms (SNPs) across the region of the GCKR gene (include 20 kb upstream and 9 kb downstream of the gene) from the HapMap Phase II, using the pairwise tagging model.
The pairwise tagging algorithm was developed by Carlson et al. and has been described previously [16]. In brief, the algorithm is based on the r 2 LD statistic and is comprised of several steps [16]. Starting with all SNPs above the specified minor allele frequency (MAF) threshold in the candidate gene region, the single SNP exceeding the specified r 2 threshold with the maximum number of other SNPs above the MAF threshold is identified [16]. This maximally informative SNP and all associated SNPs are grouped as a bin of associated sites [16]. Any SNP exceeding the threshold r 2 with all other SNPs in the bin is specified as a tag SNP for the bin [16]. Thus, one or more SNPs within a bin are specified as "tag SNPs" and only one tag SNP would need to be genotyped per bin. The binning process is iterated, analyzing all as-yet-unbinned SNPs at each round, until all sites exceeding the MAF threshold are binned [16]. Thus, the maximally informative set of common SNPs (tag SNPs) is selected and is to be assayed in candidate gene association studies [16]. All polymorphisms above a specified frequency threshold either are directly assayed or exceed a specified threshold level of r 2 with an assayed polymorphism (tag SNP) [16].
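To make the binning procedure concrete, a toy re-implementation of the greedy pairwise-tagging idea is sketched below; the data structures, thresholds, and tie-breaking are illustrative assumptions, and this is not the software actually used for SNP selection in this study.

```python
def greedy_tag_snp_bins(snps, r2, maf, r2_threshold=0.8, maf_threshold=0.05):
    """Greedy pairwise binning in the spirit of Carlson et al.
    `r2[a][b]` holds the pairwise r^2 between SNPs a and b (symmetric),
    `maf[s]` the minor allele frequency of SNP s."""
    unbinned = {s for s in snps if maf[s] >= maf_threshold}
    bins = []
    while unbinned:
        # SNP exceeding the r^2 threshold with the largest number of remaining SNPs
        def n_links(s):
            return sum(1 for t in unbinned if t != s and r2[s][t] >= r2_threshold)
        best = max(sorted(unbinned), key=n_links)  # sorted() makes ties deterministic
        members = {best} | {t for t in unbinned
                            if t != best and r2[best][t] >= r2_threshold}
        # a tag SNP must exceed the threshold with every other SNP in its bin
        tags = {s for s in members
                if all(r2[s][t] >= r2_threshold for t in members if t != s)}
        bins.append({"members": members, "tags": tags or {best}})
        unbinned -= members
    return bins
```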
The selection criteria used in our study were an r 2 > 0.8 and a minor allele frequency > 0.05. Finally, four tag SNPs (rs8179206, rs2293572, rs3817588 and rs780094) were selected and genotyped. The genotyping was performed by matrix-assisted laser desorption/ionization time-of-flight mass spectroscopy using a MassARRAY platform (MassARRAY Compact Analyzer, Sequenom, San Diego, CA, USA).
Statistical analysis
Continuous variables are expressed as means ± SEM. Comparisons between groups were performed with t tests and χ2 tests for normally distributed continuous and categorical variables, respectively. Deviations from the Hardy-Weinberg equilibrium were assessed by means of χ2 testing. SNPs that were not in Hardy-Weinberg equilibrium were excluded from further analysis. Pairwise linkage disequilibrium, including D' and r2, was estimated using Haploview. Haplotype estimation from the population genotype data was performed in Haplo.Stats (R 2.8.1). We performed allelic analysis of the association of GCKR polymorphisms with type 2 diabetes using logistic regression, and genotypic analysis of the association of GCKR polymorphisms with quantitative traits using a general linear model, assuming an additive model. We tested the association of haplotypes with type 2 diabetes and quantitative traits using logistic regression and a general linear model. Non-normally distributed values were log-transformed before analysis. All models were adjusted for year of birth and sex. Additional models were adjusted for BMI. Analysis was performed using SPSS software version 13.0. We used Bonferroni correction for multiple testing.
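As an illustration of the additive-model association test described above, the sketch below fits a logistic regression of case status on the number of risk-allele copies, adjusted for year of birth, sex and BMI; the column names and coding are assumptions, and the snippet is not the SPSS analysis used by the authors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def additive_snp_logit(df: pd.DataFrame, n_tests: int = 1):
    """Per-allele odds ratio with 95% CI under an additive genetic model.
    Expects columns: 't2d' (0/1 case status), 'g_count' (0/1/2 copies of the
    risk allele), 'birth_year', 'sex', 'bmi'."""
    X = sm.add_constant(df[["g_count", "birth_year", "sex", "bmi"]])
    fit = sm.Logit(df["t2d"], X).fit(disp=False)
    beta = fit.params["g_count"]
    lo, hi = fit.conf_int().loc["g_count"]
    p = fit.pvalues["g_count"]
    bonferroni_alpha = 0.05 / n_tests  # e.g. 0.05/32 as used for the AIR/ACPR tests
    return np.exp(beta), (np.exp(lo), np.exp(hi)), p, p < bonferroni_alpha
```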
Baseline characteristics
The baseline characteristics of participants in this study are presented in Table 1. Of 2279 participants, 1118 were type 2 diabetes patients and 1161 were non-diabetic controls. Diabetic cases were older and had higher BMI, waist circumference, fasting glucose and triglyceride levels, but lower cholesterol concentrations than non-diabetic controls. There was no significant difference in the distribution of sex between diabetic cases and non-diabetic controls (p = 0.34).
Rs8179206, rs2293572, rs3817588 and rs780094 were in Hardy-Weinberg equilibrium in the total population, diabetic cases and non-diabetic controls ( Table 2). The G allele of rs3817588 was significantly associated with an increased risk of type 2 diabetes after adjustment for year of birth and sex (OR = 1.21, 95% CI 1.06-1.39, p = 0.004) ( Table 3). In addition, the G allele of rs780094 was significantly associated with an increased risk of type 2 diabetes after adjustment for year of birth and sex (OR = 1.19, 95% CI 1.05-1.34, p = 0.005) ( Table 3). The associations remained significant after additional adjustment for BMI ( Table 3). The associations of rs8179206 and rs2293572 with type 2 diabetes were not significant (OR = 1.18, 95% CI 0.85-1.65, p = 0.32 and OR = 1.04, 95% CI 0.88-1.23, p = 0.64, respectively) ( Table 3).
Associations of GCKR polymorphisms with quantitative traits in non-diabetic controls
In the non-diabetic controls, the GG and AG carriers of rs3817588 were nominally associated with a lower plasma triglyceride level compared with the AA carriers after adjustment for year of birth, sex and BMI (p = 0.0003 and p = 0.02, respectively), and the trend was in the same direction (p for trend = 0.00004) ( Table 4). The association of rs3817588 with plasma triglyceride level was still significant after correction for multiple testing. The GG carriers of rs780094 were nominally associated with a lower plasma triglyceride level compared with the AA carriers, after adjustment for year of birth, sex and BMI (p = 0.01), and the trend was in the same direction (p for trend = 0.03) (Table 4). However, the association of rs780094 with plasma triglyceride level was not significant after correction for multiple testing. The associations of rs8179206 and rs2293572 with plasma triglyceride level were not significant ( Table 4). The GG carriers of rs3817588 were nominally associated with a higher waist circumference compared with the AA carriers, after adjustment for year of birth and sex (p = 0.01), and the trend was in the same direction (p for trend = 0.04) ( Table 4). The association was not significant after correction for multiple testing. The associations of other polymorphisms with waist circumference were not significant ( Table 4). None of the four polymorphisms showed a significant association with BMI, fasting plasma glucose, total cholesterol, HDL-C, LDL-C, systolic blood pressure or diastolic blood pressure ( Table 4).
Associations of GCKR polymorphisms with AIR and ACPR in diabetic cases
A subgroup of diabetic cases (n = 664) was classified into 4 groups according to the duration of diabetes. The range of diabetes duration was 1 month to 40 years in this subgroup of diabetic patients. In the third quartile subgroup with a diabetic duration of 8-11 years, the GG carriers of rs780094 were nominally associated with lower levels of AIR and ACPR compared with the AA carriers, after adjustment for year of birth, sex and BMI (p = 0.02 and 0.03, respectively), and the trend was in the same direction (p for trend = 0.03 and 0.09, respectively) (Figure 1). In the fourth quartile subgroup with a diabetic duration over 11 years, the GG carriers of rs3817588 were nominally associated with lower levels of AIR and ACPR compared with the AA carriers, after adjustment for year of birth, sex and BMI (p = 0.008 and 0.01, respectively), and the trend was in the same direction (p for trend = 0.03 and 0.03, respectively) (Figure 1). However, these associations were no longer significant after correction for multiple testing.
Associations of GCKR haplotypes with quantitative traits in non-diabetic controls
In the non-diabetic controls, the haplotype block was nominally associated with plasma triglyceride level adjusted for year of birth, sex and BMI (the global p value = 0.0001) (Additional file 1, Table S2). The haplotype AGGG was associated with a lower plasma triglyceride level adjusted for year of birth, sex and BMI (p = 2.2 × 10 -5 ). The association remained significant after correction for multiple testing. BMI, waist circumference, fasting plasma glucose, total cholesterol, HDL-C, LDL-C, systolic blood pressure and diastolic blood pressure were not significantly different between haplotypes (Additional file 1, Table S2).
Discussion
In line with previous studies, our study confirmed the opposite effects of GCKR variants on glucose and triglyceride concentrations. Our data showed that the GCKR rs780094 G allele and rs3817588 G allele were associated with an increased risk of type 2 diabetes in Han Chinese individuals. The G alleles were also nominally associated with a lower fasting triglyceride level. Moreover, the association of rs3817588 with fasting triglyceride level was still significant after correction for multiple testing. The associations of GCKR rs780094 with type 2 diabetes and triglyceride level have been replicated in many studies of different ethnic populations since the Diabetes Genetics Initiative genome-wide association study [7][8][9][10][11]. Our study confirmed this association again in a Han Chinese population. Ours was the first study to adopt a tagging strategy, selecting tag SNPs across GCKR together with 20 kb upstream and 9 kb downstream of the gene. We demonstrated that another GCKR polymorphism, rs3817588, affected glucose and lipid metabolism in a similar way to rs780094, which had not been reported in previous studies.
In the present study, the G allele of GCKR rs780094 was associated with higher odds of type 2 diabetes (OR = 1.19). The effect size was similar to that observed in another study of a Han Chinese population [12]. In the studies of populations of European descent, the G allele of rs780094 was also associated with a higher odds of diabetes, but the effect size was much smaller (OR = 1.06-1.08) [8,10,11]. The frequency of the rs780094 G allele is substantially lower in Han Chinese (43%) than in White Europeans (65%). The difference in genetic background between different ethnic groups may explain the discrepancy between effect sizes in Han Chinese and European populations. The G allele of GCKR rs3817588 was associated with higher odds of type 2 diabetes (OR = 1.21). For both polymorphisms, the G risk alleles for diabetes were nominally associated with a lower triglyceride level. The mechanism by which the GCKR variants lead to type 2 diabetes and protect against dyslipidemia remains to be determined. A potential explanation is that GCK regulation by GCKR is altered in the liver, which leads indirectly to decreased GCK activity [17]. Decreased GCK activity was associated with decreased glucose utilization in the liver [17]. With decreased glucose utilization and glycolytic flux, GCK, phosphofructokinase, and fatty acid synthase are downregulated, whereas phosphoenolpyruvate carboxykinase and glucose-6-phosphatase are upregulated [17]. These changes increase hepatic glucose output, lower malonyl-CoA concentration and inhibit de novo lipogenesis and VLDL triglyceride production [17].
We next investigated whether the GCKR variants were associated with beta cell function as determined by an arginine stimulation test in diabetic patients. We found that in groups with a relatively long diabetic duration, GCKR variants were associated with AIR and ACPR. The carriers of the G alleles of rs780094 and rs3817588 had lower values for AIR and ACPR than the AA homozygotes. These findings suggested that the GCKR variants probably contribute to diabetes susceptibility by impairing beta cell function, although the associations of the GCKR variants with AIR and ACPR became nonsignificant after correction for multiple testing. A study in a Han Chinese population found that rs780094 was associated with beta cell function as estimated by HOMA-B, which was consistent with our findings [12].
We also performed haplotype analysis and found that all four SNPs exhibited moderate to strong LD in terms of D' and fell into one block. The block was associated with type 2 diabetes after adjustment for year of birth, sex and BMI. The AGGG and GGGG haplotypes were associated with an increased risk of type 2 diabetes; both carried the G risk alleles of rs3817588 and rs780094. However, the ACAG haplotype, which carried the G risk allele of rs780094, did not show any association with type 2 diabetes. This suggests that the effect of the rs780094 G allele on the risk of type 2 diabetes was due to its LD with rs3817588. The AGGG and GGGG haplotypes differed only at the rs8179206 locus, yet the GGGG haplotype was associated with a 2.08-fold risk of diabetes, much higher than that of the AGGG haplotype (OR = 1.18), suggesting that the G alleles of rs8179206 and rs3817588 had a synergistic effect on the development of type 2 diabetes. Thus the rs8179206 G allele also contributed to the risk of type 2 diabetes, although it was not associated with diabetes in the single-locus analysis.
Rs3817588 and rs780094 are located in introns of GCKR. Based on the current data, we cannot confirm whether they are in the splicing site or the transcription factor binding site of the gene. We therefore assume them to be linked with one or more functional variants within the GCKR gene or its regulatory regions. Rs780094 is tightly linked with rs1260326 (HapMap CEU r 2 = 0.93, CHB r 2 = 0.82), a non-synonymous variant in GCKR associated with type 2 diabetes and triglyceride level [9,10,12]. A functional study showed that GCKR rs1260326 was associated with fasting plasma glucose and triglyceride levels, and this effect was mediated through regulating the activity of GCK in liver [17]. We did not genotype rs1260326 in the current study because of the fact that it is in strong LD with rs780094 and represents the same information as rs780094, which was demonstrated by a previous study in a Han Chinese population [12]. Although there is no evidence that rs3817588 is linked with any functional variant now, future fine mapping and resequencing of the GCKR gene may detect such functional variants.

Figure 1. Acute insulin response (AIR) and acute C-peptide response (ACPR) stratified according to GCKR rs780094 or rs3817588 genotypes by quartile of diabetic duration. *p < 0.05 compared with homozygotes of the major allele (AA); **p < 0.01 compared with homozygotes of the major allele (AA); #p for trend < 0.05. All p values were not significant after Bonferroni correction (p < 0.0016 (0.05/32) was used as the Bonferroni-corrected significance level in association analyses between the individual SNPs and AIR or ACPR). The figures below the genotypes indicate the number of diabetic cases for each genotype group.
Our study had some limitations. Firstly, we did not investigate gene-environment interactions. Because both genetic variants and environmental factors contribute to type 2 diabetes, and adverse environmental factors (high-calorie diets, physical inactivity, etc.) have an important influence on the development of diabetes, the elucidation of gene-environment interactions should not be overlooked in future studies. Secondly, we did not perform a functional study of rs3817588. Further functional work is needed to determine whether rs3817588 affects mRNA splicing or transcription factor binding and ultimately alters protein expression levels.
Conclusions
We demonstrated that the rs3817588 A/G polymorphism of the GCKR gene was associated with type 2 diabetes and plasma triglyceride level in the Han Chinese population.
Additional material
Additional file 1: Table S1. Association of GCKR haplotypes with type 2 diabetes. Table S2. Quantitative traits stratified according to GCKR haplotypes in non-diabetic controls. Figure S1. Haploview-generated linkage disequilibrium (LD) map and blocks of the 4 SNPs at the GCKR locus.

Abbreviations: GCK: glucokinase; GCKR: glucokinase regulatory protein; HOMA-IR: homeostasis model assessment of insulin resistance; HOMA-B: homeostasis model assessment of beta cell function; IFG: impaired fasting glucose; AIR: acute insulin response; ACPR: acute C-peptide response; SNP: single nucleotide polymorphism; MAF: minor allele frequency; LD: linkage disequilibrium; SEM: standard error of mean; BMI: body mass index; CI: confidence interval; OR: odds ratio; HDL-C: high density lipoprotein cholesterol; LDL-C: low density lipoprotein cholesterol; BP: blood pressure; DM: diabetes; HWE: Hardy-Weinberg equilibrium.
On the Mechanism of the Spin State Transition of (Pr1-ySmy)1-xCaxCoO3
Transport, thermal and magnetic measurements have been carried out on (Pr1-ySmy)1-xCaxCoO3. The system exhibits a structural phase transition accompanied by a spin state change from the intermediate spin (IS) state to the low spin (LS) state with decreasing temperature T. We have constructed a T-y phase diagram for x=0.3 and T-x ones for y=0.2 and 0.3. By analyzing their magnetic susceptibilities, the number of Co ions excited to the IS state (or the electron number in the eg orbitals), nIS, is roughly estimated. With increasing y or with decreasing x, nIS decreases, and the phase transition changes gradually to an (IS-LS) crossover-like one. We discuss the possible role of the Pr atoms in realizing the transition.
Introduction
Co oxides, which have the linkage of CoO6 octahedra, have attracted much interest, because they exhibit various notable physical characteristics. The superconductivity found in NazCoO2·yH2O (z~0.3 and y~1.3) is one of the examples of such characteristics. [1][2][3][4] The spin state change often observed for Co3+ ions in perovskite systems RCoO3 (R = various rare earth elements and Y) is another example. They exhibit a spin state change from the low spin (LS; spin s=0; t2g^6) ground state to the intermediate spin (IS; s=1; t2g^5 eg^1) or the high spin (HS; s=2; t2g^4 eg^2) state with increasing temperature T. [5][6][7][8][9][10][11][12][13][14][15][16] The existence of the spin state change indicates that the difference of the electronic energies, δE, between these states is rather small. Therefore, we can control the physical properties of Co oxides by controlling the value of δE by various methods. For example, the change of the ionic radius rR of R3+ clearly affects the value of δE, because the pseudo-cubic crystal field splitting ∆c between the 3-fold t2g and 2-fold eg orbitals depends on rR through the volume change of the CoO6 octahedra.
The Co-Co transfer energy t, which seems to be important in determining δE, 17) also depends on rR through the change of the Co-O-Co bond angle induced by the change of the tolerance factor (rR + rO)/[√2 (rCo + rO)], where rM (M = R, O and Co) is the ionic radius of M. The increasing tendency of the temperature of the spin state change with decreasing R-ion size is explained by these facts. [15][16][17] Typical results of the studies on the relationship between the spin state of Co3+ ions and the local structures can be found in our reports, too. [17][18][19][20] In the consideration of δE of R1-xAxCoO3 (A = Ba, Sr and Ca), effects of the hole doping should also be considered. [21][22][23][24][25][26][27] The substitution of R3+ with A2+ affects the effective value of δE through the introduction of hole carriers and by changing the average ionic radius of R1-xAx. For the relatively large atom species, for example, R=La and/or A=Ba and Sr, the ferromagnetic transition is often observed, which is due to the enhancement of the double exchange interaction caused by the decrease of δE or the increase of the electron number in the eg orbitals.
For the samples of (Pr1-yR'y)1-xCaxCoO3 (R' = rare earth elements and Y), a phase transition which is accompanied by the change from the IS to the LS state of Co ions with decreasing T is observed. 17,28) In the previous paper, we studied the transition in detail for (Pr1-yR'y)1-xCaxCoO3 (0.0≤x≤0.5; 0.0≤y≤0.2) to extract information on what factors govern the spin state change and how, where various kinds of studies, transport and magnetic measurements as well as structure analyses at ambient pressure and high pressure, were carried out. 17) We found that the sudden increase of the tilting angle of the CoO6 octahedra at the transition temperature Ts with decreasing T mainly stabilizes the LS state through the reduction of the Co-Co transfer energy t. The local arrangements of the ligand atoms seem to be important to understand the structural transition or the spin state change. For example, the substitution of Pr with elements whose ionic radii are smaller than Pr3+ enhances Ts through the increase of ∆c and through the decrease of t (induced by the increase of the tilting angle). However, the question remains why the transition takes place only in systems which contain Pr and Ca. 17,21) Then, we have carried out further studies by various methods, transport, thermal and magnetic measurements, for samples of (Pr1-ySmy)1-xCaxCoO3 (0.0≤x≤0.5, 0.0≤y≤1.0) to clarify details of the transition.
In the present paper, the T-y and T-x phase diagrams are constructed and the mechanism of the spin state transition of the systems is discussed.
Experiments
Polycrystalline samples of (Pr 1-y Sm y ) 1-x Ca x CoO 3 (0.0≤x≤0.5; 0.0≤y≤1.0) were prepared by the following method. Mixtures of Pr 6 O 11 , Sm 2 O 3 , CaCO 3 and CoC 2 O 4 ⋅2H 2 O with proper molar ratios were ground, and pressed into pellets. The pellets were sintered at 1200 ºC for 24 h under flowing oxygen and cooled at a rate of 100 K/h. Samples thus obtained were annealed in high pressure oxygen atmosphere (p~60 atm) at 600 ºC for 2 days. In powder X-ray diffraction patterns taken with FeKα radiation, no impurity phases were detected. Details of the sample characterization can be found in ref. 17.
Electrical resistivities ρ were measured by the standard four-terminal method using an ac resistance bridge on heating from 4.2 K to 300 K. Magnetic susceptibilities χ were measured using a Quantum Design SQUID magnetometer under a magnetic field H of 0.1 T in the temperature range of 5-350 K. The thermoelectric powers S were measured by the DC method, where the typical temperature difference between the two ends of the sample was about 1 K. The specific heats C were measured by a thermal relaxation method in the temperature range of 2-300 K using the Physical Property Measurement System (PPMS, Quantum Design).
Experimental Results and Discussion
In Fig. 1, the unit cell volumes of (Pr1-ySmy)0.7Ca0.3CoO3 determined by X-ray diffraction at room temperature are plotted against y. In the determination of the lattice parameters, we have assumed that the systems are orthorhombic and the volume is estimated for the cell described by ~√2ap×2ap×√2ap (space group Pnma), ap being the lattice constant of the cubic perovskite cell, though it was difficult within the present experimental resolution to distinguish whether the samples with y≠0 are strictly orthorhombic or not. (Our previous neutron and X-ray Rietveld analyses indicate that the sample with y=0.0 has the orthorhombic unit cell. 17)) For the present system, the volume decreases with y, because the ionic radius of Sm3+ is smaller than that of Pr3+. 29) Figure 2 shows the electrical resistivities ρ of the (Pr1-ySmy)0.7Ca0.3CoO3 samples. The samples with y = 0.0 and 0.1 do not exhibit the transition. For these samples, the resistivities ρ increase gradually with decreasing T down to their Curie temperature (~65 K) and then exhibit a metallic T dependence below that temperature. The phase transition is induced by 20% doping of Sm onto the Pr sites, and the transition temperature Ts increases with increasing y. The resistivity ρ above Ts also increases with increasing y. However, the transition is gradually smeared out as y approaches unity.
In Fig. 3, the magnetic susceptibilities χ are plotted against T for the samples of (Pr1-ySmy)0.7Ca0.3CoO3. The data were taken under the condition of zero field cooling with an external magnetic field of H = 0.1 T. For y≤0.1, no structural transition is observed. With increasing y through 0.2, the transition appears, and χ decreases abruptly at Ts with decreasing T for relatively small values of y, indicating that the system undergoes a first order transition in which the spin state change from IS to LS with decreasing T is involved. As y approaches unity, the transition is smeared out. These results correspond well to those of the ρ measurements shown in Fig. 2.
In Fig. , and attributed it to the possible spin ordering of Co4+ ions. Here, we note that the magnetic susceptibility of the Co spins in the low temperature phase exhibits a Curie-Weiss type T dependence, χCo spin ∝ 1/(T−TC) with TC ~ 15 K, as shown later.
In Fig. 6, the Ts values of (Pr1-ySmy)0.7Ca0.3CoO3 are shown schematically against y, where the gradual change from the first order transition to the crossover-like one is indicated by the y dependent broadening of the transition region. Why is the transition smeared out in the region of y close to unity? We think that the answer can be found in the fact that the number of electrons nIS excited into the eg orbitals becomes small: because the average ionic radius of Pr1-ySmy decreases with increasing y, the volume of the CoO6 octahedra decreases, causing the increase of ∆c and δE and therefore the decrease of nIS. Due to this decrease of nIS, the IS→LS transition gradually loses its cooperative character and a crossover-like nature appears as y approaches unity.
The above argument is supported by the following analyses of the spin susceptibilities χCo spin of the Co moments. The T independent contribution χ0(y) has also been subtracted from the raw data. We note that the values of χ0(y) used here roughly follow the relation χ0(y) = -0.002 + 0.003y (emu/mol), though the relation may not have any significant physical meaning.
For (Pr1-ySmy)0.7Ca0.3CoO3, the energies δE of the IS state of Co3+ (Co4+) ions with respect to the LS ground level of Co3+ (Co4+) ions can be determined by fitting the calculated T dependence of 1/χCo spin to the experimentally obtained 1/χCo spin-T curve in the narrow T region just above Ts, with δE being the fitting parameter (the reason why the T range is limited to the narrow region is given later). In this calculation of χCo spin of (Pr1-ySmy)0.7Ca0.3CoO3 above Ts, the following method has been used: the spin susceptibility of Co3+ ions in the IS state with s=1 is taken to be proportional to the thermal population of the IS level determined by δE. The thin solid lines in Fig. 7(a) show the results of the fittings carried out just above Ts. The obtained δE values are shown in Fig. 7(b). Although our calculation of δE has been done with rather crude assumptions, the y dependent characteristics of δE are qualitatively reproduced. In the region of large y, the increase of 1/χCo spin found in the relatively high T region of Fig. 7(a) with decreasing T can be roughly understood, too, by the above calculation. We estimate here the number of Co ions excited to the IS state (or the electron number in the eg orbitals) at Ts+0, nIS, by using the δE values in Fig. 7(b) and by considering the orbital degeneracy of the levels. The results are shown in Fig. 7(c), where the solid line represents the number of Pr3+ ions per formula unit. In the region of relatively large y, nIS decreases with increasing y.
It is smaller than the number of Pr3+ ions. Due to this decrease of nIS, the transition seems to gradually lose its cooperative character and becomes (IS→LS) crossover-like as y approaches unity.
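The kind of fit described above can be sketched as a thermally activated Curie-Weiss term with δE as the only free parameter; the degeneracy factors, the Curie constant, and the units below are assumptions for illustration, not the authors' exact expression.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5            # Boltzmann constant, eV/K
T_CURIE = 15.0            # Curie-Weiss temperature quoted in the text, K
C_IS = 1.0                # Curie constant of the S = 1 (IS) state, emu K/mol (assumed)
NU_LS, NU_IS = 1.0, 3.0   # level degeneracies (illustrative values)

def chi_co_spin(T, dE):
    """Co spin susceptibility when an IS level lies dE (eV) above the LS ground
    state: Boltzmann occupation of the IS level times a Curie-Weiss term."""
    boltz = NU_IS * np.exp(-dE / (K_B * T))
    n_is = boltz / (NU_LS + boltz)
    return n_is * C_IS / (T - T_CURIE)

def fit_dE(T_data, inv_chi_data, dE_guess=0.05):
    """Fit 1/chi(T) in the narrow region just above Ts with dE as the free parameter."""
    popt, _ = curve_fit(lambda T, dE: 1.0 / chi_co_spin(T, dE),
                        T_data, inv_chi_data, p0=[dE_guess])
    return float(popt[0])
```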
We have also studied the x-dependence of the transport and magnetic properties for the samples of (Pr1-ySmy)1-xCaxCoO3 (y=0.2 and 0.3). Figure 10(a) shows the inverse spin susceptibilities of the Co moments, 1/χCo spin, for (Pr0.7Sm0.3)1-xCaxCoO3. The values of χCo spin are estimated by subtracting the contributions from Pr3+ and Sm3+ and the T independent one, whose magnitude is chosen to adjust the x dependent Curie constant or the slope of the 1/χCo spin-T curve in the low temperature region, in a similar way to the case of Fig. 7(a). The thin solid lines in Fig. 10(a) represent the results of the fittings to the observed 1/χCo spin-T curves just above Ts, with δE being the fitting parameter (the transition (or crossover) temperature Ts is roughly defined as the onset temperature of the increase of 1/χCo spin with decreasing T). Even though the fittings have been carried out only in the narrow region of T just above Ts, the calculations for relatively small x seem to describe the qualitative T dependence of 1/χCo spin in the rather wide T region above Ts. This is due to the fact that the Curie-Weiss temperatures TC, which appear in the calculation, are much smaller than the relevant temperatures (> Ts).
The electronic energies of the IS state of Co3+ (Co4+) in (Pr0.7Sm0.3)1-xCaxCoO3 obtained by the fittings are plotted against x in Fig. 10(b). They increase with decreasing x, which is consistent with the results of our previous paper. 17) In the region of small x values, the increase of 1/χCo spin with decreasing T found in the high T region seems to be understood without the phase transition. We have estimated nIS at Ts+0 by using the δE values in Fig. 10(b), and the results are shown in Fig. 10(c). The nIS value decreases with decreasing x and, as nIS decreases, the transition becomes crossover-like, losing the cooperative nature, as in the case shown in Fig. 6. Figure 12 shows the electrical resistivities ρ of the samples of Nd0.6Ca0.4CoO3 under various values of the applied pressure p. The resistivity ρ increases with increasing p, 17,21,30) due to the decrease of the electron number in the eg orbitals through the increase of ∆c and δE with increasing p. This system does not exhibit the phase transition up to 100 kbar. If the transition mechanism commonly exists in R1-xCaxCoO3 with various species of R, the application of such high pressure may induce the transition in Nd1-xCaxCoO3, as in the case of Pr1-xCaxCoO3, though the LS state is expected to be more stable in the system of (Nd, Ca) than in the system of (Pr, Ca), because the ionic radius of Nd3+ is slightly smaller than that of Pr3+.
We have extended the studies to the system of (Nd1-yTby)0.7Ca0.3CoO3.
Conclusion
The transport, thermal and magnetic properties of (Pr1-ySmy)1-xCaxCoO3 have been studied and the T-y and T-x phase diagrams have been constructed. By analyzing the magnetic susceptibility, the number of Co ions in the IS state (or the electron number in the eg orbitals), nIS, is roughly estimated at Ts+0. As y approaches unity or x approaches zero, nIS becomes small. As a result of this nIS reduction, the IS to LS change with decreasing T loses its cooperative nature and gradually becomes an (IS→LS) crossover-like one. We have not found any experimental evidence that indicates the existence of the first order phase transition in perovskite Co oxides without (Pr, Ca), which implies that the Pr 4f-O 2pπ hybridization may be relevant to the occurrence of the transition.

Fig. 10. The thin solid lines show the results calculated using the δE values shown in Fig. 10(b), considering the orbital degeneracy of the LS and IS states; the T independent susceptibility χ0(y) has been subtracted from the raw data.
hSef Inhibits PC-12 Cell Differentiation by Interfering with Ras-Mitogen-activated Protein Kinase (MAPK) Signaling
Growth factor signaling by receptor tyrosine kinases regulates several cell fates, such as proliferation and differentiation. Sef was genetically identified as a negative regulator of fibroblast growth factor (FGF) signaling. Using bioinformatic methods and rapid amplification of cDNA ends-PCR, we isolated both the mouse and the human Sef genes, which encoded the Sef protein and Sef-S isoform that was generated through alternative splicing. We provide evidence that the Sef gene products were located mainly on the cell membrane. Co-immunoprecipitation and immunostaining experiments indicate that hSef interacts with FGFR1 and FGFR2 but not FGFR3. Our results demonstrated that stably expressed hSef strongly inhibits FGF2- or nerve growth factor-induced PC-12 cell differentiation. The intracellular domain of hSef is necessary for the inhibitory effect on FGF2-induced PC-12 cell differentiation. Furthermore, our data suggested Sef exerted the negative effect on FGF2-induced PC-12 cell differentiation through the prevention of Ras-mitogen-activated protein kinase signaling, possibly functioning upstream of the Ras molecule. These findings suggest that Sef may play an important role in the regulation of PC-12 cell differentiation.
tual" open reading frames using the ab initio gene prediction program GENSCAN (27). Signal sequences were predicted by SIGNALP algorithm (28). Predication of the transmembrane domain was performed with TMPRED (29), an algorithm based on the statistical analysis of a transmembrane domain data base. Netphos2.0 was used to identify and score all possible cytoplasmic serine, threonine, and tyrosine phospho- hSef-S is shown as a truncated form, which lacks 144 amino acids at the N terminus compared with hSef. C, genomic structure of human Sef. Schematic representation of exons is marked, and spans nucleotides 427,937-498,840 (hSef) and 420,648 -506,133 (hSef-S) in NT_005787.8 from human chromosome 3p21. The hSef (long form) has 13 exons and hSef-S (short form) has 14 exons. The start and stop codons are indicated. rylation sites. To obtain a full-length cDNA, 5Ј-RACE PCR was performed using mRNA from normal human testis and 293T cells according to the SMART RACE cDNA amplification kit user manual (Clontech). Total RNA was extracted with TRIzol reagent kit (Invitrogen) and reverse-transcribed using an oligo(dT) primer and Superscript II kit (Invitrogen). The 5Ј-RACE PCR products were then cloned into pT-Adv vector according to the AdvanTage PCR cloning kit instructions and then sequenced.
Plasmid Construction-Full-length cDNA of hSef was cloned into pcDNA3.0 and pcDNA3.1/Myc-His via HindIII/XhoI sites. hSef-S and mSef-S were subcloned into the EcoRI/XhoI sites of pcDNA3.0 with a six-repeat Myc tag at the N terminus. The deletions and mutants of hSef were cloned into the pIRESneo expression vector by standard polymerase chain reaction. Mouse FGFR-1, -2, -3, and -4 constructs were donated by Dr. D. Ornitz. Elk-1 luciferase reporter plasmids and Sprouty4 were gifts from Dr. A. Yoshimura. The constitutively active MEK construct was donated by Dr. M. Cobb, and the active RasG12V construct was provided by Dr. T. Satoh.
Antibody Preparation-For bacterial production of hSef, an open reading frame coding for a peptide in the intracellular domain of hSef (M320-I457, 138 aa) was amplified by PCR and cloned into the pGEX4T-1 vector. The reading frame was sequence-confirmed after cloning. hSef was expressed in inclusion bodies in Escherichia coli, solubilized with 4 M guanidine HCl, and dialyzed against 50 mM sodium acetate buffer, pH 5, containing 0.1 M NaCl. Antisera were raised in rabbits by standard methods and used for immunoblot analysis after 1,000-10,000-fold dilution.
Immunofluorescent Staining and Microscopy-COS-7 cells were cultured on 6-well plates at 8 × 10^4 cells per well (Corning Incorporated, Corning, NY). Cells were cotransfected with mFGFR2 and hSef constructs using Effectene transfection reagent (Qiagen). hSef and mFGFR2 were stained with secondary antibodies conjugated with fluorescein isothiocyanate (green) and Cy3 (red), respectively. The cells were viewed using a Nikon inverted microscope ECLIPSE TE300. The colocalization of the two proteins is shown as a merged image.
Generation of Stably Transfected PC-12 Cell Clones-To establish PC-12 cell lines stably expressing hSef, PC-12 cells were transfected with the above constructs using Transfast transfection reagent (Promega). Forty-eight hours after transfection, cells were plated at several different dilutions in media containing 0.5 mg/ml G418. For the next 2 weeks, the selective media were replaced every 3 to 4 days. Once distinct "islands" of surviving cells were visualized, the individual clones were transferred into 96-well plates and maintained in selective media. The positive clones were confirmed by immunoblotting.
Western Blotting and Immunoprecipitation-The cells were lysed in lysis buffer containing 50 mM Tris, pH 7.6, 150 mM NaCl, 1% Nonidet P-40, and 1 mM sodium orthovanadate in the presence of protease inhibitors. The immune complexes were captured with protein A- or G-Sepharose, washed in lysis buffer, and resolved by SDS-PAGE. The proteins were transferred onto nitrocellulose membrane, and the membrane was blocked with 5% non-fat milk in TBST containing 0.1% Tween 20 overnight at 4°C. The indicated primary antibody was then incubated, followed by horseradish peroxidase-conjugated rabbit anti-sheep or anti-mouse secondary antibodies, and detected by chemiluminescence according to the manufacturer's instructions (ECL; Amersham Biosciences).
Luciferase Assay-The Elk-1 luciferase activity assay was performed using trans-reporting constructs, including PFA-Elk-1 and PFR-luciferase plasmids (PathDetect in vivo signal transduction pathway trans-reporting system; Stratagene), according to the manufacturer's instructions. The Elk-1 luciferase activity was measured using a luciferase assay system (Promega). The results were expressed as mean ± S.D. from three independent experiments.
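To illustrate the summary statistics used here, the following is a minimal, hypothetical Python sketch (not the authors' code) that normalizes firefly readings to a co-transfected Renilla control, as described for the reporter data in the legend to Fig. 4, and reports the mean ± S.D. over three independent experiments; all numbers and variable names are invented for the example.

```python
import statistics

def normalized_activity(firefly, renilla):
    """Normalize firefly luciferase counts to the Renilla co-transfection control."""
    return [f / r for f, r in zip(firefly, renilla)]

def summarize(replicates):
    """Return mean and sample standard deviation across independent experiments."""
    return statistics.mean(replicates), statistics.stdev(replicates)

# Hypothetical raw counts from three independent experiments (one condition).
firefly = [15200.0, 14100.0, 16050.0]
renilla = [980.0, 910.0, 1005.0]

ratios = normalized_activity(firefly, renilla)
mean, sd = summarize(ratios)
print(f"Elk-1 reporter activity: {mean:.2f} +/- {sd:.2f} (n = {len(ratios)})")
```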
Differentiation of PC-12 Cells-PC-12 cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum, 5% horse serum, and 4.5 g/liter glucose at 37°C under 5% CO2. The cells were plated at a subconfluent density on 12-well culture plates coated with poly-L-lysine to improve cell attachment. The next day, cells were transiently transfected with enhanced green fluorescent protein, wild-type hSef, or the mutants for 36 h using Effectene transfection reagent (Qiagen). Cells were stimulated with or without recombinant human FGF2 or NGF (R&D Systems) for 72 h and then examined by fluorescence microscopy. Cells with processes longer than 1.5 times the diameter of the cell body were considered to be positive for neurite outgrowth. The numbers of undifferentiated and differentiated cells were counted in three randomly selected fields containing ~200 cells each. Data were expressed as means ± S.D. of three independent counts.
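The scoring criterion above (a process longer than 1.5 times the cell-body diameter, tallied over three fields of roughly 200 cells each) can be captured in a few lines. The sketch below is a hypothetical illustration rather than the software actually used; the per-cell measurements are synthetic.

```python
import random
import statistics

random.seed(0)
NEURITE_RATIO_THRESHOLD = 1.5  # process must exceed 1.5 x cell-body diameter

def percent_differentiated(cells):
    """cells: iterable of (longest_process_um, body_diameter_um) tuples for one field."""
    cells = list(cells)
    positive = sum(1 for length, diameter in cells
                   if length > NEURITE_RATIO_THRESHOLD * diameter)
    return 100.0 * positive / len(cells)

def synthetic_field(n_cells=200, differentiated_fraction=0.6):
    """Generate a hypothetical field: 'differentiated' cells get long processes."""
    field = []
    for _ in range(n_cells):
        diameter = random.uniform(10.0, 15.0)                # um, illustrative
        if random.random() < differentiated_fraction:
            length = diameter * random.uniform(1.6, 4.0)     # above threshold
        else:
            length = diameter * random.uniform(0.2, 1.4)     # below threshold
        field.append((length, diameter))
    return field

per_field = [percent_differentiated(synthetic_field()) for _ in range(3)]
print(f"differentiated: {statistics.mean(per_field):.1f} +/- {statistics.stdev(per_field):.1f} %")
```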
RESULTS
Cloning and Primary Structure of hSef-To explore the existence of additional members of the IL-17 receptor gene family, we screened the National Center for Biotechnology Information expressed sequence tag and NR databases using the cytoplasmic domain of the IL-17 receptor with the tBlastn and Blastp algorithms. We found several expressed sequence tags encoding an unknown protein reported in GenBank as the hypothetical human protein DKFZp434N1928 (accession no. AL133097). However, the translated sequence was only a fragment without an N terminus. We performed 5′-RACE PCR using mRNA from human testis tissue and 293T cells and obtained two complete cDNAs that were 4477 and 4478 bp long. The 4477-bp cDNA predicted an open reading frame of 739 amino acids encoding a novel single transmembrane protein (Fig. 1A), whereas the other cDNA encoded a protein of 595 amino acids that lacked the N-terminal 144 amino acids of the longer protein (Fig. 1B). BLAST analysis revealed that the sequences were identical to the Sef gene of human and mouse (20,21). However, only the longer protein isoform had previously been reported. We isolated two isoforms of the gene in both human and mouse (GenBank accession numbers AF494208 & AF494211 and AF494210 & AF494209, respectively), and we adopt the name Sef for the long form and Sef-S for the short form in this article. hSef was mapped to human chromosome 3p21.1 with 13 exons and spanned 70,903 base pairs. hSef-S consisted of 14 exons and spanned 85,485 base pairs (Fig. 1C). mSef and mSef-S were located on chromosome 14 with 13 and 12 exons, and spanned 66,304 and 13,726 base pairs, respectively. mSef and mSef-S encoded proteins of 738 and 594 amino acids, respectively. hSef is 75% identical to mSef at the amino acid level, whereas hSef-S and mSef-S share 72% identity.
Computer-assisted analysis suggested that hSef contained a putative signal peptide of 16 amino acids, a 281-amino acid extracellular domain (Cys17-Pro297), a 23-amino acid transmembrane stretch (Ile298-Met320), and a 420-amino acid cytoplasmic tail (Cys321-Leu739). This protein was predicted to be a type I single-span transmembrane molecule according to the Hartmann membrane topology model and the PSORT II server. There were eight cysteine residues and nine potential N-linked glycosylation sites in the extracellular domain, where an immunoglobulin domain and a fibronectin III domain were also predicted.
Furthermore, a highly conserved segment (TPPPLRPRKVW) located proximal to the IL-17 receptor transmembrane domain was replaced in hSef by a proline-rich segment (PFHPPPLRYREP), a putative SH3 interaction domain, which is a typical feature of transactivation domains for transcription factors. In addition, a putative TIR (Toll/IL-1 receptor) domain (Val358-Lys424) (30) and a putative TRAF6 binding motif (Pro347-Leu351), Pro-X-Glu-X-X-(aromatic/acidic residue) (31), were predicted in the intracellular part of hSef.
Sef Interacted with FGFR-It has been reported that FGFRs are highly expressed in kidney tissue (32), where hSef is also abundant (data not shown). Based on the suggestion that zSef and mSef might interact with FGFRs (22), we reasoned that hSef could interact with FGFRs and possibly affect FGF signaling. To detect whether a physical interaction between the two receptors occurred, we carried out a co-immunoprecipitation assay by co-expressing the two proteins in COS-7 cells. We successfully precipitated FGFR1 (Fig. 2A) and FGFR2 (Fig. 2B) using anti-Sef serum, but failed to precipitate FGFR3 (data not shown), suggesting that Sef specifically interacted with FGFR1 and FGFR2 in intact cells. These results are consistent with the reported interactions between zebrafish Sef and Xenopus laevis FGFR1 or FGFR2 (21). They also imply that hSef might elicit an effect on FGFR signaling similar to that of Spred or Sprouty family members, which strongly inhibit FGFR signaling (13,14,33,34).
To further examine whether co-expression of the two receptors occurred in mammalian cells, we carried out an immunostaining assay with the anti-FGFR2 antibody and anti-hSef serum. The results demonstrated that overexpressed hSef and FGFR2 were co-localized in COS-7 cells (Fig. 2C). Interestingly, we found that hSef was co-localized with FGFR1 in normal human testis (Fig. 2D), whereas we failed to observe obvious co-expression of the two proteins in other tissues (data not shown). Taken together, these data suggested that hSef interacted with FGFR1 and FGFR2 under at least some physiological conditions.
hSef Strongly Inhibited FGF2- or NGF-induced PC-12 Cell Differentiation-Although hSef is able to interact with FGFR1 and FGFR2, the biological function of this protein in mammalian cells remains unclear. We next examined whether hSef could affect FGF signaling. We stably expressed hSef in PC-12 cells, a rat pheochromocytoma cell line that can be induced into sympathetic neuron-like cells possessing elongated neurites by basic fibroblast growth factor (FGF2) or NGF. As shown in Fig. 3A, mock-transfected PC-12 clones were induced into neurite outgrowth in the presence of FGF2 or NGF, as seen in normal PC-12 cells. In contrast, all clones transfected with hSef remained undifferentiated in response to either FGF2 or NGF, suggesting that hSef elicited an inhibitory effect on the differentiation of PC-12 cells (Fig. 3A).
FIG. 2. Interaction of hSef with mFGFR. A, co-immunoprecipitation of hSef and mFGFR1. COS-7 cells were transiently transfected with mFGFR1 and hSef constructs as indicated. Whole-cell lysates were immunoprecipitated (IP) with hSef rabbit anti-serum. After electrophoresis, blots were probed with rabbit polyclonal antibody to mFGFR1 (Flg). The expression of mFGFR1 and hSef was detected with rabbit polyclonal antibodies to mFGFR1 and to hSef using the whole transfected cell lysates. B, co-immunoprecipitation of hSef and mFGFR2. COS-7 cells were transiently transfected with mFGFR2 and hSef constructs as indicated. The procedure was the same as in A. C, co-localization of hSef with mFGFR2 in transfected COS-7 cells. COS-7 cells were cotransfected with mFGFR2 and hSef constructs using Effectene transfection reagent (Qiagen). hSef and mFGFR2 were stained with secondary antibodies conjugated with fluorescein isothiocyanate (green) and Cy3 (red), respectively. The co-localization of the two proteins is shown as a merged figure. D, endogenous colocalization of Sef with FGFR1 in some tissues. Triple immunofluorescence staining and confocal analyses for the endogenous colocalization of Sef with FGFR1 in normal human testis were performed according to the manufacturer's instructions. Sef staining was performed using rabbit polyclonal anti-Sef antibody followed by fluorescein isothiocyanate-labeled goat anti-rabbit IgG. FGFR1 staining was performed using mouse anti-FGFR1 antibody followed by Texas Red-labeled goat anti-mouse IgG. Nuclear staining was performed using DAPI. One obvious co-localization of Sef and FGFR1 in testis was observed.
FIG. 3. Effects of hSef on differentiation of PC-12 cells. A, hSef inhibits FGF2- and NGF-induced PC-12 cell differentiation. Mock-transfected or Sef stably transfected PC-12 clones were screened in selective medium containing 0.5 mg/ml G418. The G418-resistant clones expressing hSef or empty vector were transiently transfected with enhanced green fluorescent protein plasmid. The PC-12 cells were starved with serum-free
The percentages of cells with neurite outgrowth (differentiated cells) were significantly decreased for cells stably expressing hSef at both dosages of FGF2 or NGF (Fig. 3B). When cells were exposed to FGF2 or NGF for longer periods, the percentage of mock-transfected clones that were induced into neurite outgrowth increased significantly, whereas clones transfected with hSef resisted differentiation in the presence of FGF2 (Fig. 3C) or NGF (Fig. 3D). These results indicate that hSef could significantly inhibit the differentiation of PC-12 cells triggered by FGF2 or NGF, even at higher dosages or with prolonged exposure to growth factor.
To determine which domain of hSef was necessary for inhibition of PC-12 cell differentiation induced by FGF2, we constructed the N-terminal truncated mutant hSef(ΔN), which lacks the N-terminal extracellular domain of hSef; the C-terminal truncated mutant hSef(ΔC), which lacks the C-terminal intracellular domain of hSef; and a mutant hSef(DN), which lacks a motif (Glu327-Leu333) in the intracellular domain containing a putative tyrosine phosphorylation site. We overexpressed these mutants with enhanced green fluorescent protein in PC-12 cells in the presence or absence of FGF2. In control cells and cells overexpressing hSef(ΔC), FGF2 strongly induced cell differentiation. However, in the cells overexpressing hSef(WT), hSef(ΔN), and hSef(DN), FGF2 failed to induce significant cell differentiation (Fig. 3, E and F), suggesting that the N-terminal truncation and the (ΔGlu327-Leu333) deletion of the seven-amino acid putative tyrosine phosphorylation site did not affect the inhibitory properties of Sef on FGF2-induced PC-12 cell differentiation. These data indicated that the intracellular domain plays a critical role in the inhibition of PC-12 cell differentiation induced by FGF2. The results also suggest that hSef(DN), which lacks a tyrosine phosphorylation motif (Glu327-Leu333) in the intracellular domain of hSef, does not function as a dominant negative form.
hSef Inhibited the Ras-MAPK Signaling Pathway-It has been reported that Ras-MAPK signaling is required for FGF2-induced PC-12 cell differentiation. To investigate the role of hSef in MAPK activation during FGF2-induced PC-12 cell differentiation, we first determined the effects of Sef on Elk-1-mediated luciferase activity. The data showed that overexpression of hSef significantly suppressed FGF2-induced Elk-1 luciferase activity in PC-12 cells (Fig. 4A), which was comparable with the effect of Sprouty4 (14,33,34). Compared with Sef, the C-terminal truncated mutant hSef(ΔC) and the N-terminal truncated mutant hSef(ΔN) had about 23 and 81% of the inhibitory effect, respectively (Fig. 4A, columns 4 and 5). This result also correlated with the inhibitory effect of hSef on FGF2-induced PC-12 cell differentiation. Furthermore, our results showed that both hSef(WT) and hSef(ΔN) suppressed FGF2-dependent Elk-1 luciferase activity in PC-12 cells in a dose-dependent manner (Fig. 4, B and C).
Earlier studies had shown that activation of ERK1/2 was important for neurite outgrowth in PC-12 cells and that ERK1/2 phosphorylation was strongly but transiently induced by FGF2, with the level of phosphorylation reaching a maximum within 5-10 min and then declining to lower sustained levels (35,36). With this in mind, we examined whether hSef suppressed endogenous ERK1/2 activation induced by FGF2 in PC-12 cells. As shown in Fig. 4D, overexpression of hSef significantly suppressed endogenous ERK phosphorylation, with a maximum inhibition at 10 min after stimulation. In addition, hSef exhibited an inhibitory effect on ERK activation in a dose-dependent manner (Fig. 4E). These results indicate that hSef could inhibit FGF2-induced PC-12 cell differentiation, possibly through the inhibition of Ras-MAPK signaling.
hSef Inhibited Ras-MAPK Signaling, Possibly by Targeting the Upstream Molecules of Ras-To identify the signaling component of the Ras-MAPK pathway in FGF2-induced PC-12 differentiation that is suppressed by Sef, we used the constitutively active Ras(G12V) or active MEK (MEK1RF) constructs to examine ERK activation and Elk-1 luciferase activity in both PC-12 cells and 293T cells. Both luciferase assays and Western blotting showed that hSef had no inhibitory effect on the signaling mediated by constitutively active MEK (Fig. 5, A-C) or Ras (Fig. 5, D-F) in either cell type. These data suggest that the target signaling molecule of Sef is located upstream of Ras in the FGFR-Ras-MAPK signaling pathway, at least in PC-12 and 293T cells.
DISCUSSION
In our attempt to identify novel receptors in the IL-17 family (37-39), we screened GenBank using the intracellular domain of human IL-17AR and found a gene encoding novel single-span transmembrane proteins. Our results show that there are at least two isoforms of this gene in mouse and human. The long form of the sequence was identical to Sef (26), and we adopted the name Sef-S for the short form in this article. The specific functions of this gene were unclear. In this report, we provide evidence that hSef has an inhibitory effect on PC-12 differentiation induced by FGF2 or NGF. zSef and mSef, the zebrafish and mouse orthologues of the Sef gene, have recently been reported to be novel modulators of FGF signaling, and zSef might regulate Ras/MAPK signaling triggered by FGF during early zebrafish embryonic development (20,21). Our data demonstrate that hSef interacts and colocalizes with FGFRs in some human tissues. This interaction was confirmed by immunoprecipitation assays, which showed that hSef interacted with FGFR1 and FGFR2 but not FGFR3.
We examined the effect of Sef on growth factor-induced neurite outgrowth of PC-12 pheochromocytoma cells, which is dependent on Ras-MAPK signaling.
FIG. 4. A, inhibitory effects of hSef on the Ras-MAPK signal pathway. The Elk-1 luciferase activity assay was performed using trans-reporting constructs, including PFA-Elk-1 and PFR-luciferase plasmids (PathDetect in vivo signal transduction pathway trans-reporting system; Stratagene), according to the manufacturer's instructions. Equal amounts of the indicated plasmids were transiently co-transfected into PC-12 cells with Elk-1 reporter plasmids by Transfast transfection reagent (Promega). 24 h later, cells were starved with serum-free medium for another 24 h, stimulated with 20 ng/ml FGF2 for 6 h as indicated, and then lysed. The Elk-1 luciferase activity was measured using the Dual-Luciferase Reporter Assay System (Promega). Data were normalized by co-transfection with a Renilla reniformis luciferase reporter vector and expressed as the mean ± S.D. (n = 3). B and C, dose-dependent effects of hSef(WT) and N-terminal-truncated hSef(ΔN) on the Ras-MAPK signal pathway. Luciferase assays were carried out as described above. Data were expressed as the mean ± S.D. (n = 3). D, hSef and an equal amount of empty vector were transfected into PC-12 cells for 24 h. After serum starvation for another 24 h, cells were treated with (+) or without (−) 20 ng/ml FGF2 for the indicated times. Endogenous ERK1/2 activation was detected by immunoblotting with anti-p-ERK antibody (pp44 and pp42 forms; Santa Cruz Biotechnology). The blots were stripped and re-probed with anti-ERK antibody (p44 and p42 forms) to verify equal loading. Similar results were obtained in two independent experiments. E, increasing amounts of hSef plasmid were transiently transfected into PC-12 cells using Transfast transfection reagent (Promega). After serum starvation for 24 h, cells were treated with (+) or without (−) 20 ng/ml FGF2 for 10 min. Whole-cell lysates were transferred to nitrocellulose membrane. Endogenous ERK1/2 activation was detected with anti-p-ERK antibody (Santa Cruz). The blots were stripped and re-probed with anti-ERK antibody (Santa Cruz). The increasing expression of hSef was detected with the specific anti-serum to hSef.
FIG. 5. hSef interferes with Ras-MAPK signaling by acting on the upstream molecules of Ras. Constitutively active MEK (MEK1RF) (A-C) or constitutively active Ras (RasG12V) (D-F) constructs were transiently co-transfected into PC-12 cells or 293T cells with Elk-1 luciferase reporter plasmids or increasing amounts of hSef plasmid for 36 h. The luciferase activity was measured as described above. Data were normalized by co-transfection with R. reniformis luciferase reporter vector and expressed as the mean ± S.D. (n = 3). Additionally, ERK activation was detected by immunoblotting. Whole lysates of cells transfected with active MEK (C) or active Ras (F) were immunoblotted with anti-p-ERK, anti-ERK, and anti-Sef rabbit polyclonal serum, respectively.
The inhibitory effect of overexpressed hSef on FGF2-induced PC-12 cell differentiation was significantly correlated with the role of hSef in the prevention of Ras-MAPK activation. Interestingly, two reports showed that zSef was expressed in the mid-brain-hindbrain boundary, hindbrain, and forebrain (20,21). Another group, using in situ hybridization, demonstrated expression of mSef in the forebrain, mid-brain-hindbrain boundary region, and branchial arches in the mouse (25). Recently, a report demonstrated that hSef was detectable in human brain (26), which is consistent with our Northern blot analysis (data not shown). All of these data suggest that the Sef gene is expressed in the brain. In this article, we adopted PC-12 cells as a model to demonstrate that hSef inhibited cell differentiation induced by FGF or NGF. Therefore, we speculate that Sef might play a role in developmental processes in the nervous system.
Opposing results on the point of action of Sef on FGFR-Ras-MAPK signaling have been reported. One group has suggested that Sef acts at the FGFR level (21), whereas another has demonstrated that Sef functions downstream of Ras (20). As for Sprouty, the specific target molecule through which Sef interferes with Ras-MAPK signaling is unclear. Our results demonstrated that Sef suppressed Ras-MAPK activation, possibly functioning upstream of Ras, perhaps at the level of the FGFR. We have succeeded in detecting colocalization of endogenous Sef with FGFR1 in the testis and other tissues (data not shown). Sef might also function through other pathways if it can interact with other indirect signaling molecules in FGF signaling. Thus, the detailed molecular mechanism of the physiological regulation of FGFR signaling by Sef remains to be determined.
Taken together, our results show that Sef is an interesting and important molecule that significantly inhibits FGF2-induced PC-12 cell differentiation, probably through prevention of Ras/MAPK signaling. However, the precise in vivo functions of Sef and its splicing isoform remain unclear. The next critical goal is to disrupt the locus in mice to clarify these functions in vivo.
|
2018-04-03T04:28:48.688Z
|
2003-12-12T00:00:00.000
|
{
"year": 2003,
"sha1": "dc0f4fe0d8415968eaf7c616f86f64ee680c2415",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/278/50/50273.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "44284b63ae4e1f54844131c7e63392b6ce84d711",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
246822451
|
pes2o/s2orc
|
v3-fos-license
|
Accessing Polarized Fragmentation Functions at the Unpolarized EIC and BELLE Experiments
We briefly report our recent progress on the study of the polarized fragmentation functions of $\Lambda$ hyperon in unpolarized semi-inclusive deep-inelastic scatterings and electron-positron annihilations at low energies. In particular, we present a simple but practical method on how to measure the azimuthal-angle-dependent longitudinal polarization and the transverse polarization inside the production plane and bridge these observables to the corresponding structure functions and fragmentation functions at the leading twist. Our work diversifies the high energy reactions that can probe the polarized fragmentation functions.
Introduction
Fragmentation functions (FFs) are non-perturbative quantities that offer insight into the hadronization process of a high energy parton. The polarized FFs, with an additional dimension, can deliver more information about non-perturbative physics. However, the polarized FFs are in general less studied due to the scarcity of experimental data.
The situation has improved somewhat recently, after the Belle collaboration published the first measurement of Λ-hyperon transverse polarization in e⁺e⁻ annihilation [1]. The measurement is conducted as follows. In the center-of-mass frame of the incoming electron and positron, two almost back-to-back jets are produced. From one jet we measure a Λ hyperon, and from the other we measure a light hadron. The transverse polarization of Λ is measured along the normal direction of the production plane defined by the momenta of Λ and the light hadron. Readers familiar with the physical interpretations of leading-twist FFs may have already figured out that this experiment is designed to probe $D_{1T}^{\perp}(z, p_T)$. Several parameterizations [2][3][4] and model calculations [5] have been promptly carried out.
To probe the flavor dependence of $D_{1T}^{\perp}$, the Belle experiment measured the transverse polarization of Λ with different associated light hadrons. The philosophy of this measurement is that the light hadron can put a flavor tag on the parent quark. To our surprise, the difference between the polarizations in the Λ + π⁺ and Λ + π⁻ channels is distinct. The same goes for the Λ + K⁺ and Λ + K⁻ processes as well. The D'Alesio-Murgia-Zaccheddu (DMZ) [2] and Callos-Kang-Terry (CKT) [3] parameterizations introduce a significant isospin symmetry violation to describe the Belle data. Moreover, a model calculation [5] suggests that it cannot describe the data without a sea parton contribution under the constraint of SU(6) spin-flavor symmetry.
The isospin symmetry is a robust symmetry in QCD. A strong violation is not expected in the polarized FFs where the strong interaction dominates. Therefore, we explore the possibility to describe Belle data with an isospin symmetric parameterization for the D ⊥ 1T FF. We also propose to ultimately test the isospin symmetry in the unpolarized semi-inclusive deep inelastic scatterings (SIDIS) at large Bjorken x. With the ability to perform both ep and eA scatterings, the future EIC can easily accomplish this task.
Furthermore, we also propose other polarization observables in unpolarized SIDIS and low energy e⁺e⁻ annihilations, although this might seem counter-intuitive. These observables are azimuthal-angle dependent and therefore vanish in the 2π phase-space average. The simple method proposed in our work makes it possible to access the polarized FFs at the future electron-ion collider (EIC) or at the current Belle experiment. Our work diversifies the high energy processes that can probe the polarized FFs and is therefore a complementary study in the field.
This proceeding is organized as follows. In Sec. 2, we discuss the isospin symmetry of FFs and present an isospin symmetric parameterization that describes the Belle data well. In Sec. 3, we briefly demonstrate the idea of accessing the polarized FFs in unpolarized SIDIS and low energy e⁺e⁻ annihilation. In Sec. 4, we briefly discuss the possibility of probing the longitudinal spin transfer $G_{1L}$ and the longitudinal-to-transverse spin transfer $G_{1T}^{\perp}$ at Belle. We summarize in Sec. 5.
Isospin symmetry
The Belle data [1] show a distinct difference between the transverse polarizations of Λ in the Λ + π⁺ and Λ + π⁻ processes. A naive picture to interpret these data is that Λ + π⁺ favors the d → Λ and d̄ → π⁺ channel, since this is the only combination in which both Λ and π⁺ are produced from the favored (valence) parton. Using the same argument, we may find that Λ + π⁻ production favors the u → Λ and ū → π⁻ channel. Therefore, from this naive picture, we can get the impression that D ⊥Λ 1T s . Following this perspective, two isospin-asymmetric parameterizations [2,3] have been proposed recently and describe the Belle data quite well.
However, this naive picture is a good approximation only at large $z_\Lambda$ and large $z_h$. At small $z_\Lambda$ and $z_h$, the flavor components are far more complicated than what the aforementioned picture suggests. Therefore, we investigate the flavor components in each process and conclude that the difference between the transverse polarizations in the Λ + π⁺ and Λ + π⁻ processes can be easily explained by taking into account contributions from unfavored (sea) partons.
Using the Trento convention [6] for $D_{1T}^{\perp}$, we express the transverse polarization of Λ in terms of the polarized and unpolarized FFs, where $z_\Lambda$ is the longitudinal momentum fraction carried by Λ, $z_h$ is that carried by the light hadron, and $P_{\Lambda\perp}$ is the transverse momentum of Λ with respect to the light-hadron momentum. Here we have used a convolution over transverse momenta, with $p_{hT}$ and $p_T$ being the transverse momenta of the light hadron and Λ with respect to the parent partons. The weight $\omega_1$ is given by $\omega_1 \equiv -\hat{P}_{\Lambda\perp}\cdot p_T/(z_\Lambda M_\Lambda)$, where $\hat{P}_{\Lambda\perp}$ is the unit four-vector along the direction of $P_{\Lambda\perp}$.
To proceed, we assume that the z dependence and the $p_T$ dependence of the FFs factorize, with the $p_T$-dependent part taking a Gaussian ansatz of width $\Delta$, which is assumed to be flavor-independent for simplicity. This Gaussian assumption is a common practice in parameterizing TMD PDFs/FFs at the initial scale. The evolution of these TMD functions is governed by the well-known Collins-Soper equation given in Ref. [7]; therefore, the $p_T$ distributions of TMD PDFs/FFs usually deviate from a simple Gaussian at large scales. Nonetheless, we focus on $p_T$-integrated observables, since the number of data points is limited. The exact form of the $p_T$ distribution then no longer matters: it only contributes a constant factor, which can be absorbed into the overall normalization.
For the unpolarized FFs of Λ and the light hadron, we employ the DSV [8] and DEHSS [9] parameterizations. With these setups, we extract a parameterization of $D_{1T,i}^{\perp\Lambda}(z)$ under the constraint of isospin symmetry by fitting the Belle data, and we show the polarized and unpolarized FFs of Λ in Fig. 1. This isospin symmetric parameterization can also describe the Belle data well. Therefore, we demonstrate that the Belle experimental data do not automatically translate into an isospin symmetry violation in $D_{1T}^{\perp}$.
Furthermore, we propose a clean test of this point at the future EIC experiment [10]. At large x, the dominant contribution to the nucleon PDF comes from u and d quarks. At the future EIC, we can switch the target from a proton to a large nucleus and thereby tune the effective u and d quark PDFs. In light of this, we can easily conclude that there will be almost no difference between the transverse polarizations in ep and eA scatterings at large x. However, if the isospin symmetry is indeed strongly violated, as the DMZ and CKT parameterizations suggest, we shall observe an evident difference. The above argument relies solely on the dominance of the u and d quark PDFs at large x. The uncertainties in the polarized and unpolarized FFs can affect the exact value of the predicted transverse polarization, but will not undermine this qualitative conclusion.
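For concreteness, the factorized Gaussian ansatz invoked above can be written out explicitly. The following is our sketch of the assumed functional form for a parton flavor $q$; the normalization convention is our choice and is not fixed by the text:
$$ D_{1T}^{\perp\Lambda/q}(z, p_T) \;=\; D_{1T}^{\perp\Lambda/q}(z)\,\frac{e^{-p_T^{2}/\Delta}}{\pi\Delta}, \qquad \int d^{2}p_T\,\frac{e^{-p_T^{2}/\Delta}}{\pi\Delta} = 1 , $$
so that after the $p_T$ integration only an overall constant survives, which can be absorbed into the normalization as stated above.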
Longitudinal polarization in unpolarized SIDIS
This concept might be counter-intuitive to some readers. At first sight, it seems to violate parity conservation to measure a longitudinal polarization in unpolarized SIDIS. However, the physics behind this is quite simple. Although the parent nucleon is unpolarized, the partons inside it can still be transversely polarized thanks to the Boer-Mulders function. This transverse polarization can further propagate into azimuthal asymmetries and into the longitudinal and transverse polarizations of final-state hadrons through chiral-odd FFs. This enriches the observables that can be studied in this simplest of processes. In this proceeding, we use the longitudinal polarization as an example to demonstrate how to measure these observables. A more general discussion is given in Ref. [10].
From the kinematic analysis, we can easily find that the longitudinal polarization of Λ hyperons produced in unpolarized SIDIS takes a form with $\sin\phi$ and $\sin 2\phi$ modulations in the numerator and $\cos\phi$ and $\cos 2\phi$ modulations in the denominator, where the F's are scalar structure functions. Clearly, if we only select events where Λ hyperons are produced in the first and second quadrants, the $\sin 2\phi$ structure in the numerator does not contribute; the $\cos\phi$ and $\cos 2\phi$ contributions also disappear. Therefore, we can probe the $F_{UL}^{\sin\phi}$ structure function. Similarly, if we only select events where Λ hyperons are produced in the first and third quadrants, the $\sin\phi$, $\cos\phi$ and $\cos 2\phi$ structures vanish, and we can then probe the $F_{UL}^{\sin 2\phi}$ structure function. At the leading order and leading twist approximation, $F_{UL}^{\sin\phi} = 0$ and $F_{UL}^{\sin 2\phi}$ is related to the convolution of the Boer-Mulders function and the $H_{1L}^{\perp}$ FF. Assuming that the magnitude of $H_{1L}^{\perp}$ is roughly the same as that of $D_{1T}^{\perp}$, and using the Boer-Mulders function extracted from the Drell-Yan process, we estimate that the longitudinal polarization can be as large as a few percent and should be measurable at the future EIC in light of its high luminosity.
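To make the quadrant-selection recipe concrete, the following is a self-contained toy Monte Carlo (our illustration, not analysis code). It assumes an event-level longitudinal polarization of the simplified form $P_L(\phi) = a\sin\phi + b\sin 2\phi$, neglecting the $\cos\phi$ and $\cos 2\phi$ modulations in the denominator, and shows that averaging over quadrants 1+2 isolates the $\sin\phi$ amplitude while quadrants 1+3 isolate the $\sin 2\phi$ amplitude, up to the factor $\pi/2$ from the average of the retained modulation over the selected region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: event-level longitudinal polarization as a function of phi
# (denominator modulations neglected; amplitudes are hypothetical).
a_true, b_true = 0.03, 0.05

n_events = 1_000_000
phi = rng.uniform(0.0, 2.0 * np.pi, n_events)
# "Measured" polarization per event = true modulation + smearing.
p_long = a_true * np.sin(phi) + b_true * np.sin(2.0 * phi) + rng.normal(0.0, 0.2, n_events)

# Quadrants 1+2: phi in (0, pi).  sin(2phi), cos(phi), cos(2phi) average to zero there,
# while <sin(phi)> = 2/pi, so multiplying the mean by pi/2 recovers the amplitude.
q12 = phi < np.pi
a_est = (np.pi / 2.0) * p_long[q12].mean()

# Quadrants 1+3: phi in (0, pi/2) or (pi, 3pi/2).  Only sin(2phi) survives the average.
q13 = (phi < 0.5 * np.pi) | ((phi > np.pi) & (phi < 1.5 * np.pi))
b_est = (np.pi / 2.0) * p_long[q13].mean()

print(f"sin(phi)  amplitude: true {a_true:.3f}, quadrant estimate {a_est:.3f}")
print(f"sin(2phi) amplitude: true {b_true:.3f}, quadrant estimate {b_est:.3f}")
```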
Another approach is to measure the sin(φ)- or sin(2φ)-weighted polarization. It is straightforward to show that these two approaches are equivalent. Moreover, by measuring the azimuthal-angle-dependent transverse polarizations, we can study the chiral-odd $H_{1T}$ and $H_{1T}^{\perp}$ FFs. Following the same strategy, we can also access polarized chiral-odd FFs in low energy e⁺e⁻ annihilation by measuring two back-to-back hadrons. In this case, the Collins function of the reference light hadron plays the role of providing information on the transverse polarization of the parent quarks. Utilizing this method, we open new opportunities for probing polarized chiral-odd FFs.
Dihadron polarization correlation in e + e − annihilation
Similar ideas can be further extended to other observables as well. For instance, we can study the longitudinal spin transfer $G_{1L}$ and the longitudinal-to-transverse spin transfer $G_{1T}^{\perp}$ at Belle. The longitudinal spin transfer $G_{1L}$ describes the probability of producing a longitudinally polarized hadron from a longitudinally polarized parton. It has been studied at LEP by measuring the longitudinal polarization of the produced Λ/Λ̄ [11,12]. At LEP the hard interaction is dominated by the weak interaction, and therefore the quarks are longitudinally polarized. However, at low energy e⁺e⁻ colliders the hard interaction is dominated by the electromagnetic interaction. It is therefore very difficult, if not impossible, to probe the longitudinal spin transfer at Belle.
However, the longitudinal polarization of the quark and that of the antiquark produced in the same hard interaction are correlated. We then define the polarization correlation of two back-to-back Λ/Λ̄ hyperons, $PC_L(z_1, z_2)$, as the probability of same-sign polarization minus that of opposite-sign polarization. At the leading order and leading twist approximation, the dihadron polarization correlation is related to $G_{1L}^{q\to\Lambda/\bar\Lambda}(z_1, p_{T1}) \otimes G_{1L}^{\bar q\to\Lambda/\bar\Lambda}(z_2, p_{T2})$. Furthermore, it is straightforward to measure this observable in experiments as well. Using $\theta_i^*$ to denote the angle between the longitudinal direction and the momentum of the p/p̄ in the Λ/Λ̄ rest frame, we find
$$ \frac{dN}{d\cos\theta_1^*\, d\cos\theta_2^*} = 1 + \alpha P_L(z_1)\cos\theta_1^* + \alpha P_L(z_2)\cos\theta_2^* + \alpha^{2} PC_L(z_1, z_2)\cos\theta_1^*\cos\theta_2^* , $$
where α is the decay parameter, $P_L(z_i)$ is the longitudinal polarization of Λ/Λ̄ with momentum fraction $z_i$, and $PC_L(z_1, z_2)$ is the aforementioned dihadron polarization correlation. At low energy, $P_L(z_i) \approx 0$ while $PC_L(z_1, z_2) \neq 0$. Thus, we can probe the longitudinal spin transfer by measuring the $\cos\theta_1^*\cos\theta_2^*$ correlation in low energy e⁺e⁻ annihilation. Furthermore, we can study the longitudinal-to-transverse spin transfer, $G_{1T}^{\perp}$, by measuring the longitudinal-transverse or transverse-transverse polarization correlation of two back-to-back Λ/Λ̄ hyperons.
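As a worked consequence of the angular distribution above (our derivation, assuming uniform acceptance in both $\cos\theta_i^*$), the constant and single-polarization terms drop out of the correlation moment, leaving
$$ \langle \cos\theta_1^*\cos\theta_2^* \rangle \;=\; \frac{\alpha^{2}\,PC_L(z_1,z_2)\,\left(\tfrac{2}{3}\right)^{2}}{4} \;=\; \frac{\alpha^{2}}{9}\,PC_L(z_1,z_2) , $$
i.e. $PC_L(z_1,z_2) = 9\,\langle\cos\theta_1^*\cos\theta_2^*\rangle/\alpha^{2}$; for a ΛΛ̄ pair the product of the two decay parameters would take the place of $\alpha^{2}$.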
A similar idea has also been proposed in Ref. [13] recently. We refer interested readers to Ref. [13] for more details.
Summary
In this proceeding, we discuss the capacity to eventually test isospin symmetry of D ⊥Λ
|
2022-02-15T06:47:40.243Z
|
2022-02-14T00:00:00.000
|
{
"year": 2022,
"sha1": "7ce821588bdd2906e8f119561c200594e19bb3fb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7566/jpscp.37.020116",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "b455731e0420ade9473b59d38dbd182e628b0bc7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
231603742
|
pes2o/s2orc
|
v3-fos-license
|
Response to “How can WhatsApp facilitate the future of medical education and clinical practice?”
Dear Editor
We thank the authors for their correspondence related to our scoping review [1]. They propose expanding the use of an instant messaging application (IMA) in the undergraduate setting, specifically for problem-based learning (PBL) and during clinical student clerkships.
The authors state some orthodoxies about undergraduate medical learning – that it is opportunistic, experiential and is effective when it occurs in groups – before asserting that WA can enhance each of these elements of learning. We are in agreement with this. A majority of published literature suggests that, concerns about intrusiveness notwithstanding, WA makes learning more convenient, collaborative, inclusive and enjoyable. It promotes group learning, and enhances efficiency by streamlining the organisational aspects of learning. Importantly, it is also an effective educational tool. In our review, 9 studies evaluated the IMA in an undergraduate setting [2][3][4][5][6][7][8][9][10], 8 of which, including 3 randomised controlled trials [3,4,9], demonstrated a learning benefit.
If the authors hope to expand the role of an IMA in undergraduate education, further analysis of these 9 studies reveals some useful insights to guide their planning. Eight used an online moderator/facilitator and 7 used WA in a blended model of learning. Furthermore, 6 used the IMA in discrete learning modules rather than for broad curricular learning, all of which centred around basic science topics (histology, bacteriology, pharmacology, anatomy, virology and physiology). Current evidence therefore favours WA use for discrete topical blended learning, guided by a moderator, and remote from the bedside. This is not to state that it is of no value in undergraduate clinical learning; just that the current published research does not support its use in this setting.
The authors relate WhatsApp specifically to PBL and to clinical attachments. We draw their attention to a qualitative study included in our review by Raiman et al. evaluating the IMA in these settings [5]. This small study yields interesting insights into IMA learning, in particular the categorisation of the WA content into educational, organisational and social messages. Users also lamented the lack of face-to-face interaction with IMA learning. Decisions about the role of WA in learning activities therefore need to take account of how the platform will be used and what messages will be promoted therein.
The principal assertion of the authors that the role of WA should be more widely integrated into undergraduate medical education activities is a laudable one. Consistent with our instructional design model however, the objectives of WA as an educational tool need to be clearly articulated prior to the design and delivery of a new intervention [1]. Assuming that the purpose of the WA group is primarily educational, the following guiding questions may be useful to a new IMA education designer.
1. Will WA be used as a standalone or blended learning tool? As noted above, the strongest evidence supports a blended model of learning.
2. Will the discourse on the IMA be solely educational or will organisational and/or social messages be permitted? This centres on whether the core learning occurs through WA discussions, or whether WA is used to facilitate learning occurring elsewhere. For the former, online activities should be clearly mapped to predetermined learning objectives.
3. Will WA learning use a pre-defined curriculum or not? It appears to be more suited to discrete quanta of learning than to large curricula.
4. Will learner participation be voluntary or mandatory?
5. Will a moderator be required, and does he/she have experience in this online role? WA groups used primarily for educational purposes, especially those where core learning occurs through the online discussions, require a trained moderator.
6. Will the WA group discussion be used as a summative student assessment tool? Few data exist regarding the use of WA for formal undergraduate assessment [11]. Measuring participation in online discussions ignores the fact that effective passive learning can occur in an IMA group. Furthermore, active participation may be hindered by reticence and unfamiliarity with or poor access to the platform. Finally, problems posed on a WA group can effectively only be "answered" once, the solution thereafter being visible to all group members. We call this phenomenon "answer exposure", a shortcoming which denies some learners the opportunity to demonstrate their knowledge [8]. In our experience however, the IMA environment is indeed useful for formative assessment, clarifying errors and misunderstandings in a transparent and collaborative environment.
Finally, the authors correctly assert that the COVID-19 pandemic has inspired a reappraisal of the role of technology-enhanced learning (TEL) in medical education. Numerous reports highlight a plethora of eLearning strategies, described as a "medical education revolution" [12]. These strategies include learner management systems (Blackboard®, Fry-it®), videoconferencing (Zoom®, Teams®), lecture recording platforms (Panopto®), digital clinical placements and eSimulation strategies [12]. The additional appeal of using an IMA for information-sharing in an era of physical distancing seems obvious. Based on the paucity of medical education articles related to COVID-19 and IMA learning, however [13], it is possible that the pandemic has paradoxically driven TEL away from WA and towards more formal online learning tools. Of note, we also caution against overuse of mobile internet devices in an era of electronic patient records. The obvious concern is the perception of a doctor or medical student interacting more with their electronic devices than with the patient.
In summary, WhatsApp is a cheap, accessible learning resource which the authors suggest requires further integration into undergraduate education. In principle, we support an expansion of blended learning strategies in medical education but caution however against the use of an IMA without theory-driven instructional design considerations. We also encourage reflection on the use of other more formal and adaptable technologies to enhance learning. We encourage the authors to consider the use of our instructional design model to integrate IMA education into their practice.
|
2021-01-15T05:02:57.547Z
|
2021-01-14T00:00:00.000
|
{
"year": 2021,
"sha1": "318f46f70cd0280dce5ec13a33eb4ca68ae460e6",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-020-02455-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "318f46f70cd0280dce5ec13a33eb4ca68ae460e6",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
30433295
|
pes2o/s2orc
|
v3-fos-license
|
Methyl eucomate
The crystal structure of the title compound [systematic name: methyl 3-carboxy-3-hydroxy-3-(4-hydroxybenzyl)propanoate], C12H14O6, is stabilized by intermolecular O—H⋯O and C—H⋯O hydrogen bonds. The molecules are arranged in layers, parallel to (001), which are interconnected by the O—H⋯O hydrogen bonds.
There is no heavy atom with a significant anomalous dispersion contribution, so the absolute configuration from the diffraction pattern itself could not be determined. However, the absolute configuration of the eucomic acid has been established by synthesis (Heller & Tamm, 1974) though its crystal structure has not been determined. Therefore the title compound is expected to share the same R configuration at the chiral centre C8.
Experimental
The title compound was purified from the stems of Opuntia vulgaris according to the reported procedures (Jiang et al., 2002).
Briefly, the stems of Opuntia vulgaris (1 kg) were extracted with 95% ethanol at room temperature. The extract was concentrated with a rotary evaporator to afford a crude extract, which was suspended in distilled water and partitioned with petroleum ether, ethyl acetate and n-butanol. The n-butanol fraction was then subjected to silica gel column chromatography, eluted with a methanol-chloroform gradient solvent system, to afford the title compound (16 mg). Transparent rectangular crystals of the title compound with an average size of 0.50 × 0.40 × 0.30 mm were obtained by slow evaporation of a methanol solution at room temperature.
Refinement
Although all the hydrogen atoms were discernible in the difference electron density maps, they were placed in idealized positions and constrained during the refinement: hydroxyl O-H = 0.82 Å with Uiso(H) = 1.5Ueq(O); aryl C-H = 0.93 Å with Uiso(H) = 1.2Ueq(C); methylene C-H = 0.97 Å with Uiso(H) = 1.2Ueq(C); methyl C-H = 0.96 Å with Uiso(H) = 1.5Ueq(C).
There is no heavy atom with a significant anomalous dispersion contribution in the structure at the wavelength used, so the absolute configuration could not be determined from the diffraction pattern itself; 836 Friedel reflections were merged before the refinement. However, the absolute configuration of the related eucomic acid has been established previously (Heller & Tamm, 1974), and the title compound is therefore expected to share the same R configuration at the chiral centre C8.
Reflection (0 0 2) was omitted.
Fig. 1. The molecular structure of the title compound, showing 30% probability displacement ellipsoids and the atom-numbering scheme.
|
2014-10-01T00:00:00.000Z
|
2008-06-28T00:00:00.000
|
{
"year": 2008,
"sha1": "48a7e4f2d962fc3b07820d18c5a4c5e256b16035",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2008/07/00/fb2096/fb2096.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80551ee97f1087f5c6128bf3640580a868eb3769",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258846280
|
pes2o/s2orc
|
v3-fos-license
|
Goal-oriented representations in the human hippocampus during planning and navigation
Recent work in cognitive and systems neuroscience has suggested that the hippocampus might support planning, imagination, and navigation by forming cognitive maps that capture the abstract structure of physical spaces, tasks, and situations. Navigation involves disambiguating similar contexts, and the planning and execution of a sequence of decisions to reach a goal. Here, we examine hippocampal activity patterns in humans during a goal-directed navigation task to investigate how contextual and goal information are incorporated in the construction and execution of navigational plans. During planning, hippocampal pattern similarity is enhanced across routes that share a context and a goal. During navigation, we observe prospective activation in the hippocampus that reflects the retrieval of pattern information related to a key-decision point. These results suggest that, rather than simply representing overlapping associations or state transitions, hippocampal activity patterns are shaped by context and goals.
Every day, people need to plan and execute actions in order to get what they want. Spatial navigation, for instance, requires one to pull up a mental representation of the relationships between different places – i.e., a cognitive map 1 – and generate a plan for how to reach a goal. Tolman 1 proposed that cognitive maps enable behavioral flexibility so that the same underlying representation can be used to reach different goals. For example, if we wanted to navigate to the Tiger exhibit at the San Diego Zoo we might use the same map-like representation to find the Zebra exhibit.
Several lines of evidence suggest that the hippocampus plays a key role in navigation, though the precise nature of that role remains unclear. For example, based on findings showing that hippocampal place cells encode specific locations within a spatial context, many have argued that the hippocampus forms a cognitive map of physical space 2,3. It is now clear that the hippocampus also tracks distances in abstract state spaces [4][5][6], potentially supporting the broader idea that the hippocampus encodes a memory space 7 that maps the systematic relationships between any behaviorally relevant variables [8][9][10] (see 11,12 for alternative views).
Building on this idea, some have proposed that the hippocampus encodes a predictive map that specifies not only one's current location, but also states or locations that could be encountered in the future 9,13. For example, the successor representation 9,14, a popular computational implementation of the predictive map model, has been used to argue that the hippocampus represents each state in terms of its possible transitions to future states. This model demonstrates that an incremental learning process about state-to-state transitions, analogous to model-free learning about rewards, enables organisms to rapidly learn how a sequence of actions can lead to a desired outcome.
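To make the successor-representation idea concrete, here is a small illustrative sketch (ours, not taken from the cited work). For a fixed policy with one-step transition matrix T and discount γ, the successor representation is M = (I − γT)⁻¹, and the same matrix can be learned incrementally from experienced transitions with a TD-style update – the incremental state-to-state learning process described above. The toy state space, γ, and learning rate are arbitrary choices.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1

# Deterministic ring of states 0 -> 1 -> 2 -> 3 -> 4 -> 0 under the current policy.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s + 1) % n_states] = 1.0

# Closed-form successor representation: expected discounted future state occupancies.
M_closed = np.linalg.inv(np.eye(n_states) - gamma * T)

# Incremental (TD-style) estimate of the same matrix from experienced transitions.
M_td = np.zeros((n_states, n_states))
s = 0
for _ in range(20000):
    s_next = int(np.argmax(T[s]))                 # follow the deterministic policy
    one_hot = np.eye(n_states)[s]
    td_error = one_hot + gamma * M_td[s_next] - M_td[s]
    M_td[s] += alpha * td_error
    s = s_next

print(np.round(M_closed, 2))
print(np.round(M_td, 2))   # converges toward the closed-form SR
```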
Although numerous studies have investigated representations of abstract state spaces in the human hippocampus, two fundamental questions remain unanswered. One key issue concerns the role of context. Single-unit recording studies have reported that the spatial selectivity of place cells is context-specific – that is, the spatial selectivity of a given cell in one environment varies when an animal is moved to a different, but topographically similar, environment (2,[15][16][17][18][19]; see 20 for review). Just as one might pull up different cognitive maps for different physical contexts, it is reasonable to think that we might utilize context-specific maps of abstract state spaces. Computational models have been proposed to explain how the hippocampus might recognize contexts [21][22][23], but there is little empirical evidence showing whether or how the context in abstract spaces is encoded by the hippocampus.
A second key issue that has not been addressed is the importance of goals in hippocampal representations of abstract task states. Theories of state space representation by the hippocampus rely heavily on results from studies that examined activity in hippocampal place cells during random movements through an environment 18 . Accordingly, studies of abstract spaces in humans typically investigate incidental learning of stimulus dimensions or arbitrary state dynamics [24][25][26] . These kinds of passive, incidental learning tasks differ from those used by Tolman 1 to demonstrate that animals actively use a spatial representation to guide navigation to particular goal locations in an environment. If the human hippocampus forms an abstract cognitive or predictive map, one would expect to see such a representation during planning and navigation towards different goals in the same context.
Based on what is known from studies of spatial navigation, there is reason to think that hippocampal representations in the context of goal-directed navigation might fundamentally differ from what is seen during random or incidental behavior. For example, hippocampal place cells have differential firing fields during planning depending on the future goal of the animal [27][28][29][30] , and goal locations tend to be overrepresented 31,32 . Consistent with these findings, fMRI studies of spatial navigation have found that hippocampal activity is modulated by a participant's distance from a goal location 33,34 , and that hippocampal activity patterns during route planning carry information about prospective goal locations in a virtual space 35 . These findings suggest that hippocampal representations during planning or navigation in abstract state spaces might be influenced by goals. If this is indeed the case, it would potentially challenge models proposing that the hippocampus encodes a relatively static map of current 2 or possible future states 9 .
In the present study, we use functional magnetic resonance imaging (fMRI) to investigate how contexts and goals shape hippocampal representations during planning and navigation (Fig. 1). We devise a task in which participants are required to generate a plan and navigate through two abstract state-space contexts in order to reach a goal state. Critically, the contexts include the same stimuli, with different action relationships in each context. This allows us to examine the impact of context and goals during planning and navigation across perceptually similar sequences. We compare activity patterns elicited during planning of sequences that share a goal to those that had different goals, in order to disentangle the unique contribution of goal information on hippocampal activity patterns. Finally, we analyze the time course of hippocampal patterns while participants actively navigate during the task to examine if current and future states are reactivated in a way that is consistent with computational models of hippocampal function. We show that, during planning, hippocampal representations carry context-specific information about individual sequences towards a goal. Similarly, during navigation, we find prospective activation in the hippocampus that reflects the retrieval of pattern information related to a key decision point. Taken together, our results suggest that hippocampal activity patterns reflect integrated representations of sequences that lead to the same goal. Furthermore, our data support the notion that the hippocampus plays a phasic role in the activation of patterns that contain information about future states by prioritizing sub-goal information during active navigation.
Navigating an abstract spatiotemporal map
Prior to scanning, participants were trained to criterion (85% accuracy) to navigate to four goal animals in two distinct contexts that consisted of animals that were systematically linked in a deterministic sequence structure (see Methods). Each zoo context consisted of the same nine animals arranged in a plus maze topology, but the relationships between animals across the two zoos were mirror-reversed and then rotated counterclockwise by 90 degrees (Fig. 1a). At each animal, participants were able to make one of four button presses that allowed them to transition between animals. In the scanner, participants were asked to use their knowledge of the zoo contexts to actively navigate from a start animal to a goal animal (Fig. 1b), where start and goal animals were always at the ends of the maze arms. Each trial consisted of a planning phase and a navigation phase. During the planning phase, a cue indicated the start and goal animals. Next, during the navigation phase, participants saw the start animal alone before moving through a sequence of animals to reach the goal animal. For each animal, participants had to decide which direction in the plus maze to move to ultimately reach the goal animal. On any given trial, participants were only allowed four moves to navigate to the goal animal, and the interstimulus interval was fixed to ensure that an equal amount of time was spent at each state. In each zoo context, participants planned and navigated 12 distinct sequences (each repeated 4 times across 6 runs of scanning). In addition, one trial from each sequence was randomly chosen to end early at the rabbit (Catch Trials). This resulted in 72 sequences that could be analyzed (see Methods). Participants were highly accurate at navigating to the goal animal in each context (Context 1: Mean = 93.7%, SD = 12.9%, Context 2: Mean = 94.7%, SD = 12.2%), with no significant differences in accuracy between contexts (t(22) = 1.16, p = 0.26, d = 0.24, 95% CI [−0.027, 0.0076]). This suggests that participants had successfully formed distinct representations of each zoo context. We next tested whether participants' reaction times would be modulated by differences in the decision-making demands at different locations in the virtual maze. Specifically, our task was structured such that participants were required to initiate their navigation plan at the onset of the start animal (i.e., position one), and at position three – the center of the plus maze – they needed to choose the correct move in order to reach the goal. Accordingly, we expected reaction times (RTs) to be higher at these positions in the navigational sequence than at other positions. Consistent with this prediction, analyses with a linear mixed effects model revealed a significant effect of position (χ²; Fig. 1). This shows that decision-making demands at key locations, such as choice points, influenced participants' response time.
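A mixed-effects analysis of the kind described above can be sketched as follows; this is a hypothetical Python/statsmodels illustration (the paper does not specify its analysis software), assuming a long-format table with one row per correct response and hypothetical column names.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format table: one row per correct response during navigation,
# with columns "subject", "position" (1-4 within the sequence), and "rt" (ms).
df = pd.read_csv("navigation_rts.csv")  # assumed file; not provided by the paper

# Random intercept per participant, fixed effect of sequence position.  Maximum
# likelihood fits so the two models can be compared with a likelihood-ratio test.
full = smf.mixedlm("rt ~ C(position)", data=df, groups=df["subject"]).fit(reml=False)
null = smf.mixedlm("rt ~ 1", data=df, groups=df["subject"]).fit(reml=False)

lr = 2.0 * (full.llf - null.llf)
dof = 3  # four positions -> three dummy-coded fixed effects
print(full.summary())
print(f"LR chi-square({dof}) = {lr:.2f}, p = {stats.chi2.sf(lr, dof):.4f}")
```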
Hippocampus is sensitive to context-specific sequences in abstract spaces
During the planning phase (i.e., when participants were viewing the cues), we expected that participants should retrieve information about the sequence of state-action pairs that led from the start animal to the goal animal. Our first analyses targeted the extent to which hippocampal activity patterns carried information about the context and the planned sequence. To address this question, we extracted hippocampal multi-voxel activity patterns on each cue trial and calculated pattern similarity (Pearson's r) between trial pairs that came from repetitions of the same sequence cue in the same context, and compared those to trial pairs for sequence cues with different start or end points, and to trial pairs for sequence cues that came from the same or different context (Fig. 2a). Importantly, visual information was shared across contexts as the cue only indicated the start and goal animal, not the context, and the same cue was associated with different moves between contexts. In addition, only trials in which participants subsequently made the correct moves towards the goal were included in neural analyses.
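The pairwise comparison logic described above can be illustrated with a short sketch. This is a minimal, illustrative example assuming a trials-by-voxels matrix of single-trial hippocampal beta estimates and per-trial labels for run, context, start, and goal; all variable and field names are hypothetical and not taken from the authors' code.

```python
import numpy as np
from itertools import combinations

def cue_pattern_similarity(patterns, trials):
    """Group between-run Pearson correlations of cue patterns by condition.

    patterns: (n_trials, n_voxels) array of single-trial betas from the ROI.
    trials:   list of dicts with hypothetical keys 'run', 'context', 'start', 'goal'.
    """
    sims = {"same_seq_same_ctx": [], "same_seq_diff_ctx": [],
            "diff_seq_same_ctx": [], "diff_seq_diff_ctx": []}
    for i, j in combinations(range(len(trials)), 2):
        a, b = trials[i], trials[j]
        if a["run"] == b["run"]:
            continue  # between-run pairs only, to avoid autocorrelated noise
        r = np.corrcoef(patterns[i], patterns[j])[0, 1]
        same_seq = a["start"] == b["start"] and a["goal"] == b["goal"]
        same_ctx = a["context"] == b["context"]
        key = ("same" if same_seq else "diff") + "_seq_" + \
              ("same" if same_ctx else "diff") + "_ctx"
        sims[key].append(r)
    return {k: float(np.mean(v)) for k, v in sims.items()}
```

The per-condition averages produced this way would then be entered into the mixed-effects models described below.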
To test whether hippocampal activity patterns carried information about the context and the planned sequence, we used a linear mixed effects model 36 with fixed effects of context (same/different) and sequence (same/different), and a random intercept for participants (see Methods for model selection details and Eq. 2) to predict pattern similarity in the hippocampus. We reasoned that, during planning, participants retrieved information about the sequence of states and actions needed to reach the goal. Therefore, we predicted that pattern similarity should be higher for sequences that shared the same state-action pairs. Moreover, we predicted that this effect should be context-specific, as the same sequence has different state-action pairs in each context. Consistent with this prediction, we found a significant sequence by context interaction (Fig. 2b).
Hippocampal activity patterns reflect future goals during planning
The above analysis demonstrates that hippocampal activity patterns carry context-specific information about planned sequences, but there are reasons to think that hippocampal sequence representations might become more similar under certain circumstances. For instance, if the hippocampus uses predictive maps that carry information about possible future states 9, one might expect similar representations of diverging sequences that share the same starting point but lead to different goals, because the immediate state-action pairs that follow the start state are weighted most heavily (see Methods for successor representation simulation details and Supplemental Fig. 1). On the other hand, it is possible that goals are more heavily weighted during planning 37, in which case we might expect similar representations of converging sequences that lead to the same goal but start at different states. We sought to test these ideas by comparing pattern similarity during cues associated with repetitions of the same sequence, cues associated with converging sequences that shared the same goal state, cues associated with diverging sequences that shared the same start state, and cues associated with sequences that had different start and different goal states (Diff. Start Diff. Goal) (Fig. 2a).
A linear mixed effects model with fixed effects for overlap (same sequence/converging/diverging/diff start + diff goal) and context (same/different) and a random intercept for participant (see Methods for model selection details and Eq. 3) showed a significant context by overlap interaction (χ2(3, N = 23) = 14.75, p = 0.002, η2p = 0.09, 95% CI [0.01, 0.17]) (Fig. 2c, d). Follow-up tests investigating this significant interaction revealed that, within a context, cues with converging goals had significantly higher pattern similarity than cues with diverging goals (z = 2.19, p = 0.03, d = 0.23). In sum, these results show that during planning, representations in the hippocampus are differentiated based on future context-specific goals. This suggests that goals may fundamentally shape representations in hippocampus via shared patterns between sequences that lead to the same goal.
Differences in pattern information during the cue period cannot be explained by shared motor plans or sensory details
The present results are consistent with the idea that the hippocampus supports the planning of state-action sequences toward a goal. Importantly, our cues were carefully controlled, such that participants viewed visually identical stimuli across contexts and did not make responses during the planning phase. However, it is possible that low-level visual representations could be modulated by context 38. To verify that visual regions did not show any effect of context, we ran a control analysis on an anatomically defined visual cortex ROI (V1/V2). To do this, we compared pattern similarity between cues of the same sequence, cues that had different starting items but the same goal, cues that had the same starting item but diverged to a different goal, and cues that shared neither the start nor the goal. This analysis is identical to the overlap analysis run on hippocampus above (see Methods and Eq. 3 for model details). We found that this visual cortex ROI was only sensitive to visual information. Having verified that low-level visual information was not modulated by context, we next turned to representations of motor actions during planning. It is conceivable that, during planning, the pattern of results in hippocampus could be driven by overlap in planned movements between converging vs. diverging sequences. To ensure that context effects observed in hippocampus were not due to shared motor information during planning, we examined trial pairs that had the exact same moves, trial pairs that had two moves in common, and pairs that had no moves in common, to test whether movement information alone was modulated by context in the hippocampus. Results showed no effect of planned moves or context on pattern similarity, indicating that the hippocampal context effects cannot be explained by the action sequence required to execute a plan. Altogether, these analyses provide an important control and bolster our interpretation of the findings from our analyses of the hippocampus, by showing that primary sensory areas activate behaviorally relevant representations during planning, but that the effects of context and goal are only present in hippocampus.
Representation of behaviorally relevant sequence positions during navigation
Having established that the hippocampus represents information about context-specific goals during planning, our next analyses turned to how state-action information is dynamically represented during navigation. Available evidence suggests at least three ways that navigationally-relevant information might be represented by the hippocampus. Based on classic studies of place cells, we might expect the hippocampus to represent the current state as participants navigated toward the goal. Alternatively, based on predictive map models 9 , we could expect that the hippocampus would represent not only the current state but also future states.
A third possibility is that the hippocampus might preferentially represent goal-relevant information during navigation. In our study, the most behaviorally significant points in a navigated sequence were the starting point (position 1), when a goal-directed plan must be initiated, and the center of the maze (position 3), a critical sub-goal where one's decision will determine the ultimate trial outcome. This was confirmed by our behavioral analyses that revealed that participants were slower to respond at positions 1 and 3 (Fig. 1). We therefore reasoned that participants might be likely to prospectively retrieve hippocampal representations of these states during navigation.
To test this prediction, we examined pattern similarity differences during navigation across converging and diverging sequences in the same zoo context. Converging and diverging sequences were chosen because these sequences have an equal number of overlapping states, but the timing of the overlap is systematically different. Both the current state account and standard predictive map models would suggest that pattern similarity during navigation should reflect this pure overlap: early in a sequence there should be higher pattern similarity across pairs of diverging sequence trials, and late in a sequence there should be higher pattern similarity across pairs of converging sequence trials. In contrast, a goal-based account would predict that pattern similarity could reflect prospective coding of goal-relevant information 35,39, which should be higher across converging sequences (which share the same upcoming goal), relative to diverging sequences (which overlap in early states but lead to different goals).
We used a timepoint-by-timepoint pattern similarity analysis approach that enabled us to examine information in multivoxel activity patterns about current, past, and future states to test our key hypotheses. This technique is conceptually similar to cross-temporal generalization techniques used in pattern classification analyses 40. First, we extracted the time-series for each navigation sequence using a variant of single trial modeling that utilizes finite impulse response (FIR) functions 41, allowing us to examine activity patterns for each time point (TR) as participants navigated through the sequence of items. Importantly, incorrect trials were excluded from this analysis. As depicted in Fig. 3, we quantified pattern similarity between pairs of navigation sequences (e.g., the zebra-to-tiger sequence compared to the camel-to-tiger sequence) at different timepoints (e.g., TR 1 to TR 10), which yielded a timepoint-by-timepoint similarity matrix for each condition (converging or diverging sequences). The diagonal elements of this matrix reflect the similarity between pairs of animal items from the same timepoint in the sequence. Off-diagonal elements reflect the similarity between an animal at one timepoint in the sequence and animal items at other timepoints in the sequence. Separate timepoint-by-timepoint correlation matrices (Pearson's r) were created for pairs of converging sequence trials and pairs of diverging sequence trials. We next computed a difference matrix and tested for statistically significant differences between converging and diverging sequences, correcting for multiple comparisons using cluster-based permutation tests (10,000 permutations; see Methods for more details; for individual subject contrast maps see Supplemental Fig. 3).
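As a rough illustration of this analysis, the sketch below correlates every TR pattern of one navigation trial with every TR pattern of a paired trial and symmetrizes the result as described in the Methods ((X^T + X)/2). It is a minimal example assuming (n_TRs, n_voxels) arrays of FIR-derived patterns; the actual analysis also handles condition averaging, between-run pairing, and hemodynamic lag.

```python
import numpy as np

def timepoint_similarity(ts_a, ts_b):
    """TR-by-TR Pearson correlation matrix between two navigation trials.

    ts_a, ts_b: (n_TRs, n_voxels) FIR-based activity patterns for a trial pair
    (e.g., two converging sequences from different runs).
    """
    n_trs = ts_a.shape[0]
    sim = np.empty((n_trs, n_trs))
    for t1 in range(n_trs):
        for t2 in range(n_trs):
            sim[t1, t2] = np.corrcoef(ts_a[t1], ts_b[t2])[0, 1]
    # Symmetrize across the diagonal, as in the Methods: (X^T + X) / 2
    return (sim.T + sim) / 2

# Per-pair matrices are then averaged within condition (converging vs. diverging)
# and the resulting difference matrix is submitted to cluster-based permutation tests.
```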
As noted above, diverging sequences have overlapping states early in the sequence, and converging sequences have overlapping states late in the sequence. If the hippocampus represents only current states, we would expect to see pattern similarity differences between converging and diverging sequences close to the diagonal of the timepoint-by-timepoint matrices; that is, we would expect higher pattern similarity for diverging pairs during timepoints early in the sequence and higher pattern similarity for converging pairs during timepoints late in the sequence. If the hippocampus represents current and temporally contiguous states, as suggested by predictive map models, we would expect higher pattern similarity for diverging sequences at early positions, both on- and off-diagonal, and higher pattern similarity for converging sequences at late positions, both on- and off-diagonal. Finally, if the hippocampus preferentially represents goal-relevant information during navigation 37,39, we would expect to see higher off-diagonal pattern similarity only for converging sequences, because only converging sequences share the same goal. Specifically, we expected higher off-diagonal pattern similarity between goal states and earlier positions in the sequences.
Consistent with the prospective representation of goal-relevant states in the hippocampus, we found several clusters showing higher similarity for converging compared to diverging sequences (Fig. 4, Supplemental Figs. 5-7). Interestingly, there was a significant off-diagonal cluster (outlined in red: t(22) = 3.34, p = 0.038, d = 0.27, 95% CI [0.0031, 0.013], maximum cluster corrected) that roughly corresponds to the activation of the decision point (position 3) when participants were at position 1 (approx. TRs 10-15). Other clusters tended to overlap with key locations in the experiment, which roughly correspond to position one activating position five (TRs 18 to 21) and position three activating position five (TRs 18 to 20) (Fig. 4D), although these clusters did not survive multiple comparison correction. These data are consistent with the idea that information about position 3 was preferentially activated in converging sequences, in which the same key decision was required to navigate to the same goal.
Discussion
The aim of the present study was to identify how the hippocampus represents task information during planning and navigation towards a behavioral goal. During planning, we show that hippocampal representations carried context-specific information about individual sequences to a goal. Surprisingly, not all sequences were equally differentiated, such that sequences that converged on a common goal showed higher pattern similarity compared to diverging sequences, despite an equal amount of overlap between the conditions. Similarly, during navigation, we found that the hippocampus prospectively activated goal-specific representations of the key decision point. Taken together, our results suggest that the hippocampus forms integrated representations of sequences that lead to the same goal. Furthermore, they support the notion that the hippocampus plays a phasic role in the activation of patterns that contain information about future states and prioritizes sub-goal information during active navigation. In summary, our data are consistent with the idea that rather than simply representing overlapping associations, hippocampal representations are shaped by context and goals.
The hippocampus represents context-specific goal information during planning
A key finding from the present study is that, during planning, hippocampal activity patterns are organized such that they either generalize or differentiate between sequences depending on the goal, and do so in a context-specific manner. These findings are relevant to theories which propose that prospective thought (prediction/planning) relies on the same circuitry used for episodic memory [42][43][44]. In support of this idea, place cells fire in a sequence that represents the path that an animal will take in a phenomenon described as forward replay 45.
Our findings suggest an important constraint on the role of the hippocampus in imagination and simulation. In our study, if participants simulated the sequence of sensory events that led to the goal (i.e., imagining the sequence of animals), we would expect hippocampal representations to generalize across repetitions of the same sequence of animals across the two different zoo contexts. Instead, we found that hippocampal representations during planning were context specific, such that pairs of trials involving the same sequence of animals across different contexts were indistinguishable from entirely different sequences. Moreover, similarity across different sequences that led to the same goal in the same zoo context was indistinguishable from similarity across repetitions of the same sequence in the same context. Thus, in our study, hippocampal activity most likely did not reflect the imagination of a sequence of stimuli per se, or even a specific sequence of states, but rather a context-specific representation of behaviorally relevant points to achieve a goal.
Together with prior research, our results are relevant to an emerging body of work suggesting that goals and other salient locations have a measurable impact on spatial and non-spatial maps in the brain 19,[48][49][50][51] . For example, McKenzie et al. 19 found that rewarded events had higher pattern similarity within a context compared to unrewarded events. Moreover, there is evidence that, after learning in a reward-based foraging task, place cells tend to be clustered around goal locations 31,32 . This could go some way towards explaining our results of increased pattern similarity for sequences that converge on the same goal. Considering the current work and past findings, we propose that hippocampal representations are flexibly modulated depending on current behavioral demands, incorporating trial-specific information that allows organisms to realize a specific goal 52 .
Our findings are also relevant to past work showing that the hippocampus represents information about specific sequences of objects 25,[53][54][55][56][57] . Studies examining how the brain represents routes with multiple paths or that are hierarchical in nature show that activity in the hippocampus is higher when planning and navigating an overlapping route and that, during navigation, the univariate bold signal is modulated by distance to a goal [58][59][60] . In one study, Chanales et al. 60 show that representations of overlapping spatial routes become dissimilar over learning. This is potentially at odds with the current findings, where we find that routes that overlap in their goal show higher pattern similarity compared to routes that do not share a goal. However, participants in Chanales et al. passively viewed pictures along routes, whereas participants in our task actively navigated. As mentioned earlier, rodent studies suggest that hippocampal spatial coding can shift dramatically between goal-directed behavior and random foraging in the same context. Moreover, in Chanales et al. it would make sense for participants to differentiate overlapping routes because they did not include sequences that converged on the same goal. Thus, it would be optimal to learn a unique representation for each spatial route in order to predict the outcome. In contrast, in our experiment, all trials that converged on the same goal required the same key decision at position 3, regardless of the starting point. In this situation, it is optimal to learn a representation that captures the information that is common to any sequence that converges on the same goal. For example, as depicted in Fig. 1, any trial with a tiger as the goal animal will require participants to choose the down button at position 3. In the next section, we explain why results from the navigation period are also consistent with this interpretation.
Reinstatement of remote timepoints in the hippocampus during navigation
If the hippocampus supports prospective planning for goal-directed navigation, then it is important to understand how it functions when such actions are taken when navigating abstract spaces. For example, if the hippocampus is involved in retrieving the specific state-action plan, what is its function once this plan is executed? To address this question, we contrasted pattern similarity during the navigation phase across pairs of converging sequences against pairs of diverging sequences.
As noted above, the animals in the first three positions overlapped across diverging sequences, whereas the animals in the last three positions overlapped across converging sequences. Thus, if the hippocampus only represented the current state during navigation, we would have expected pattern similarity on the diagonal in Fig. 4 to be higher for diverging trials at early time points, and then higher for converging trials at later time points (see also Supplemental Fig. 4). If participants solely retrieved past states during navigation, we would expect off-diagonal pattern similarity to be higher for diverging sequences than converging sequences (because the first three positions were common for the diverging sequences). Our data were inconsistent with both of these accounts. Instead, we found that off-diagonal pattern similarity was higher for converging than for diverging trial pairs, suggesting that hippocampal activity patterns carried information about future timepoints during navigation.
The significant cluster of increased pattern similarity for converging, relative to diverging, sequences was consistent with the interpretation that, at the outset of the navigation phase, participants prospectively activated a representation of position 3. This result is notable for two reasons. First, participants were engaged in active, self-initiated navigation, and as such, we would expect considerable variability in the timing of prospective coding across trials and across participants. The fact that prospective coding of position 3 (as indicated by off-diagonal pattern similarity) was nonetheless reliable across participants attests to the significance of this position to successful task performance. Second, the finding is notable because the stimulus at position 3 is exactly the same for all trials in all contexts. Thus, the disproportionate representation of position 3 across convergent sequences could not solely reflect the identity of the stimulus itself.
As noted above, the correct decision to be made at position 3 depends on one's current goal and context. All converging sequences share the same decision at position 3 because they share the same goal, whereas diverging sequences are associated with different decisions at position 3 because they involve different goal states. These results are consistent with the idea that participants prospectively activated the most goal-relevant information in the upcoming sequence, namely the context-and goal-appropriate decision at position 3.
Consistent with our study, research in rodents shows that hippocampal ensemble activity differs between routes that share a common path but lead to a different goal [28][29][30]61,62 . There are also findings that demonstrate predictive hippocampal representations that are related to future behavior in both spatial and non-spatial tasks 24,63 . Our data, however, suggest that, during goal-directed behavior, the human hippocampus does not solely reflect the current state during navigation, or only the immediate future, but rather that it emphasizes strategically important states for reinstatement during ongoing behavior. Our results align with computational models that show that place cells associated with behaviorally relevant locations in an environment are preferentially incorporated into replay events 37 .
Relevance to models of hippocampal state space representation
Several models of hippocampal contributions to spatial navigation and memory propose that the hippocampus generates predictions of upcoming states 64 . For instance, a specific computational implementation of a predictive map model, the successor representation, states that the hippocampus is involved in learning relationships between states and actions, and that its representations reflect expectations about future locations 9,65 . We used a standard version of this computational model to generate simulated pattern similarity results, and surprisingly, these simulated matrices were qualitatively different from what we observed in the hippocampus.
In our simulations (see Supplemental Materials), a classical version of the successor representation reflected the transition probabilities between states, such that adjacent states were more similar than non-adjacent states. Because participants transitioned between all start and end positions equally in both directions, the model could not reproduce the difference between converging and diverging sequences either during the planning or navigation phases. It is possible that, in the relatively small and deterministic state space used in our task, it is not advantageous to represent an elaborate transition structure. An alternative approach to account for the present results would be to use a model that places heavier emphasis on context instead of only the next item or next decision. One such model, the clone-structured cognitive graph model 23, is able to learn clones of similar observations that are distinguished by the current context. We predict that models that take context and goals into account, like the model presented in George et al., will be better able to capture the nuances of our task.
Alternatively, it might be advantageous to focus on models that incorporate an inductive bias to specifically focus on the most goal-relevant aspects of state space (e.g., the goal, context, and decision at P3). In many situations, an agent with an appropriate understanding of task structure could benefit by deploying the hippocampus more strategically, by preferentially encoding and prospectively retrieving memories for key decision points towards a goal 12. One example of a computational model that relies on strategic deployment of past experience comes from Lu, Hasson, and Norman 66. Their simulations showed that it was computationally advantageous to prioritize hippocampal encoding and retrieval of temporally extended events at event boundaries, which are moments of high uncertainty or prediction error. In our task, inductive biases carried out through such a computational framework could emphasize the goal and key decision point, rather than passively representing all possible state transitions.
We hypothesize that hippocampal representations of physical space and abstract state spaces are flexible, reflecting the computational demands of the planning problem, and the participant's understanding of, and experience with, the problem 52,67 . In the present study, the task might have encouraged a model-based planning strategy in which future goals and key states are strategically retrieved and represented in hippocampus. In cases where learning is passive and incidental to the task, or when transitions between states change unpredictably, hippocampal state spaces might instead resemble successor-based maps. Finally, in more complex tasks, participants might adopt different strategies with varying degrees of emphasis on goal-relevant information 68 .
Human behavior is characterized by the ability to plan and flexibly navigate decision spaces in order to realize future goals. The present findings suggest that the hippocampus represents context-specific, goal-oriented representations during navigation. These findings may contribute to the development of unified models accounting for hippocampal contributions to memory, navigation, and goal-directed sequential decision-making [69][70][71] . Additionally, this work highlights the importance of studying goal-directed behavior, attentional modulation of memory representations, and their impact on planning.
Methods

Participants
Thirty healthy English-speaking individuals participated in the fMRI study. All participants had normal or corrected-to-normal vision. Written informed consent was obtained from each participant before the experiment, and the Institutional Review Board at the University of California, Davis approved the study. Participants were compensated with an Amazon gift card for their time. Data from one participant were excluded due to technical complications with the fMRI scanner, one participant was excluded due to a stimulus computer malfunction, two participants were excluded due to poor behavioral performance in the scanner (defined as falling below the trained criterion of 85% correct in the scanner), and one participant was removed from the scanner before the experiment concluded because they did not wish to continue in the study. Prior to data analysis, to ensure data quality, we conducted a univariate analysis to examine motor and visual activation during the task compared to an implicit baseline (unmodeled timepoints when the participant was viewing a fixation cross). Two participants showed little to no activation in these regions and were excluded from further analysis. The remaining 23 participants (11 male, 12 female, all right-handed) are reported here.
Stimuli and procedure
Data were collected using MATLAB 2016a and Psychophysics Toolbox. Task stimuli consisted of nine common animals, shown in color on a grey background. Participants were tasked with learning two zoo contexts, each consisting of animals organized in a specific spatial arrangement (Fig. 1a). Importantly, animals in both contexts were visually identical, but each context had a distinct spatial organization. Training consisted of three stages per context: 1) map study, 2) exploration, 3) sequence navigation. This was followed by an additional sequence navigation phase that alternated between contexts.
During map study, participants were initially shown an overhead view of one of the zoo contexts (counterbalanced order across participants). After studying this picture, participants were asked to reconstruct the location of all the animals by arranging icons on the screen. If participants were not able to perfectly recreate the maze they were shown the picture once more and asked to try again. Next, during the zoo exploration, participants used arrow keys to move between items in the zoo, starting from the central animal. At the bottom of the screen, participants were shown arrows indicating all possible moves from their current location (e.g. Left, Up, Down, Right at the center position of a maze). If participants made an incorrect move (moving outside of the animal maze) they were informed they made a wrong move. Participants were required to visit each of the animals four times before moving on to the next phase. During the sequence navigation phase, participants were shown a cue with a start and goal animal, and had four moves to reach the goal on a given trial. Start and goal animals were always the endpoints of an arm. Participants were trained to 85% criterion before learning the other context. The same training procedure outlined above was repeated for the second zoo context. After learning each of the zoos to criterion, participants completed an additional sequence navigation phase with the same timing as the MRI scanning session.
In the MRI scanner, participants completed six runs of the sequence navigation task (Fig. 1b). In each run, participants completed 16 sequence navigation trials. Trials from a given context were presented in a blocked fashion, so that there were 8 consecutive trials from each context. Across runs, context blocks were alternated and their order was counterbalanced across participants. In addition, the order of sequences within each context was counterbalanced across blocks to ensure that no systematic ordering effects influenced our results. Each navigation trial began with a cue signaling a start and a goal animal displayed for 3 s, followed by a 3 s ITI. Participants then saw the start animal and navigated by pressing buttons to move through the space one animal at a time. Animal items were displayed on the screen for 2 s with a 3 s ITI, regardless of when the participant pressed a button. On items where participants made a navigational error, text was displayed for 2 s informing them that they had made a wrong move or incorrectly navigated to a goal animal. In each zoo context, participants planned and navigated 12 distinct sequences (each repeated 4 times across 6 runs of scanning).
MRI data processing
Data were preprocessed using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/) and ART Repair. Slice timing correction was performed as implemented in SPM12. We used the iterative SPM12 functional-image realignment to estimate movement parameters (3 for translation and 3 for rotation). Motion correction was conducted by aligning the first image of each run to the first run of the first session. Then all images within a session were aligned to the first image in a run. No participant exceeded 3 mm frame-wise displacement. A spike detection algorithm was implemented to identify volumes with fast motion using ART Repair (0.5 mm threshold) 72. These spike events were later used as nuisance variables within generalized linear models (GLMs). Participants' native structural images were coregistered to the EPIs after motion correction. The structural images were bias corrected and segmented into gray matter, white matter, and CSF as implemented in SPM12. Native brain masks were created by combining the gray and white matter masks. Data were smoothed with a 4 mm³ FWHM 3D Gaussian kernel.
Regions of interest
ROI definitions were generated using a combination of Freesurfer and a multi-study group template of the medial temporal lobe. The multi-study group template was used to generate probabilistic maps of hippocampal head, body, and tail as defined by Yushkevich et al. 73 and warped to MNI space using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) in SPM8. Maps were created by taking the average of 55 manually-segmented ROIs and therefore reflect the likelihood that a given voxel was labeled in a participant. Masks were created by thresholding the maps at 0.5 (i.e., that voxel was labeled in 50% of participants). These maps were then reverse-normalized to native space using Advanced Normalization Tools (ANTS). Participant-specific cortical ROIs were generated using Freesurfer version 6.0 from the Destrieux and Desikan atlases [74][75][76]. Individual cortical ROIs were binarized and aligned to participants' native space by applying the affine transformation parameters obtained during coregistration. These masks were combined into merged masks that encompassed the entire hippocampus bilaterally (see cue period pattern similarity for more information). Anatomical ROIs for V1/V2 and BA4a/p were obtained by running all participants' structural scans through the Freesurfer recon-all pipeline. Our V1/V2 ROI was obtained by merging the anatomical masks for BA17 and BA18 (Supplemental Fig. 2).
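For illustration, the probabilistic-map thresholding step could look like the following sketch in Python with nibabel; the filenames are hypothetical, and the actual pipeline relied on SPM, ANTS, and Freesurfer tools rather than this code.

```python
import nibabel as nib
import numpy as np

# Hypothetical filenames. The probabilistic map encodes, per voxel, the
# proportion of the 55 manual segmentations in which that voxel was labeled.
prob_img = nib.load("hippocampus_probmap_mni.nii.gz")
prob = prob_img.get_fdata()

# Keep voxels labeled in at least 50% of the manual segmentations.
mask = (prob >= 0.5).astype(np.uint8)
nib.save(nib.Nifti1Image(mask, prob_img.affine), "hippocampus_mask_mni.nii.gz")
```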
Cue period pattern similarity analysis
Our primary interest was to investigate how prospective sequence representations were modulated based on context membership. To achieve this, we used representational similarity analysis to analyze multi-voxel activity patterns within regions of interest 77. Generalized linear models (GLMs) were used to obtain single trial parameter estimates of the cue period using a modified least-squares-all (LSA) model 35,78. Data were high-pass filtered using a 128 s cutoff and prewhitened using AR(1) in SPM. All events were convolved with a canonical HRF to be consistent with prior work 78. Cue periods were modeled using separate single trial regressors for each cue (2 s boxcar). The remaining portions of the task were modeled as follows: navigation periods were modeled with separate 25 s boxcar functions for each trial, catch sequences with separate single trial regressors (15 s boxcar), catch blank trials with separate single trial regressors (stick function), correct outcomes at the condition level (stick), incorrect outcomes at the condition level (stick), and the four button presses at the condition level (stick). Nuisance regressors for motion spikes, 12 motion regressors (6 for realignment and 6 for the derivatives of each of the realignment parameters) and a drift term were included in the GLM. Pattern similarity between the resulting beta images was calculated using Pearson's correlation coefficient between all pairs of trials in the experiment. Only between-run trial pairs were included in the analysis to avoid spurious correlations driven by auto-correlated noise 79.
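As a minimal sketch of the final step, the single-trial beta images can be masked by the ROI and correlated pairwise. Paths and names below are hypothetical, and the published analysis used the authors' own SPM-based tooling rather than this exact code.

```python
import nibabel as nib
import numpy as np

def extract_roi_patterns(beta_paths, mask_path):
    """Stack single-trial beta estimates restricted to a binary ROI mask.

    beta_paths: per-trial beta image filenames (one LSA regressor per cue).
    Returns an (n_trials, n_voxels) matrix for pattern-similarity analysis.
    """
    mask = nib.load(mask_path).get_fdata().astype(bool)
    return np.vstack([nib.load(p).get_fdata()[mask] for p in beta_paths])

# np.corrcoef on the rows gives the full trial-by-trial Pearson matrix;
# only between-run pairs would then be retained, as described above.
```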
Based on evidence of functional differentiation along the long axis of the hippocampus, we tested for any longitudinal or hemispheric differences in hippocampal patterns 63,80,81. Analyses revealed no significant differences in the pattern of results between left and right hippocampus or between anterior and posterior segments of the hippocampus. As a result, subsequent analyses were performed with pattern similarity data from a bilateral hippocampus mask.
Linear mixed models
Behavioral responses and pattern similarity were analyzed using linear mixed effects models to account for the nested structure of the dataset, allowing us to model errors clustered within individuals and trial types that would violate the assumptions of standard multiple regression models. Statistical comparisons were conducted in R (3.6.0) (https://www.r-project.org/) using lme4 and AFEX 82,83. Reaction times were analyzed using the following formula:

RT ∼ position + (1|participant) (1)

Where (1|participant) indicates the random intercept for participant and RT is the reaction time for each position during the navigation phase, excluding position 5 (as no response is required). Furthermore, outlier RTs that exceeded 2.5 standard deviations from a participant's average reaction time were excluded.
For the pattern similarity analyses, pairwise PS values were input for each participant into three separate models with the following formulas:

(Figure 2b): PS ∼ same sequence * same context + (1|participant) (2)

(Figure 2c/d): PS ∼ overlap * same context + (1|participant) (3)

(Supplemental Figure 2): PS ∼ move * same context + (1|participant) (4)

Where (1|participant) indicates the random intercept for participant and PS is the Pearson correlation coefficient for a given trial pair. Fixed effects for Eq. 2: (1) same sequence - a categorical variable with two levels indicating whether the trial pair was from the same or a different sequence; (2) same context - a categorical variable with two levels: same or different. Fixed effects for Eq. 3: overlap - a categorical variable with four levels: full, converging, diverging, and diff. start diff. goal; same context - same as Eq. 2. Fixed effects for Eq. 4: move - a categorical variable with three levels: same moves, shared moves, no moves; same context - same as Eq. 2. Statistical significance for fixed effects was assessed using likelihood ratio tests, in which a full model is compared to a null model with the effect of interest removed. For example, to test the significance of an interaction term, two models would be fit: one with the two main effects and no interaction, and one with the interaction term added. Effect sizes were calculated with the partial eta squared statistic. Follow-up tests and estimated marginal means 84 from LMMs were calculated using the R package emmeans (https://cran.r-project.org/web/packages/emmeans/index.html). Effect sizes for follow-up tests were calculated with Cohen's d.
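The authors fit these models in R with lme4/AFEX; for readers working in Python, a rough analogue of Eq. 3 and its likelihood-ratio test could look like the sketch below. Column names and the input file are hypothetical, and the random-intercept-only structure mirrors the equations above.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# One row per between-run trial pair, with hypothetical columns:
# 'PS', 'overlap' (full/converging/diverging/diff), 'same_context', 'participant'.
df = pd.read_csv("pairwise_pattern_similarity.csv")

# Random-intercept models analogous to Eq. 3; fit with ML so they are comparable.
full = smf.mixedlm("PS ~ overlap * same_context", df,
                   groups=df["participant"]).fit(reml=False)
reduced = smf.mixedlm("PS ~ overlap + same_context", df,
                      groups=df["participant"]).fit(reml=False)

# Likelihood-ratio test for the overlap-by-context interaction.
lr = 2 * (full.llf - reduced.llf)
dof = len(full.fe_params) - len(reduced.fe_params)
print("chi2 =", lr, "p =", stats.chi2.sf(lr, dof))
```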
In all of the above models, a model with a maximal random effects structure, as recommended by Barr et al. 85, was first fit. In all cases the maximal model failed to converge or was singular, indicating overfitting of the data. When examining the random effects structure for these models, random slopes for our fixed effects accounted for very little variance when compared to the random intercept for participant. To improve sensitivity and avoid over-fitting, these terms were removed, as suggested by Matuschek et al. 86. Lastly, it is important to note that our results are not dependent on the use of linear mixed models: standard repeated measures ANOVAs produce qualitatively and quantitatively similar results in all ROIs (see also Supplemental Tables 1-4).
Successor representation simulation
To better understand the specific predictions of the successor representation for our task, we performed a simple simulation 9. First, we created a topological structure (connected graph) that matched our task. As seen in Supplemental Fig. 1, this structure closely resembled the plus maze participants navigated. We simulated the successor representation based on a random walk policy using the standard closed-form equation M = (I − γT)^(−1).
Where γ is a free parameter that controls the decay of the SR and T is the full transition matrix of the task depicted in Supplemental Fig. 1A/B. For the current simulations, a gamma of 0.3 was used, but results are qualitatively similar for different values. A random walk, or policy independence, can be assumed in this case because the maps were well learned before the scanner session and each sequence was traversed in both directions an equal number of times 65. We then tested the hypothesis that, during planning, the hippocampus encodes the SR of the first position in the sequence (columns of the SR). We extracted the columns of the SR for three planned sequences ((state 1 -> state 5), (state 6 -> state 5), (state 1 -> state 9)) and calculated the similarity (Pearson's r) between pairs of trials. The same-sequence condition was calculated by correlating a sequence with itself. The converging condition was obtained by correlating trials that started at different states but converged on the same end state. The diverging condition was obtained by correlating trials that started at the same state but diverged to different end states. Lastly, the diff. start diff. goal condition was calculated by correlating trial pairs that started and ended at different states. As shown in Supplemental Fig. 1, the SR heavily weights the immediate locations around the starting location and thus would predict that diverging sequences should have higher similarity than converging sequences.
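A compact numerical sketch of this simulation is given below. The plus-maze adjacency and state numbering are illustrative guesses rather than the exact layout in Supplemental Fig. 1, and γ = 0.3 follows the text.

```python
import numpy as np

def successor_representation(T, gamma=0.3):
    """Closed-form SR under a fixed (here random-walk) policy: M = (I - gamma*T)^-1."""
    return np.linalg.inv(np.eye(T.shape[0]) - gamma * T)

# Hypothetical plus-maze layout: center state 4, four arms of two states each.
edges = [(0, 1), (1, 4), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7), (7, 8)]
A = np.zeros((9, 9))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
T = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix

M = successor_representation(T)
# Columns of M are indexed by start state. Diverging sequences share a start
# state, so their SR columns are identical; converging sequences start at
# different arm ends, so their SR columns are only partially correlated.
conv = np.corrcoef(M[:, 0], M[:, 6])[0, 1]   # two different arm-end start states
div = np.corrcoef(M[:, 0], M[:, 0])[0, 1]    # same start state -> r = 1
print(f"converging r = {conv:.2f}, diverging r = {div:.2f}")
```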
Timepoint-by-timepoint representational similarity analysis
To examine whether participants activated remote timepoints as they navigated through our virtual environments (e.g., activating decision points early in the navigation trial), we used a variant of single trial modeling based on finite impulse response (FIR) functions 41. This method allowed us to isolate the unique spatiotemporal pattern of activity for a given navigation trial while simultaneously controlling for surrounding time points during the run. We modeled 47 seconds of neural activity with a set of 38 FIR basis functions. Specifically, we obtained a spatial pattern of activity for each of these 38 TRs in our model, which allowed us to compare the similarity of the spatial patterns of activity between timepoints in the navigation phase. Additional regressors were included for motion; however, spike regressors were not included in this analysis because they are perfectly collinear with an FIR basis set for each TR. A separate GLM was used for every trial, resulting in 72 voxel time series. Collinearity in our model was measured using the variance inflation factor (VIF) and was verified to be within acceptable levels according to standards in the literature 87 (see also Supplemental Fig. 8). To examine within-trial-type similarity (same trial type across repetitions), timepoint-by-timepoint similarity matrices were generated by correlating activity patterns from repetitions of specific sequence pairs (e.g., zebra-tiger repetition 1 with camel-tiger repetition 1) at every TR. The resultant matrices were symmetrized by averaging across the diagonal of the matrix using the following equation: (X^T + X)/2. The resultant timepoint-by-timepoint similarity matrix was averaged within a specific trial type to get a single average timepoint-by-timepoint similarity matrix for each participant and condition (Fig. 3, Supplemental Fig. 7). This was done separately for converging and diverging sequences. Only between-run trial pairs were included in the analysis to avoid spurious correlations driven by auto-correlated noise 79. This method allowed us to isolate individual sequence patterns while controlling for temporally adjacent navigation trials. To identify which points in time corresponded to relevant parts of the task, we manually lagged trial labels by 4 TRs to account for the slow speed of the HRF.
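The collinearity check mentioned here could, for example, be implemented as below; the design matrix X is a placeholder for one single-trial FIR model (trial-of-interest basis functions plus regressors for surrounding trials and motion), and the function comes from statsmodels rather than the authors' pipeline.

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

def design_vifs(X):
    """Variance inflation factor for each column of a GLM design matrix X."""
    return np.array([variance_inflation_factor(X, i) for i in range(X.shape[1])])

# Example: flag regressors whose VIF exceeds a conventional cutoff such as 5.
# vifs = design_vifs(X); print(np.where(vifs > 5)[0])
```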
Timepoint-by-timepoint similarity matrices were constructed only for converging and diverging sequences. This subset of trials was chosen for several methodological reasons. First, to maximally control for differences in trial numbers between conditions and for temporally auto-correlated evoked patterns, while still maintaining enough power to examine future-state activation, we restricted our analyses to converging and diverging sequences within the same context. Importantly, this selection of trials allowed us to simultaneously control for several factors while testing specific predictions. Second, converging and diverging sequences are matched in terms of the number of shared items and therefore overall visual similarity. Specifically, the same animal items are seen during the first half of diverging sequences, while the same animal items are seen in the second half of converging sequences (all sequences share the center item).
To assess statistical significance, and to correct for multiple comparisons, we used cluster-based permutation tests 88 with 10,000 permutations, with a cluster-defining threshold of 0.05 (two-tailed). Each pixel of a statistical comparison (T-value) was converted into a Z value by normalizing it to the mean and standard error generated from our permutation distributions. Cluster significance was determined by comparing the empirical cluster size to the distribution of the maximum cluster size (sum of T-values) across permutations with a cluster mass threshold of 0.05 (two-tailed).
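A simplified sign-flip version of this permutation procedure is sketched below; it assumes a (participants × TR × TR) stack of converging-minus-diverging difference matrices and uses cluster mass (sum of supra-threshold |t| values), which approximates but does not exactly reproduce the authors' implementation.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_permutation(diff, n_perm=10000, alpha=0.05, seed=0):
    """Max-cluster-mass null distribution for a one-sample test at every TR pair.

    diff: (n_subjects, n_TRs, n_TRs) per-participant difference matrices.
    Returns the observed t-map and the permutation distribution of max cluster mass.
    """
    rng = np.random.default_rng(seed)
    n_sub = diff.shape[0]
    t_crit = stats.t.ppf(1 - alpha / 2, n_sub - 1)        # cluster-defining threshold
    t_obs = stats.ttest_1samp(diff, 0, axis=0).statistic
    max_mass = np.zeros(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n_sub)[:, None, None]
        t_perm = stats.ttest_1samp(diff * flips, 0, axis=0).statistic
        labels, n_clusters = ndimage.label(np.abs(t_perm) > t_crit)
        if n_clusters:
            masses = ndimage.sum(np.abs(t_perm), labels, index=range(1, n_clusters + 1))
            max_mass[p] = np.max(masses)
    return t_obs, max_mass

# Observed clusters whose mass exceeds the 95th percentile of max_mass would be
# reported as significant after correction for multiple comparisons.
```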
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Processed data to reproduce figures in the manuscript and supplement are available at https://github.com/jecd/Hippocampgoal and in a Zenodo repository https://doi.org/10.5281/zenodo.7264243 89. Source data are provided with this paper. Raw data available at https://osf.io/txauh/.
THE RULE OF LAW AS A PRINCIPLE OF LEGAL MEANING: EUROPEAN AND UKRAINIAN EXPERIENCE
Summary The article deals with the problem of legal meaning-making, which is important because the development of law is generally carried out through the creation of legal meanings by the subjects of law. Law must be meaningful, reasonable and make sense. Legal existence consists of legal meanings that are really embodied and that exist ideally. Emphasis is placed on the danger of large-scale de-legalization of public relations in modern Ukraine, which is manifested in numerous violations of the current Constitution of Ukraine by government officials, widespread human rights violations, very significant politicization of the judiciary, including the Constitutional Court of Ukraine, and violations of legal standards by various branches of government. On the way to a democratic state governed by the rule of law, the law itself and the forms and ways of its manifestation, as opposed to arbitrariness, must increase in the country, not decrease. Legal meaning-making should be intensified, not narrowed. It is emphasized that the implementation of the rule of law is the fundamental basis for legal meaning-making. Concern for the law is concern for the development of its meanings; their assertion and protection must be a common cause of each and everyone. It is on the basis of the rule of law that real Ukrainian European integration is possible.
Introduction
Having set a course for the establishment of democratic principles of life and European integration, Ukrainians must make efforts to implement a number of legal reforms, without which these transformations will be impossible, and, consequently, the set goals will be unattainable. However, our reality now testifies to the unpreparedness of both society and especially the authorities for qualitative changes in legal life.
Unfortunately, the efforts of the reformers are nullified by the reluctance of the political and business elite to change anything radically; the reforms are imitated more than actually implemented. Over the last decade in Ukraine there has been a large-scale dejuridization of public relations, which is manifested in numerous violations of the current Constitution of Ukraine by government officials, including the head of state, mass human rights violations, ever stronger politicization of the judiciary, including the Constitutional Court of Ukraine, breaches of law by different branches of government, and so on. On the way to a democratic state governed by the rule of law, the law and the forms and ways of its manifestation, as opposed to arbitrariness, must increase in the country, not decrease. The narrowing of the legal space and the destruction of legal meanings that we can observe in this regard in society and the state should be a real concern for both law-abiding citizens and government officials.
Various aspects of this problem have been developed by such authors as N. Bordun-Komar (Sokolenko, 2013), S. Shevchuk (Shevchuk, 2008), O. Yaremko (Yaremko, 2020) and others. These scholars emphasize the problem of implementing the rule of law, warn against violating constitutional norms and principles of law, stress the need for compliance with legal standards by all law enforcement agencies, and so on. We will look at the problem of legal development by turning to the concept of legal meaning as an essential feature of the development of law and a means of expanding the boundaries of legal existence, through the prism of the rule of law as its fundamental principle, and show its importance for law and order as well as the consequences of its neglect. This problem still belongs to the underdeveloped areas of philosophical and legal knowledge, and is therefore relevant.
Materials and methods
The research methodology should correspond to the subject of research. To cover this problem, it is appropriate to apply the principle of polymethodology, involving philosophical, general and special legal approaches and methods, which, in complementarity, should ensure the completeness of the study of the problem. In particular, the leading approach in the study is the anthropological-axiological one, through which law, legal meaning-making and the rule of law appear as humanistic phenomena without which a full human existence becomes impossible. The anthropological-axiological approach allows us to see legal meaning-making as an active process of affirmation and expansion of law through the creation and development, by the subjects of law, of legal meanings, which are at the same time universal legal values that embody the principles of human existence. In the light of the same approach, the rule of law is distinguished from rule by law and is revealed meaningfully through a number of elements: legal values, principles, regulations, procedures, customs, etc., which aim to harmonize social relations and, consequently, contribute to human existence, the exercise of natural rights, and the creation of law and order. The anthropological-axiological approach concretizes the natural-law mega-approach, within which alone it is possible to see law and legal meaning-making as universal phenomena not created by the state. The synergetic method contributes to the interpretation of the rule of law and legal meaning-making as systemic, interdependent phenomena that mutually enrich and develop each other. Such general scientific methods as analysis and synthesis, theoretical modeling and the principle of objectivity have contributed to the study and formation of a number of concepts, in particular: legal meaning, rule of law, legal standards, human rights, principles of law and others. The hermeneutic approach helped to understand and interpret the basic concepts and provisions of the research problem. The comparative legal method allowed us to see the factors and shortcomings of the experience of legal meaning-making in the European and Ukrainian legal space.
Legal meaning-making as an urgent problem of today
The modern human world has become digital, and human existence is becoming more and more complex; the law must respond accordingly to these processes in order to remain adequate to the person of today, whose needs and interests are constantly evolving. Scholars note that the current world is becoming more dynamic, uncertain and pluralistic, because it is made so by the person of today (S. Proleyev). It is clear that such a complex, uncertain, dynamic world, and such a person (today's world is dominated by the person of the situational type), put forward new requirements for the law without negating the previous ones. The current digital age has made many adjustments to legal life, in particular to the problem of human rights. The modern author K. Lefort notes that human rights "… go beyond any particular definition given to them; … Acquired rights necessarily encourage the support of new rights." The author further notes: "A democratic state goes beyond what is traditionally attributed to the rule of law. It tests rights that have not yet been incorporated into it" (Lefort, 1986). That is, the development of the human being and the development of his rights are a single natural process, and the state must keep up with it.
K. Lefort interprets the current development of natural human rights as a "universalist stretch", i.e. their intensification, a concentrated demand for "equality" in all forms of human interaction and throughout the geopolitical space (Lefort, 1986). Therefore, the necessary rethinking of human rights should not take place in the direction of their restriction or mutual exclusion, but rather, on the contrary, as the expansion of "mutually multiple potentials (forces, powers, authority)", which are universal. In this regard, "human rights democracy" must be unrestricted, despite various obstacles. Human rights should not be thought of as dogma; they should be "reformulated and re-established", invented and restored in new conditions and in other proportions, scholars say (Evropeyska konventsiya pro zahist prav i osnovnyh svobod lyudyny, 1950).
The law in all its manifestations must work for the human being; the human being is its purpose, and it is designed to provide him with a comfortable life in today's world. It must undoubtedly be humanistic, and this depends on the people who will create the law of the digital society. In this sense, the problem of legal meaning-making is especially important. In general, the development of law is carried out through the creation of legal meanings by the subjects of law. Law must be meaningful, reasonable and make sense. Meaning is the understanding of something, the importance of something; it is a certain significance, purpose, task or usefulness of something (Dictionary of the Ukrainian Language, 1978: 405). Meaning is the content of thought, of a phenomenon with a purpose; it is the essential characteristic of something, the meaning of creation and of human interaction. V. Frankl considered meaning to be the basic motivation of human striving, one that reveals the essential nature of the human being. A. Camus wrote that being has no meaning, that it is absurd; however, something does make sense, and this "something" is the human being, "the only being who needs meaning…" (Camus, 1945).
Legal meanings serve as a link between the human being and the world of law, the "empire of law" (R. Dworkin). Legal existence lives as the unity of the human being, as a subject of law, with law, its forms and manifestations. Legal existence consists of legal meanings that are really embodied and that exist ideally. Their origins lie in the human spiritual and cultural essence, in the interaction of people as subjects of law, in their relations, and in the ideas of perfection and harmony of human relations. Legal meanings are deeply humanistic; it is their humanistic content that makes law an equally humanistic phenomenon. These meanings embody the universal, and therein lie their power and significance.
However, people may lose touch with legal reality; the non-existence of law always exists as a real possibility. A person can disperse, level and destroy legal meanings. People are not always ready to create them, to understand their importance, purpose, task, usefulness, and so on. The situation is similar among nations: some have achieved real success in the development of law, while others have not been able to achieve any special achievements and successes. A person must care about the law, assert it and develop it, because the law itself and its meanings (which are at the same time legal values) are a necessary prerequisite for human existence. Concern for the law is concern for the development of its meanings; their assertion and protection must be a common cause of everyone. V. Nesterenko emphasizes that "without constant care for this, without creative human effort and spiritual enlightenment, the world is depleted of meanings. As a result, "enlightenments" and even semantic cavities appear in life, where individuals or the whole human communities stumble or even fail" (Nesterenko, 1996: 109). A nation is not just a community of people; it is a community united by common meanings through which a common world of life is created. Ukrainians as a nation and as a legal community must be united by common legal meanings. Marginals appear where the common meanings that can unite people are lost. This also applies to legal marginality. Legal meanings cement the community, creating a common understanding of the principles of legal existence, and motivate joint action to affirm and protect these principles and to create a common legal existence. The strength of these meanings lies in their universal humanity, which has attracted people at all times. To assert, multiply, develop, seek, enrich and strengthen legal meanings, values and ideals, principles and standards of law, legal customs and traditions, thus overcoming semantic emptiness and all sorts of destructive tendencies, means to be in the right. The legal community is created in a movement from unique legal meanings to universal values. And this is impossible without the revival of historical memory, first of all semantic memory, the ability to know and appropriate a number of meanings, to master the semantic chain, which can penetrate not only the universal ratio but also the depths of the soul (Bratasyuk, Rosolyak, 2020).
The experience of legal meaning-making in the EU and modern Ukraine
Ukrainians, in the process of creating a national legal existence, need to join the European legal meanings. What is Europeanness in general, and Europeanness in law, to which Ukrainians are so eager? We believe it is a set of meanings that are at the same time European values, and they appeared in the process of transformation of the uniquely European into the universal. In law they find expression in European standards of law (which are the basis for the protection of human rights) and in the rule of law (which is a concentrated expression of these standards and a fundamental principle of legal development). Perhaps the greatest shortcoming of Ukrainian legal life is the systematic deviation from the requirements of the rule of law, the constant violation of its standards, the neglect of universal law, the constant replacement of the universal by the special, the particular, the partial. The principle of the rule of law is an eloquent expression of European values, which at the same time have the status of universal ones. Therefore, it quite naturally went beyond the common law system, where it was formed (Daysi, 2008), and became a recognized universal heritage. Nowadays, Ukrainians need to reform the legal sphere precisely in the direction of the fullest realization of the rule of law; this will be the key to the Europeanization of Ukrainian legal reality, its accession to the European legal space. Ukrainian lawyers have not yet learned well enough that the rule of law is not just a general idea, like any other principle. The rule of law must be perceived first of all as an integrated system of requirements that is the result of the spiritual and cultural development of mankind, the observance and affirmation of which in legal life is a necessity, a guarantee of avoiding the failures into extra-legal existence that have befallen individuals and peoples under certain conditions. The rule of law concentrates the exemplary achievements of universal legal culture. Ukrainian scholars emphasize that this principle is understood as different from rule by law only in the context of the natural law paradigm, which asserts natural law as a deeply humanistic value-semantic phenomenon. It is this legal paradigm that affirms and protects such universal values as justice, life, human dignity, individual freedom, wealth, private property, equality, human rights, and so on. All these values are the basis of human existence; outside of them it becomes partial and disordered, turns into a continuous mess, slips into nothingness and ruin, and becomes hostile to man, because it depersonalizes life and even destroys it physically. That the rule of law, as a principle not identical with rule by law, is read in the context of the natural law paradigm is not accidental: in the value-semantic sense it completely coincides with it. As European legal development shows, this principle was formed in line with the natural law tradition, which is quite powerful, especially in the Anglo-Saxon legal culture. It is from this culture that the principle has spread to continental legal development, gaining universal significance. In practice, this principle is revealed through a number of sub-principles, legal norms, procedures, customs and traditions legitimized by society, means of ensuring human rights, and so on.
In particular, the rule of law includes the principle of man, with his life, natural rights, honor and dignity, as the highest value, the principle of justice, the principle of equality of the subjects of law, the principle of respect for individual freedom, the principles of accessibility and authority, of good faith, of reasonableness in decision-making, of proportionality, of legal certainty, etc. All these meanings-values are interconnected; applied together in practice, they create an orderly human existence, favorable to the assertion of the human person and his existence. All components of this mega-principle are included in the content of legal standards.
The rule of law, as a kind of mega-principle with a very extensive and deeply humanistic meaning and content, should become the basis of legal meaning-making both for the individual and for the government, its bodies and officials. Whether it does depends on many factors of various kinds. In this context it is very important to interpret the components of the rule of law as a guide to action, as programme requirements (requirements-goals, requirements-tasks, requirements-benchmarks), that is, as meaningful, reasonable and binding. These requirements can be developed, as the European Court of Human Rights successfully does, and they can be specified, but they cannot be deviated from, neglected or violated; they must be observed and enforced.
Ukraine in the legal sense is still rather blurred, uncertain, scattered. Many Ukrainian citizens are not ready to live by the law; in fact, they treat Ukrainian law as nothing more than a declaration. Unfortunately, there are many such marginals among lawyers themselves. That is, Ukrainians as a semantic legal community have not yet fully formed. To overcome the legal marginality of the individual, it is necessary to "appropriate" the legal meanings expressed by the national legal tradition, "to permeate them with consistency, to place oneself in the appropriate semantic field" (Bratasyuk M, 2019). The Ukrainian legal tradition, which has more than a thousand years of history (Zaharchenko, 2019), is a natural law tradition (Gradova, 2013); it is based on the same legal meanings that express the content of the rule of law. It is very important now, when carrying out legal reform, to turn to our natural law tradition and to use the value-semantic core which underlies it. The idea of the rule of law, with such semantic characteristics as the interpretation of law as justice, good, the common good, respect for human dignity, individual freedom, human life as a special value reinforced by the Christian tradition, respect for private property, the principle of good faith, the individuality of punishment, punishment through a fair trial, etc. (Bratasyuk V., 2015), was formed in our national culture from the time of princely Ukraine. This tradition must be revived and used in the development of the national legal system, overcoming the totalitarian legal legacy of contempt for man and law (Bratasyuk M., Shevtsova, 2021). The humanism that is potentially inherent in the rule of law and the standards of law must become real. This is a difficult and arduous job for the entire nation, civil society and government. But this is the path to the rule-of-law state, the fundamental principle of which is the rule of law; such a state will not come about otherwise.
Good examples of legal meaning-making are demonstrated by the European Court of Human Rights in developing the principles and norms of the European Convention on Human Rights (Evropeyska konventsiya pro zahist prav i osnovnyh svobod lyudyny 1950), which is the embodiment and vivid expression of universal legal meanings and principles, of the standards of law and of its rule. Ukrainian lawyers have only recently become acquainted with the principle of proportionality, the essence of which is to maintain a balance between the interests of the subjects of law, to ensure correspondence between an aim and the means of achieving it, and so on. The meaning of proportionality is that, by balancing the interests of the subjects of law, it orders and improves legal relations and harmonizes human existence. Without finding this proportionality of interests, legal relations can collapse. The principle of legal certainty is the European Court of Human Rights' contribution to the development of the mega-principle of the rule of law. The semantic content of the principle of legal certainty contributes to the humanization of legislation and law enforcement, which certainly has a positive effect on the implementation and protection of human rights (Kampo, Savchin, Sergienko, 2010). The ultimate meaning of applying the provisions of the Convention is to ensure the rule of law as a deeply humanistic principle of legal development. The law of the European Court of Human Rights is "alive" because it focuses on the most important meaning and value, i.e. a person with his inalienable rights and freedoms, his full, living, dynamic human existence, his harmony with life in general.
Conclusions
In summary, it can be stated that legal meaning-making is a process that develops together with people and society and depends on many factors of different kinds. Semantic emptiness and all sorts of destructive tendencies in social relations must be overcome by creating, asserting, multiplying, seeking, enriching, developing and strengthening legal meanings, values and ideals, principles and standards of law, legal customs and traditions. A people as a legal community is not just a population. It appears not when laws are passed, but when it is able to live in common meanings, that is, to be a meaning-bearing community. This requires a movement from unique legal meanings to universal values on the part of every individual who makes up the community and, of course, on the part of its representatives in public authority bodies who directly carry out the legislative process and law enforcement. The realization of the rule of law as an integrated system of requirements is the key to avoiding the failures into extra-legal existence that have befallen individuals and peoples under certain conditions. It is on the basis of the rule of law that real Ukrainian European integration is possible.
Interaction-induced chaos in a two-electron quantum-dot system
A quasi-one-dimensional quantum dot containing two interacting electrons is analyzed in search of signatures of chaos. The two-electron energy spectrum is obtained by diagonalization of the Hamiltonian including the exact Coulomb interaction. We find that the level-spacing fluctuations follow closely a Wigner-Dyson distribution, which indicates the emergence of quantum signatures of chaos due to the Coulomb interaction in an otherwise non-chaotic system. In general, the Poincaré maps of a classical analog of this quantum mechanical problem can exhibit a mixed classical dynamics. However, for the range of energies involved in the present system, the dynamics is strongly chaotic, aside from small regular regions. The system we study models a realistic semiconductor nanostructure, with electronic parameters typical of gallium arsenide.
In a many-body system, it is possible for signatures of quantum chaos to appear due solely to the interactions among its particles. During the last decade, such interaction-induced signatures of quantum chaos have been investigated, for example, in spin-fermion models [1], in the compound nuclear state with 12 particles in the sd-shell [2] and in the heavy rare-earth atom of Cerium [3]. In those studies, the evidence for quantum signatures of chaos found in the level-spacing statistics or in the statistical properties of the wave functions was considered to be conclusive. This conclusion is not entirely surprising since all of those systems, having relatively large numbers of particles (> 10), are similar to the complex nuclear systems that motivated the introduction of the ideas of random matrix theory (RMT) in the first place [4]. On the other hand, it is not obvious a priori whether the inclusion of the interaction in few-body systems, like for example currently available semiconductor quantum dots, also leads to signatures of quantum chaos. In a recent study, the effect of electronic interactions was considered in a parabolically-confined three-electron system [8]. (It is well known that three is the lowest number of interacting electrons necessary to break the integrability in a parabolic quantum dot.) In that study, the crossover from regular to irregular spectra as a function of the interaction strength was found to be incomplete, possibly due to the existence of hidden symmetries not taken into account in the statistical analysis. Therefore, the question of the emergence of signatures of quantum chaos due to interparticle interactions in systems of very few particles remains open.
Closely related to the issue of characterizing the dynamical properties of simple interacting systems is the problem of quantum control with external fields. The manipulation of few particles (electrons and holes) in semiconductor quantum dots is a potentially important technological problem that is receiving increasing attention [5,6]. In this context, recent theoretical studies have shown interesting effects of single-electron turnstile behavior, and localization and correlations in systems of quantum dots with two interacting electrons in them [7]. In the present Letter we investigate the signatures of quantum chaos in a similar system, i.e., two interacting electrons in a quasi-one-dimensional semiconductor quantum dot. We show that the Coulomb interaction between the electrons induces an unambiguous transition from a regular spectrum to a spectrum that follows closely the predictions of RMT for systems whose classical analog exhibits chaotic dynamics.
In order to fully characterize the emergence of chaos due to interactions in this simple system, we also study the dynamics of its classical analog. The Poincaré maps that describe the classical system show a strongly (albeit mixed) chaotic behavior due to the inclusion of the true Coulomb interaction in the system.
We assume that the quantum dot has narrow parabolic confinement in the transversal x–y dimensions, so that the energies associated with those modes are high compared to the energies of the remaining degree of freedom (Born approximation). The two-electron wave function can then be written in terms of φ(x), the lowest harmonic-oscillator energy eigenstate. The energy eigenstates satisfy the eigenvalue equation, Eq. (2), where H_0(z) = −(ℏ²/2m*) ∂²/∂z² + V(z) is the single-particle Hamiltonian, with V(z) being the quantum-dot defining potential, m* the effective mass, and V_1D the effective one-dimensional Coulomb interaction, Eq. (3). We use the values of the dielectric constant ε = 13.1 and m* = 0.067 m_e corresponding to gallium arsenide. We choose to work with a quasi-one-dimensional semiconductor quantum dot confined to 15 Å in the x–y plane and with a width of 800 Å in the z-direction.
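To make the numerical procedure concrete, the following is a minimal sketch (not the authors' code) of how such a two-electron, quasi-one-dimensional Hamiltonian can be diagonalized on a grid. The softened-Coulomb interaction, the grid resolution, and the material constants written into the script are illustrative assumptions standing in for the exact V_1D of Eq. (3), and spin symmetrization into singlet/triplet sectors is omitted.

```python
import numpy as np

# Minimal sketch: exact diagonalization of a two-electron, quasi-1D Hamiltonian
# on a uniform grid.  The softened-Coulomb interaction below is an illustrative
# stand-in for the exact V_1D, and spin symmetrization is omitted.
hbar2_2m = 3.81 / 0.067          # hbar^2/(2 m*) in eV*Angstrom^2 for m* = 0.067 m_e
L, N = 800.0, 40                 # well width (Angstrom) and grid points per particle
z = np.linspace(0.0, L, N)
dz = z[1] - z[0]

# Single-particle kinetic energy, 3-point finite differences with hard walls (V(z) = 0 inside).
H0 = hbar2_2m * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dz**2

# Softened Coulomb interaction on the two-particle grid (assumed form).
eps, d = 13.1, 8.0               # GaAs dielectric constant; softening length (Angstrom)
e2 = 14.4                        # e^2/(4*pi*eps0) in eV*Angstrom
Z1, Z2 = np.meshgrid(z, z, indexing="ij")
Vint = np.diag((e2 / eps / np.sqrt(d**2 + (Z1 - Z2) ** 2)).ravel())

# Two-particle Hamiltonian H = H0 x 1 + 1 x H0 + Vint and its exact spectrum.
I = np.eye(N)
H = np.kron(H0, I) + np.kron(I, H0) + Vint
energies = np.linalg.eigvalsh(H)
print("lowest two-electron energies (eV):", energies[:5])
```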
In the absence of the interaction term in Eq. (2), the Hamiltonian is a sum of two single-particle one-dimensional Hamiltonians, whose classical counterpart is obviously integrable. The main question we seek to answer is whether the Coulomb interaction between the electrons introduces chaos in the system.
In order to look for signatures of quantum chaos, we follow a standard statistical analysis of the energy spectrum, which consists of the following steps. First we calculate the exact spectrum {E_n} by diagonalization of the Hamiltonian matrix. The level spectrum is used to obtain the smoothed counting function N_av(E), which gives the cumulative number of states below an energy E. In order to analyze the structure of the level fluctuations one "unfolds" the spectrum by applying the well-known transformation x_n = N_av(E_n) [9]. From the unfolded spectrum one calculates the nearest-neighbor spacing (NNS) distribution. We first consider the spectral properties of the non-interacting two-electron problem whose Hamiltonian is H_0(z_1) + H_0(z_2). Its eigenstates can be classified by their total spin into singlets and triplets. To compute the NNS distribution we use eigenstates of a given spin. In the inset of Fig. 1 we show the obtained non-interacting NNS distribution P_NI(s) (histogram), which follows a Poisson distribution (characteristic of an uncorrelated sequence of energy levels) given by P_P(s) = exp(−s) and shown for comparison as a solid thin line. Due to the finite dimension of the Hilbert space, N_av(E) saturates in the highest energy region. Therefore, we compute the NNS distribution using the lowest ∼1000 eigenvalues [10]. The obtained Poisson distribution is an expected signature of most quantum two-dimensional systems whose classical counterparts are integrable [11]. To analyze the interacting spectrum we diagonalize exactly Eq. (2). We also take into account the symmetry of the spectrum due to the parity of the confining potential and the interaction potential. Therefore, in order to compute the NNS distribution we use eigenstates of a given parity and spin. This kind of decomposition is a standard procedure followed in the analysis of spectral properties of quantum systems whenever the Hamiltonian of the system possesses a discrete symmetry [9]. After unfolding the spectrum, the NNS distribution is computed for the even-parity states. Again, we consider ∼1000 eigenstates of the interacting Hamiltonian (whose eigenenergies are lower than the energy of the first transversal mode, for compatibility with the Born approximation).
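As a rough illustration of the unfolding procedure and the NNS statistics just described (not the code used for the paper), one possible implementation is sketched below; the polynomial degree used for the smooth counting function and the histogram binning are arbitrary choices.

```python
import numpy as np

# Sketch of the unfolding and nearest-neighbor spacing (NNS) analysis.
# `energies` stands for the spectrum {E_n} of one symmetry sector.
def nns_distribution(energies, poly_degree=7, bins=25):
    E = np.sort(np.asarray(energies, dtype=float))
    staircase = np.arange(1, len(E) + 1)            # counting function N(E)
    coeffs = np.polyfit(E, staircase, poly_degree)  # smooth part N_av(E)
    x = np.polyval(coeffs, E)                       # unfolded levels x_n = N_av(E_n)
    s = np.diff(x)                                  # nearest-neighbor spacings
    s = s[s > 0] / np.mean(s[s > 0])                # enforce <s> = 1
    hist, edges = np.histogram(s, bins=bins, range=(0.0, 4.0), density=True)
    return s, hist, edges

# Reference curves to compare against the histogram:
s_grid = np.linspace(0.0, 4.0, 200)
poisson = np.exp(-s_grid)                                        # uncorrelated levels
wigner = 0.5 * np.pi * s_grid * np.exp(-np.pi * s_grid**2 / 4)   # Wigner surmise
```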
Since the singlet is the ground state of the two-electron system, we first concentrate on the subspace of spatial wave functions that are symmetric under particle exchange. The interaction affects the spectrum very clearly, resulting in a strong level repulsion: the NNS distribution is in accordance with the predictions of RMT [12]. As a consequence, the obtained NNS distribution P_IS(s) (histogram shown as a thick solid line in Fig. 1) is well described by the Wigner surmise [12], P_W(s) = (π/2) s exp(−πs²/4), shown for comparison as a thin solid line.
For the triplet states, due to the antisymmetry of the spatial wave functions, it is reasonable to expect that the tendency of the two electrons to avoid each other results in a weaker level mixing. Nevertheless, although some differences with the singlet case would appear for other statistical measures (which are not possible to perform with the number of levels at hand), those differences are not qualitatively visible in the computed NNS distribution P_IT(s), shown as a dashed line in Fig. 1. Again the obtained histogram fits quite well with the predictions of RMT. We now turn to the dynamics of the classical counterpart of the two-electron quantum-dot system. We consider the classical interaction potential given by V_cl(|z_1 − z_2|) = α/√(d² + |z_1 − z_2|²), where the parameters α and d have been obtained from the best fit to the Coulomb interaction V_1D(|z_1 − z_2|), Eq. (3). (The fit is very good at all distances, down to the resolution of the spatial grid used in our numerical calculations.) The classical single-particle confining potential V_C(z) is a square well of length L, and we restrict the analysis to bounded motion within this box. In this situation, the effect of the confining potential is to reflect the particles elastically off the boundaries in each bounce, breaking the translational symmetry of the problem. As a consequence, the center-of-mass momentum is not preserved. Nevertheless, in the absence of interaction each single-particle energy is a constant of motion, and therefore the classical problem is integrable. On the other hand, the inclusion of the Coulomb interaction breaks the conservation of the single-particle energies and can induce an irregular dynamics in the confined system. For a total energy E = E* ≡ α/d there is a separatrix in the classical dynamics. That is, for E < E* the particles never cross each other and the sign of the relative coordinate z_2 − z_1 never changes, while for E > E* it can change.
Denoting ε ≡ E/E* and rescaling positions and velocities by the well length L, with d* ≡ d/L and z'_i, v'_i (i = 1, 2) the rescaled coordinates and velocities, the classical dynamics for a given value of the rescaled energy ε depends only on the parameter d*. Taking into account that L = 800 Å and that the best fit with the quantum Coulomb term V_1D gives d = 8 Å, we obtain d* = 0.01. In Fig. 2(a) we show, for ε = 0.9, the Poincaré surface of section v'_2 vs. z'_2 for the motion of one of the particles, taken at times when the other particle bounces off the left boundary of the well (the topology of the Poincaré section does not depend on which particle is selected). The motion is chaotic over most of the accessible phase space for the given energy shell. Fig. 3(a) shows another Poincaré section for ε = 16. Again, except for small regions of regular motion, the dynamics is fully chaotic.
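A crude sketch of how such a Poincaré section can be generated numerically is given below. It integrates the rescaled two-particle dynamics with a softened Coulomb potential and hard-wall reflections; the integrator, time step, and initial condition are illustrative assumptions rather than the authors' actual procedure.

```python
import numpy as np

# Two particles in a hard-wall box of rescaled length 1, interacting via a
# softened Coulomb potential V = d*/sqrt(d*^2 + (z1 - z2)^2), normalized so
# that V(0) = 1 = E* in rescaled units.
d_star = 0.01                                      # d* = d/L from the text

def force(z1, z2):
    dz = z1 - z2
    f = d_star * dz / (d_star**2 + dz**2) ** 1.5   # -dV/dz1 (repulsive)
    return f, -f

def poincare_section(z, v, dt=1e-5, n_points=2000):
    """Record (z2', v2') each time particle 1 bounces off the left wall (z1' = 0)."""
    pts = []
    while len(pts) < n_points:
        f1, f2 = force(z[0], z[1])
        v += dt * np.array([f1, f2])               # semi-implicit Euler step
        z += dt * v
        for i in (0, 1):                           # elastic reflections at the hard walls
            if z[i] < 0.0:
                z[i], v[i] = -z[i], -v[i]
                if i == 0:
                    pts.append((z[1], v[1]))
            elif z[i] > 1.0:
                z[i], v[i] = 2.0 - z[i], -v[i]
    return np.array(pts)

# Illustrative initial condition with rescaled energy close to 0.9.
section = poincare_section(z=np.array([0.2, 0.7]), v=np.array([1.0, -0.9]))
```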
Although the two Poincaré sections look qualitatively similar, the trajectories in the plane z'_2 vs. z'_1 are quite different, as can be seen from Figs. 2(b) and 3(b). Figure 2(b) (3(b)) shows, for ε = 0.9 (ε = 16), a piece of a trajectory in the z'_2 vs. z'_1 plane corresponding to an initial condition in the chaotic region. In Fig. 2(b) the trajectory never crosses the straight line defined by z'_2 = z'_1, because for ε = 0.9 one always has z'_2 > z'_1. In Ref. [13] the authors perform a classical analysis of the emergence of chaos due to the inclusion of an interparticle screened Coulomb interaction in an infinite well. The classical motion of such a system is qualitatively similar to our classical model only for ε < 1.
Of the ≈1000 eigenstates employed to compute the NNS distribution displayed in Fig. 1, only the lowest 5% have eigenenergies that correspond to ε < 1. The eigenenergies of the remaining states correspond to values of ε ranging from 1 to 16, for which, as we have shown, the classical dynamics is chaotic over most of the energy shell. Therefore the NNS distribution computed from these states is a remarkable quantum signature of the underlying classical chaotic dynamics.
For energies ε ≫ 1 the classical regular regions in the Poincaré maps should become predominant over the chaotic ones, because in such a limit the Coulomb term can be considered as a perturbation of the non-interacting two-electron Hamiltonian. Nevertheless, values of ε ≫ 1 are not realistic for quantum wells describing semiconductor nanostructures. In conclusion, we show that the Coulomb interaction is responsible for the chaotic dynamics in a quasi-one-dimensional two-electron quantum dot. This is the first study of a realistic few-body system where the emergence of chaos due to interparticle interactions is unmistakably demonstrated through the analysis of both its quantum spectral properties and the dynamics of its classical counterpart. We believe that the present results also put serious constraints on some models of semiconductor nanostructures in which the interaction among particles is modeled, for a finite number of particles, as a capacitive term in the form of a constant interaction (CI). The inclusion of the CI gives an interacting Hamiltonian whose spectral properties are those of the non-interacting one. In other words, a Poisson NNS distribution will remain Poissonian after considering the interaction in the CI model.
From high volume to “zero” proctology: Italian experience in the COVID era
Purpose The coronavirus disease 2019 (COVID-19) pandemic hit Italy early and strongly, challenging the whole health care system. Proctological patients and surgeons are experiencing a previously unseen change in care with unknown repercussions. Here we discuss the proctological experience of 4 Italian hospitals during the COVID-19 pandemic. Methods Following remote brainstorming, the authors summarised their experience in managing proctological patients during the COVID-19 pandemic and put forward some practical observations to investigate further. Results The 4 hospitals shifted from a high-volume proctological activity to almost "zero" visits and surgery. Every patient accessing the hospital must respect a specific COVID-19 protocol. Proctological patients can be stratified, based on presentation and management considerations, into (1) neoplastic patients, the only ones allowed to be surgically treated, (2) the ones requiring urgent care, operated on only in highly selected cases, and (3) the stable, already known patients, managed remotely. Changes in the clinical management of proctological disease are presented together with some considerations to be explored. Conclusions In the absence of scientific evidence, these practical considerations may be valuable to proctological surgeons starting to face the COVID-19 pandemic. Beside the more clinical considerations, this crisis produced unexpected consequences such as an improvement of the therapeutic alliance and a shift towards telemedicine that may be worth exploring also in the post-COVID-19 era.
Introduction
In the first month of the coronavirus disease 2019 (COVID-19) pandemic, almost 2 million people have been affected by SARS-CoV2, with more than 125,000 deaths worldwide [1].
Italy was the first Western country hit, with 21,000 deaths and severe outbreaks mainly affecting the northern regions. Hospitals are close to collapsing due to the high number of patients needing the intensive care unit (ICU) [2].
With regard to healthcare, many Italian institutions have now been transformed into COVID hospitals, with operating theatres turned into ICUs. Our hospitals dramatically changed their clinical behaviour: only patients with symptomatic and undeferrable neoplastic diseases are allowed to be surgically treated, following specific rules (the COVID procedure). All outpatient visits and operations of non-oncological patients were suspended, with the exception of highly selected cases. Little is known on how to best manage patients with benign diseases and the consequences this interruption of care will have in post-pandemic times.
Proctologic diseases, for their high incidence and great impact on the quality of life, have real social, psychological and healthcare repercussions.
This communication aims to define how COVID-19 affects proctologic patients, based on the experience of four University hospitals across Italy (Rome, Pisa, Naples and Turin), tertiary referral centres with a high volume of dedicated activity.
Believing that in any crisis lies an opportunity, this communication puts forward clinical observations and lessons learnt that could be valuable also in the post-COVID era.
Management of proctological patients in COVID-19 times
Most proctologic activities were suspended from the start of the Italian lockdown on the 9th of March, including outpatient visits, endoscopy, CT/MRI diagnostics and surgery.
We suddenly shifted from a high-volume proctological activity, accounting for more than 10,000 outpatient visits and 3000 proctologic surgical procedures per year, to "zero" visits and surgery.
All patients in treatment for proctologic diseases were contacted by expert members of the team to inform them of the changes in care. A contact phone number was given to all patients, with availability 4 h a day.
Only highly selected patients are admitted for surgery following a "COVID procedure".
This includes the following:
– General recommendations such as social distancing, the use of masks and gloves, and frequent hand washing with antiseptic gel.
– A first telephonic triage to investigate symptoms such as fever, cough, asthenia, myalgia or conjunctivitis, diarrhoea, dyspnoea, recent travel to outbreak areas and contact with SARS-CoV-2-positive patients.
– A second triage 24–48 h before admission to perform an oral-nasal-pharyngeal swab to screen for SARS-CoV-2.
As general rules, the hospitalisation should be as short as possible, and patients' visitors are not allowed in the ward.
With regard to practical management, we can stratify proctologic patients into 3 main groups:
1) The only ones allowed to be surgically treated: patients with anorectal tumours. In the case of anal cancer, an outpatient biopsy is performed, and the patient is sent to the oncologists for chemoradiotherapy.
Patients with perianal cancer complicated by stenosis of the lumen or bleeding are rapidly admitted, with the COVID procedure. A transanal complete excision of the tumour is performed, preserving as much anal sphincter as possible, while avoiding the persistence of tumoural tissue bleeding or occluding the lumen. A colostomy is performed only if strictly necessary. Of note, Stoma Centres are currently suspended and work by telemedicine (with videos and photos), sending the personalised stoma dressing directly to the patient's home.
The patient is quickly discharged and monitored remotely, with physical surgical checks only if needed. After surgical healing, and hopefully once the pandemic is over, patients will be sent to chemoradiotherapy with the operative report and histological exam.
It is important to stress that all decisions are shared online or by phone with the multidisciplinary team.
2) The ones requiring urgent care:
– Patients scheduled for surgery with a sudden worsening of symptoms;
– New patients with relevant symptomatology;
– Operated patients with urgent complications.
Severe symptoms call for an urgent visit, with an outpatient surgical procedure if needed. If necessary, a day hospital (DH) surgery can be promptly organised, passing through the emergency room with the usual COVID procedure.
Patients with complicated haemorrhoidal disease should be treated conservatively whenever possible. For instance, in case of significant (VAS > 7) chronic pain, an analgesic treatment is given. Outpatient removal of staples following haemorrhoidopexy can be provided if needed. Acute severe pain is mostly due to an associated acute thrombosis or an anal fissure. For acute thrombosis, mesoglycan or low molecular weight heparin is given. If increasing pain and thrombosis persist, an outpatient thrombectomy under local anaesthesia is provided. In case of a fissure in ano with sphincteric spasm, or anal stenosis not responding to medical treatment, an outpatient dilation with cauterization or an internal sphincterotomy in DH can exceptionally be performed.
An urgent operation should be reserved for haemorrhoids with chronic and progressive anal bleeding producing severe anaemia (Hb < 8 g/dL) or, in the early postoperative period, for massive active bleeding occurring in a short time and causing a rapid and considerable decrease of haemoglobin.
The operation for haemorrhoids is performed in an outpatient setting or, in highly selected cases, in DH. The preferred anaesthesia is propofol plus local infiltration, with minimal fluid intake, to reduce the possibility of urinary retention. The gold standard operation is conventional haemorrhoidectomy, with radiofrequency to minimise postoperative bleeding [3], or, alternatively, transanal haemorrhoidal dearterialization (THD) [4].
Stapled prolapsectomy is currently used less, in order to avoid exceptional but frightening complications that are more difficult to face in this period [5]. In the case of associated pathologies requiring anticoagulants or antiplatelet agents, the modulation of therapy is agreed with the cardiologists. In order to decrease the risk of postoperative bleeding, we can add tranexamic acid if not contraindicated [6], minimising the use of non-steroidal anti-inflammatory drugs. In order to avoid a tight physical follow-up, patients are regularly contacted over the phone and encouraged to self-medicate, with remote monitoring offered by phone or online.
With regard to perianal sepsis, the first approach should be conservative, using antibiotics.
Caution should be exercised with debilitated patients, because of the possible local spread (Fournier's gangrene) with general sepsis [7]. If this conservative therapy is not successful, outpatient drainage with seton positioning [8], under local anaesthesia, can be performed as a definitive solution or as a "bridge step" towards further radical surgery (MRI or endoluminal ultrasound is currently not available). All patients should be instructed on the management of the seton and on the use of remote follow-up.
When the abscess is due to perianal Crohn's disease after first-line antibiotic therapy, drainage with seton could be necessary to allow further medical treatment [9].
3) The stable, already known patients: those already scheduled for surgery. These patients were reassured and kept on the operative list. Therapy was refined, in terms of dosage and timing, and renewed when needed. A contact phone number was given, for any adverse event or for messages, photos or videos (telemedicine).
All patients in treatment for pelvic-perineal rehabilitation suspended their treatment, receiving remote support.
Considerations on COVID-19 and proctology
The management of proctological patients in Italy has completely changed during the COVID-19 pandemic. The normal surgical programme was suspended; only oncologic surgery is now allowed [10].
Our usually high-volume proctologic activity was suddenly reduced approximately to "zero": only urgent care is provided, using the COVID procedure. Patients are followed remotely by telemedicine, with the difficulty, intrinsic to proctology, of showing a hidden part shrouded in shame. Telemedicine could, however, be implemented also in post-COVID times, not as an alternative to physical examination, but as a complementary tool. For this shift to take place, important clinical and medico-legal questions have to find clear answers. In addition, a framework to reward the medical staff working remotely (i.e. medical smart working) has to be elaborated for telemedicine to be sustainable.
The relationship between proctologic patients and surgeons has completely changed. The crisis produced an unexpected change in patients' behaviour, generating a remarkable improvement of the therapeutic alliance, with a considerable decrease in defensive feelings.
We have to acknowledge the great sense of responsibility patients are showing during this crisis by offering them definitive surgical treatment within an acceptable time.
It is mandatory to create different paths of care, separating COVID-positive and COVID-free hospitals, providing a larger surgical availability to non-oncological, non-urgent patients.
There is no coming back to normality as we know it; the future is up to us.
Many open questions remain, such as: When will the crisis stop, and when will we restart proctologic surgery? What is the impact of this suspension of treatment on patients' symptoms, psychological status and quality of life? What will we do to catch up with the surgical waiting list? Where will we operate on COVID-positive and COVID-free proctologic patients?
Further studies are necessary to inform on the best management of proctologic patients during and after the COVID crisis, a turning point in our history.
We hope this letter can be helpful to other surgeons dealing with proctological patients during this pandemic, wishing that these considerations will no longer be necessary in the near future.
The work described has not been published before; it is not under consideration for publication anywhere else.
Availability of data and material All data and material are available on request Author's contributions Domenico Mascagni: conceptualization, writing, final review and final approval; Chiara Eberspacher: conceptualization, writing original draft and final revision with approval; Pietro Mascagni: data analysis, drafting the article and final approval; Alberto Arezzo: data acquisition, critical revision and final approval; Francesco Selvaggi: data acquisition, critical revision and final approval; Alessandro Sturiale: data collection, editing and final approval; Giovanni Milito: data acquisition, critical revision and final approval; Gabriele Naldini: conceptualization, data collection, critical revision for important intellectual content and final approval. This publication has been approved by all co-authors.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Ethics approval This is an observational study. No ethical approval is required, according to Department of Surgical Science Ethics Committee.
Crack Sensitivity Control of Nickel-Based Laser Coating Based on Genetic Algorithm and Neural Network
This paper aimed to establish a nonlinear relationship between laser cladding process parameters and the crack density of a high-hardness, nickel-based laser cladding layer, and to control the cracking of the cladding layer via an intelligent algorithm. By using three main process parameters (overlap rate, powder feed rate, and scanning speed), an orthogonal experiment was designed, and the experimental results were used as training and testing datasets for a neural network. A neural network prediction model between the laser cladding process parameters and coating crack density was established, and a genetic algorithm was used to optimize the prediction results. To improve their prediction accuracy, genetic algorithms were used to optimize the weights and thresholds of the neural networks. In addition, the performance of the neural network was tested. The results show that the order of influence on the coating crack sensitivity was as follows: overlap rate > powder feed rate > scanning speed. The relative error between the predicted value and the experimental value of the three-group test genetic algorithm-optimized neural network model was less than 9.8%. The genetic algorithm optimized the predicted results, and the technological parameters that resulted in the smallest crack density were as follows: powder feed rate of 15.0726 g/min, overlap rate of 49.797%, scanning speed of 5.9275 mm/s, crack density of 0.001272 mm/mm². Therefore, the amount of crack generation was controlled by the optimization of the neural network and genetic algorithm process.
Introduction
Laser cladding technology is an advanced manufacturing method that uses a high-energy laser beam to irradiate cladding powder and a matrix to rapidly melt and solidify it [1]. It can produce high-performance alloy surfaces on inexpensive metal substrates without affecting the properties of the matrix, thus conserving valuable rare metal materials [2,3]. Laser cladding technology has not been widely promoted since its conception, mainly because the most difficult problem in laser cladding is the cracking of the cladding layer, which limits its practical application [4]. The laser cladding of high-hardness, nickel-based coatings is particularly prone to crack generation, and it is difficult to find the optimal process in traditional process experiments [5][6][7]. In this study, Ni60 powder was laser clad on a #45 steel substrate. There are differences in the physical properties of the #45 steel and Ni60 powder materials. Together with the rapid heating and rapid cooling of the high-energy laser beam, if the process is not properly selected, the cladding layer will develop great thermal stress. The thermal stress of the laser cladding layer is usually a tensile stress, and cracks are generated when the local tensile stress exceeds the strength of the coating material. Cracks tend to occur at the dendritic boundaries, pores, and inclusions of the laser cladding layer because of their low strength.

A #45 steel plate with a size of 110 mm × 80 mm × 8 mm was used as the base, and the quantity totaled 20 pieces. Before laser cladding, the substrate was first sandblasted and cleaned with absolute ethanol to make the surface of the substrate flat and free of oil and other debris. To obtain a corrosion-resistant, wear-resistant, high-hardness laser cladding layer, a Ni60 self-fluxing alloy powder was selected as the cladding powder. The powder particle size was 45-106 μm. The powder was previously dried (120 °C, held for 2 h). The chemical compositions of the #45 steel and the Ni60 powder are exhibited in Tables 1 and 2.
Experimental Method
The mechanism of crack formation in the laser cladding layer was investigated [19,20]. It was found that the process parameters, especially the powder feed rate, the overlap rate, and the scanning speed, had a great influence on the crack sensitivity. A three-factor, four-level orthogonal experiment (L16(4³)) was designed, as exhibited in Tables 3 and 4. The laser power was 1600 W, and the defocus amount was 16 mm. The crack density was evaluated to measure the impact of the process parameters on the crack sensitivity, and is given by Equation (1). The laser cladding orthogonal experiment results are presented in Table 4.
ρ = Σᵢ lᵢ / λ (1)

where ρ represents the crack density; lᵢ indicates the length of the i-th crack; and λ represents the area of the laser cladding layer corresponding to a certain set of process parameters. On the surface of the laser cladding layer, the number of cracks generated under the different orthogonal experimental schemes was measured with a coloured penetrant detection agent, and the crack lengths were measured to calculate the crack density. The test schemes with the least and largest numbers of cracks in the cladding layer are respectively shown in Figure 2a,b.
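A one-line implementation of Equation (1), as reconstructed above, might look like the following; the function name and the example values are purely illustrative.

```python
def crack_density(crack_lengths_mm, cladding_area_mm2):
    """Equation (1): total measured crack length divided by the cladding area (mm/mm^2)."""
    return sum(crack_lengths_mm) / cladding_area_mm2

# Example with made-up numbers: three cracks over a 2000 mm^2 cladding area.
print(crack_density([3.2, 1.5, 0.8], 2000.0))   # 0.00275 mm/mm^2
```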
Analysis of Orthogonal Experiment Results
To study the degree of influence of various factors on crack sensitivity, the range analysis of the orthogonal test crack density is presented in Table 5.
Ki represents the average crack density corresponding to each factor when the level number is i. If the range of a factor is smaller, its degree of influence on cracking is smaller; if the range is greater, its influence is greater. According to the data of the range analysis in Table 5, the order of the influence of the laser cladding parameters on the cracking of the cladding layer was as follows: overlap rate (B) > powder feed rate (A) > scanning speed (C). The overlap rate had the greatest influence on the cracking of the cladding layer, and the scanning speed had the least influence. In the orthogonal test, the optimum process parameters were those of the group with the smallest crack density, so the optimum group was A1B1C1, the parameters of which were a powder feed rate of 15 g/min, an overlap rate of 45%, and a scanning speed of 5 mm/s. The factor effect curve of the orthogonal experiment is shown in Figure 3.
The powder feed rate, overlap rate, and scanning speed had different degrees of influence on the crack sensitivity [21], which is evident from the factor effect curve. With the increase in the level, the crack density produced by factor A showed an increasing trend, and the crack density of factor B increased the fastest; however, the crack density of factor C did not change significantly.
• In the range of the powder feed rate from 15 to 21 g/min, the crack density of the cladding layer increased gradually with the increase in the powder feed rate. The crack density of the cladding layer was the smallest when the powder feed rate was 15 g/min. The main reason for this is that the laser power was constant and the total output energy of the laser equipment was fixed. Additionally, the specific energy decreased as the powder feed rate increased, and the temperature gradient became larger, resulting in an increase in crack density. In addition, with the increase of powder feeding, the specific energy was insufficient; some powders were not fully melted, and the hardness and brittleness of the cladding layer increased, which is also an important factor that led to the increase in crack sensitivity.
• In the range of the overlap rate from 45% to 60%, the crack density of the cladding layer increased continually with the increase of the overlap rate. When the overlap rate was at its minimum of 45%, the crack density of the cladding layer was the smallest. The reason for this is that with the increase of the overlap rate, the cladding layer became increasingly thicker; when the overlap became too high, it resulted in the excessive-overlap phenomenon. As the cladding layer became thicker and inclined at a certain angle, the temperature gradient of the cladding layer increased, as did the internal stress, resulting in an increase in the number of cracks.
• In the scanning speed range of 5 to 11 mm/s, the crack density of the cladding layer increased with the increase of scanning speed. The crack density was the lowest when the scanning speed was 5 mm/s. The reason for this is that the energy input to the substrate and cladding layer decreased with the increase in scanning speed [22]. The molten pool gradually became shallower, and the thickness of the cladding layer became thinner. The molten pool and the powder were not sufficiently melted, so the crack sensitivity was increased.
Prediction and Control of Cracks in Nickel-Based Laser Cladding Layer Based on Neural Network and Genetic Algorithms
Traditionally, the direct process test has been used to find the optimal process parameters to control cracks; however, this process is time-consuming, laborious, cost-increasing, and wastes materials [23][24][25]. As the direct process test requires the range of process parameters to be set and the process parameters to be optimized by experiments in a certain range, it is difficult to obtain the absolute optimal parameters via the direct process test. This problem must therefore be solved with a method that predicts global optimization as a whole.
The neural network is characterized by nonlinear fitting prediction. The genetic algorithm can find the optimal solution within a specified range via iterative optimization. The combination of the neural network and the genetic algorithm can realize intelligent predictive optimization. Due to the limited number of training samples, the genetic algorithm was used in the present study to optimize the weights and thresholds of the neural network to improve its predictive accuracy, and was then used to optimize the predicted results of the neural network. The overall algorithm flow is presented in Figure 4. The specific process of the genetic algorithm for the optimization of the neural network (weights and thresholds) and of the prediction results is as follows.
(1) Planning is conducted to determine the network topological structure.
(2) The genetic algorithm encodes the weights and thresholds of the neural network.
(3) The prediction error of the neural network is regarded as the fitness function, and the optimal weights and thresholds are output by selection, crossover, and mutation until the set number of generations is reached.
(4) The optimized weights and thresholds are assigned to the neural network, which is then trained until the mean square error satisfies the accuracy requirement.
(5) The optimized neural network is used for prediction according to the process parameters, and the prediction result is used as the individual fitness function value of the genetic algorithm.
(6) The optimal individual is output by cyclical selection, crossover, and mutation until the termination condition of the genetic algorithm is met.
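The following is a compact, illustrative sketch of the workflow listed above, not the authors' original (MATLAB/newff-based) implementation: the 46 weights and thresholds of a 3-9-1 network are encoded as one real-valued chromosome, and a genetic algorithm minimizes the network's prediction error on the normalized training data. The sigmoid activations, population size, mutation rate, and generation count are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(0)
G, M, Q = 3, 9, 1                         # input, hidden, output neurons
n_genes = G * M + M * Q + M + Q           # 46 weights + thresholds

def unpack(chrom):
    w1 = chrom[:G * M].reshape(G, M)
    w2 = chrom[G * M:G * M + M * Q].reshape(M, Q)
    b1 = chrom[G * M + M * Q:G * M + M * Q + M]
    b2 = chrom[-Q:]
    return w1, w2, b1, b2

def predict(chrom, X):
    w1, w2, b1, b2 = unpack(chrom)
    h = 1.0 / (1.0 + np.exp(-(X @ w1 + b1)))      # hidden layer (sigmoid assumed)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # output layer (sigmoid assumed)

def fitness(chrom, X, y):
    return np.mean((predict(chrom, X).ravel() - y) ** 2)   # prediction error

def evolve(X, y, pop_size=40, generations=200, mut_rate=0.1):
    pop = rng.normal(0.0, 0.5, size=(pop_size, n_genes))
    for _ in range(generations):
        errs = np.array([fitness(c, X, y) for c in pop])
        parents = pop[np.argsort(errs)][: pop_size // 2]        # selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes)                      # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n_genes) < mut_rate               # mutation
            child[mask] += rng.normal(0.0, 0.2, mask.sum())
            kids.append(child)
        pop = np.vstack([parents, kids])
    errs = np.array([fitness(c, X, y) for c in pop])
    return pop[np.argmin(errs)]

# X: normalized (powder feed rate, overlap rate, scanning speed) rows from Table 4;
# y: the corresponding normalized crack densities.
```

The best chromosome returned by evolve can then seed further gradient training of the network, corresponding to step (4); steps (5) and (6) amount to running the same GA loop over the process parameters themselves, with the trained network's predicted crack density as the fitness function.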
Establishment of Network Topology Model for Crack Prediction
The topological structure of the neural network between the powder feed rate, overlap rate, scanning speed, and crack density was established as shown in Figure 5. The neural network was created by the newff function. The neural network is suitable for prediction and has the characteristics of strong nonlinear approximation ability, a simple training algorithm, and high adaptive ability. The topological structure of the neural network was divided into three layers including the input layer, hidden layer, and output layer. The input layer had three neurons, which represent the powder feed rate, overlap rate, and scanning speed, respectively. According to empirical Equation (2) [26] and the trial-and-error method, it can be determined that there were nine neurons in the hidden layer. The output layer had one neuron, which represented the crack density.
M = √(G + Q) + C (2)

where G is the number of input neurons; Q is the number of output neurons; C is a constant between [1,10]; and M is the number of neurons in the hidden layer.
Establishment of Network Topology Model for Crack Prediction
The topological structure of the neural network between the powder feed rate, overlap rate, scanning speed, and crack density was established as shown in Figure 5. The neural network was created by the newff function. The neural network is suitable for prediction and has the characteristics of strong nonlinear approximation ability, a simple training algorithm, and high adaptive ability. The topological structure of the neural network was divided into three layers including the input layer, hidden layer, and output layer. The input layer had three neurons, which represent the powder feed rate, overlap rate, and scanning speed, respectively. According to empirical Equation (2) [26] and the trial-and-error method, it can be determined that there were nine neurons in the hidden layer. The output layer had one neuron, which represented the crack density.
where G is the number of input neurons; Q is the number of output neurons; C is a constant between [1,10]; and M is the number of neurons in the hidden layer.
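As a rough illustration (a minimal Python sketch, not the authors' MATLAB/newff implementation), the hidden-layer size given by the empirical rule and the resulting parameter count of the 3-9-1 network can be checked as follows; the constant C = 7 is an assumed value chosen so that the rule reproduces the nine hidden neurons reported.

```python
import math

def hidden_neurons(G, Q, C):
    # Empirical rule M = sqrt(G + Q) + C, with C tuned by trial and error in [1, 10].
    return round(math.sqrt(G + Q) + C)

G, Q = 3, 1                      # inputs: powder feed rate, overlap rate, scanning speed; output: crack density
M = hidden_neurons(G, Q, C=7)    # C = 7 is an assumption that yields M = 9

weights = G * M + M * Q          # 3*9 + 9*1 = 36 connection weights
thresholds = M + Q               # 9 + 1 = 10 thresholds (biases)
print(M, weights, thresholds, weights + thresholds)   # 9 36 10 46
```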
The network contained 36 connection weights between neurons and 10 thresholds for the hidden and output layers, giving a total of 46 weights and thresholds. These connection weights and thresholds were continuously modified, first by the genetic algorithm optimization and then by sample training of the neural network, so that the network output converged toward the target values.
Training of Crack Prediction Network Model
As presented in Table 4, 16 groups of laser cladding Ni60 coating process parameters and crack densities were used as training samples to train the constructed neural network. The units of the factors and the output of the orthogonal experiment were not uniform, and the range of the activation function of the output layer is limited. Therefore, the target data of the network training were mapped to the value range of the activation function, i.e., the data were normalized and converted into numbers in the interval (0, 1). Equation (3) is the normalization formula:

P = (S − S_min) / (S_max − S_min), (3)

where P is the normalized data; S is an arbitrary data value; and S_max and S_min represent the maximum and minimum values in the data, respectively. The prediction neural network is a forward, supervised-learning neural network model. The number of iterations was set to 1000. The samples were used to train the network model in Figure 5.
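A minimal sketch of the normalization step, assuming Equation (3) is the standard min-max mapping P = (S − S_min)/(S_max − S_min); the sample values below are hypothetical and Python is used in place of the authors' toolchain.

```python
import numpy as np

def normalize(samples):
    # Map each column of the raw data onto [0, 1]: P = (S - S_min) / (S_max - S_min)
    s_min = samples.min(axis=0)
    s_max = samples.max(axis=0)
    return (samples - s_min) / (s_max - s_min), s_min, s_max

# Hypothetical rows: [powder feed rate (g/min), overlap rate (%), scanning speed (mm/s)]
X_raw = np.array([[15.0, 45.0, 5.0],
                  [20.0, 50.0, 7.0],
                  [25.0, 55.0, 9.0]])
X, x_min, x_max = normalize(X_raw)
```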
The input layer is represented by a matrix of the normalized process parameters, and the connection weights of the hidden layer are expressed as matrices. Applying the activation function gives the network output value y_i (i = 0, 1, 2, ..., 8) of each hidden-layer node, from which the output value of the output layer is obtained. The training error is then backpropagated to adjust the connection weights for error correction, with a training error target of 10^-5. The training error of each sample is computed from the predicted and experimental values, where δ represents the training error, Z_k is the experimental value, and Y_k is the predicted value; an error term is likewise defined for each node of the hidden layer. Error backpropagation between the hidden layer and the input layer corrected the connection weights according to Equation (11), in which η is the learning rate, set to 0.01. There were 27 connection weights from the input layer to the hidden layer, and the modified connection weights from the input layer to the hidden layer were calculated in turn; f'(x) and w' denote the derivative of the activation function and the modified weight, respectively.

The weights between the hidden layer and the output layer were updated from w_09 to w_89. The cyclic calculations of Equations (3), (4) and (6)-(8) were performed until the training error requirement was satisfied. The next sample was then trained so that every sample met the set approximation accuracy, after which training ended and the recognition and prediction abilities were tested. The reduction of the training error during training is shown in Figure 6.
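The following is a minimal Python sketch of the training loop described above for a single normalized sample. The 3-9-1 topology, the learning rate of 0.01, the iteration limit of 1000, and the error target of 10^-5 are taken from the text; the sigmoid activation, the random initialization, and the sample values are assumptions, since the paper does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)
G, M, Q = 3, 9, 1                      # 3-9-1 topology from Figure 5
W1, b1 = rng.normal(size=(G, M)), np.zeros(M)   # 27 weights + 9 hidden thresholds
W2, b2 = rng.normal(size=(M, Q)), np.zeros(Q)   # 9 weights + 1 output threshold
eta = 0.01                             # learning rate stated in the text

def sigmoid(x):                        # assumed activation; the paper does not give its form
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.2, 0.5, 0.7])          # one normalized sample (hypothetical values)
z = np.array([0.3])                    # its normalized crack-density target (hypothetical)

for _ in range(1000):                  # iteration limit of 1000 given in the text
    y_hid = sigmoid(x @ W1 + b1)       # hidden-layer outputs y_i, i = 0..8
    y_out = sigmoid(y_hid @ W2 + b2)   # network output
    delta = 0.5 * np.sum((z - y_out) ** 2)          # per-sample training error
    if delta < 1e-5:                   # training error target of 10^-5
        break
    e_out = (y_out - z) * y_out * (1 - y_out)       # output-layer error term
    e_hid = (e_out @ W2.T) * y_hid * (1 - y_hid)    # backpropagated hidden-layer error
    W2 -= eta * np.outer(y_hid, e_out); b2 -= eta * e_out
    W1 -= eta * np.outer(x, e_hid);     b1 -= eta * e_hid
```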
Optimization of the Neural Network by the Genetic Algorithm
The iterative optimization capability of the genetic algorithm was used to optimize the initial weights and thresholds of the neural network. The optimal individuals (weights and thresholds) were then assigned to the neural network for prediction. The genetic algorithm-optimized neural network mainly involved population initialization, a fitness function, selection, crossover, and mutation. The initial population was encoded with real-number coding, and each individual contained all the weights and thresholds of the neural network. Using the weights and thresholds optimized by the genetic algorithm, the neural network was trained and then used to predict the output; the sum of the absolute errors between the predicted and test values was taken as the fitness function, given by Equation (13). The genetic algorithm used roulette-wheel selection, the crossover operation used real-number crossover, and the mutation operation randomly mutated a point in the chromosome according to the mutation probability to generate a new individual.
E = m Σ|g_i − l_i|, i = 1, ..., n, (13)

where E is the value of the individual fitness function; m is a coefficient; n is the number of network output nodes; g_i is the test value; and l_i is the predicted value.
The parameters of the genetic algorithm used to optimize the neural network were set as follows: 80 generations, a population size of 10, a crossover probability of 0.3, and a mutation probability of 0.1. The iterative descent of the fitness value in Figure 7 and the simulation diagram of the genetic algorithm-optimized neural network prediction in Figure 8b were obtained via programming simulation. Figure 8a presents a predictive simulation diagram of the neural network without genetic algorithm optimization. Figure 8 compares the predicted simulation results and demonstrates that the neural network predictions optimized by the genetic algorithm were more accurate.
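As a hedged illustration of this weight/threshold optimization stage, the sketch below implements a simple real-coded genetic algorithm over the 46 network parameters with the population size, generation count, crossover probability, and mutation probability quoted above, and with Equation (13) (sum of absolute errors) as the fitness. The roulette selection, arithmetic crossover, and Gaussian point mutation used here are simplified stand-ins for the authors' exact operators, and the training data X, z are random placeholders for the normalized samples of Table 4.

```python
import numpy as np

rng = np.random.default_rng(1)
G, M, Q = 3, 9, 1
N_PARAM = G * M + M + M * Q + Q          # 46 weights and thresholds of the 3-9-1 network

def unpack(v):
    """Split a flat parameter vector into weight matrices and threshold vectors."""
    i = 0
    W1 = v[i:i + G * M].reshape(G, M); i += G * M
    b1 = v[i:i + M];                   i += M
    W2 = v[i:i + M * Q].reshape(M, Q); i += M * Q
    b2 = v[i:i + Q]
    return W1, b1, W2, b2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(v, X):
    W1, b1, W2, b2 = unpack(v)
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

def fitness(v, X, z, m=1.0):
    # Equation (13): E = m * sum_i |g_i - l_i|, the sum of absolute prediction errors.
    return m * float(np.sum(np.abs(z - predict(v, X))))

def ga_optimize(X, z, pop_size=10, generations=80, pc=0.3, pm=0.1):
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, N_PARAM))        # real-number coding
    for _ in range(generations):
        errs = np.array([fitness(ind, X, z) for ind in pop])
        probs = 1.0 / (errs + 1e-12)                              # roulette: lower error, higher chance
        probs /= probs.sum()
        pop = pop[rng.choice(pop_size, size=pop_size, p=probs)].copy()
        for i in range(0, pop_size - 1, 2):                       # arithmetic (real-number) crossover
            if rng.random() < pc:
                a = rng.random()
                child1 = a * pop[i] + (1 - a) * pop[i + 1]
                child2 = a * pop[i + 1] + (1 - a) * pop[i]
                pop[i], pop[i + 1] = child1, child2
        mask = rng.random(pop.shape) < pm                         # random point mutation
        pop[mask] += rng.normal(scale=0.1, size=int(mask.sum()))
    errs = np.array([fitness(ind, X, z) for ind in pop])
    return pop[np.argmin(errs)]                                   # initial weights/thresholds for BP training

# Hypothetical normalized training data (in practice the 16 samples of Table 4):
X = rng.random((16, G))
z = rng.random((16, Q))
best_params = ga_optimize(X, z)
```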
The comparison between the predicted values of the genetic algorithm-optimized neural network and the experimental values is shown in Table 6. The maximum relative error between the predicted and experimental values was 9.02%, which demonstrates that the precision of the neural network model optimized by the genetic algorithm is acceptable. Equation (14) is the mean square error formula:

V = (1/N) Σ(O_t − P_t)^2, t = 1, ..., N, (14)

where V represents the mean square error; O_t is the experimental value; P_t is the predicted value; and N is the number of samples. The mean square error of the neural network optimized by the genetic algorithm was 1.624376 × 10^-5, and that of the unoptimized neural network was 3.250437 × 10^-4; the mean square error was thus greatly reduced.
Genetic Algorithm-Optimized Neural Network Model Verification
Three groups of process tests were carried out to verify the predictive performance of the genetic algorithm-optimized neural network model for the crack density of the high-hardness, nickel-based cladding layer. The prediction results obtained with the optimized neural network model are shown in Table 7. The differences between the test values and the predicted values were small, and the errors of the three groups of test results were within 9.8%. This verifies the reliability of the optimized neural network model and demonstrates the feasibility of applying it to predict crack density in laser cladding. The remaining errors of the genetic algorithm-optimized neural network model have the following sources. (1) The model is only an approximation of the nonlinear problem, not the true relationship. (2) The accuracy was limited because sample collection was difficult, the number of samples was small, and the training of the neural network was therefore insufficient. (3) The sample data were obtained by manual measurement of the cracks, so the measurement data themselves contain errors. (4) During laser cladding, external environmental factors also affect the crack sensitivity of the cladding layer to differing degrees.
Genetic Algorithms for Optimizing the Prediction Results of the Neural Network Model
Based on the prediction results of the optimized neural network model, the genetic algorithm was used again to optimize the predicted values and find the process parameters with the smallest crack density, thereby achieving intelligent optimization. The crack density predicted by the neural network was used as the fitness function value; the smaller the crack density, the better the individual. For this genetic algorithm the number of generations was 80, the population size was 20, the crossover probability was 0.4, and the mutation probability was 0.2. The fitness curves of the genetic algorithm are presented in Figure 9. The average fitness curve and the best fitness curve coincided after about 40 generations, after which the fitness value no longer changed. The algorithm was run for the set 80 generations, the iteration was then stopped, and the optimal individual was output.
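A minimal sketch of this second optimization stage: a small genetic algorithm searches the process-parameter space for the minimum predicted crack density, using the generation count, population size, crossover probability, and mutation probability quoted above. The parameter bounds, the truncation-style selection (the paper itself uses roulette selection), and the stand-in predictor passed to ga_search are assumptions for illustration; in practice, predict would be the trained GA-BP network.

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed bounds for [powder feed rate (g/min), overlap rate (%), scanning speed (mm/s)];
# the real search range is that of the orthogonal experiment.
LOW = np.array([10.0, 40.0, 4.0])
HIGH = np.array([25.0, 55.0, 9.0])

def ga_search(predict, pop_size=20, generations=80, pc=0.4, pm=0.2):
    """Minimize the crack density returned by predict(params); predict should be the trained GA-BP model."""
    pop = rng.uniform(LOW, HIGH, size=(pop_size, LOW.size))
    for _ in range(generations):
        fit = np.array([predict(p) for p in pop])                  # predicted crack density = fitness
        parents = pop[np.argsort(fit)][: pop_size // 2]            # simplified truncation selection
        children = parents[rng.integers(0, len(parents), pop_size)]
        mates = parents[rng.integers(0, len(parents), pop_size)]
        a = rng.random((pop_size, 1))
        do_cross = rng.random(pop_size) < pc
        children = np.where(do_cross[:, None], a * children + (1 - a) * mates, children)
        do_mut = rng.random(children.shape) < pm
        children = children + do_mut * rng.normal(scale=0.03, size=children.shape) * (HIGH - LOW)
        pop = np.clip(children, LOW, HIGH)
    return pop[np.argmin([predict(p) for p in pop])]

# Stand-in predictor for demonstration only (replace with the trained network's prediction function):
best = ga_search(lambda p: abs(p[0] - 15.1) + abs(p[1] - 49.8) + abs(p[2] - 5.9))
```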
The orthogonal experimental results showed that the optimal process parameters obtained by the laser cladding test were a powder feed rate of 15 g/min, an overlap rate of 45%, a scanning speed of 5 mm/s, and a crack density of 0.002472 mm/mm². The genetic algorithm optimization of the predicted results gave the process parameters with the smallest crack density as a powder feed rate of 15.0726 g/min, an overlap rate of 49.7997%, and a scanning speed of 5.9275 mm/s, with a minimum crack density of 0.001272 mm/mm². The optimal laser cladding process parameters obtained via predictive optimization thus produced a smaller crack density than that obtained from the orthogonal experiments, and the optimization effect was better. Experimental verification was carried out with the optimal process parameters predicted by the genetic algorithm. The validation results are displayed in Figure 10, which depicts the inside of the cladding layer as observed via SEM. No obvious cracks were found on the surface of the cladding layer, which differs from the non-zero minimum crack density predicted by the genetic algorithm optimization.
The main reasons for this are errors in the algorithms used for prediction and optimization and the chance variation inherent in the verification experiment; the algorithm therefore needs to be further improved. Nevertheless, intelligent control of the surface cracking of the coating was essentially realized by this process. Figure 11 presents SEM micrographs of the inside of the cladding layer, which show no significant cracks.
Conclusions
• It was determined via orthogonal experimentation that the order of influence of the process parameters on the crack sensitivity of the laser cladding, high-hardness, nickel-based alloy coating was as follows: overlap rate > powder feed rate > scanning speed.
• A genetic algorithm-optimized neural network model relating the powder feed rate, overlap rate, and scanning speed to the crack density of the laser cladding, nickel-based cladding layer was established. The simulation results demonstrated that the neural network optimized by the genetic algorithm was more accurate than the neural network model without genetic algorithm optimization.
• The crack density of the high-hardness, nickel-based laser cladding layer could be effectively predicted from the process parameters. The prediction error of the genetic algorithm-optimized neural network model was within 9.8%, which proves the reliability of the model. The genetic algorithm then optimized the prediction results of the neural network to obtain the process parameters for minimum crack density: a powder feed rate of 15.0726 g/min, an overlap rate of 49.7997%, and a scanning speed of 5.9275 mm/s, with a minimum crack density of 0.001272 mm/mm². The predicted and optimized crack density was smaller than the minimum crack density obtained by the orthogonal test, and the optimization effect was better. Applying this prediction-optimization method to crack control in laser cladding forming has high practical value.
Dominance of Dermacentor reticulatus over Ixodes ricinus (Ixodidae) on livestock, companion animals and wild ruminants in eastern and central Poland
The most common tick species parasitizing animals in Poland are Ixodes ricinus and Dermacentor reticulatus. These tick species differ in their distribution, habitats, seasonal activity and host specificity. Ixodes ricinus is the most prevalent and widely distributed, whereas the range of D. reticulatus is limited to eastern and central parts of the country with several new foci in the middle-west and the west. However, as in many central European countries, the range of D. reticulatus is expanding, and some authors have correlated this expansion with an increasing number of available hosts. The aim of the present study was to determine the tick fauna on domestic and livestock animals in two areas endemic for I. ricinus and D. reticulatus and to compare the risk of infestation with different tick species in open and forest areas. Over a 14 month period, 732 ticks were collected from five host species including domestic animals (dogs and cats), livestock (cows and horses) and wildlife (European bison) in two areas, central and NE Poland, endemic for D. reticulatus. Three tick species were recorded: D. reticulatus (623 individuals; 85.1 % of all collected ticks), I. ricinus (106 individuals; 14.5 %) and three females of Ixodes hexagonus (0.4 %) from a dog. Dermacentor reticulatus was the dominant tick species found on four host species and constituted 86, 81, 97 and 100 % of all ticks from dogs, horses, cows and bison, respectively, and was collected from animals throughout the year, including during the winter. The common tick, I. ricinus, was the dominant tick collected from cats (94 %). Fully-engorged, ready-for-reproduction females of D. reticulatus were collected from all host species. In May 2012, questing ticks were collected by dragging in forest or open habitats. The density of adult marsh ticks in open areas was around 2 ticks/100 m2 in the majority of locations, with a maximum of 9.5 ticks/100 m2. The density of adult I. ricinus was much lower in its typical habitat (forests: range 0.8–2.2 ticks/100 m2) between three and seven times lower than the density of D. reticulatus in its typical habitat. In regions endemic for marsh ticks, this tick species constitutes the main risk of tick infestation for livestock and dogs throughout the year. Livestock and companion animals are competent hosts for D. reticulatus, enabling the completion of the tick’s life cycle. Anti-tick treatment should be adjusted to marsh tick seasonal activity and drug sensitivity.
Introduction
In Poland 19 species of ticks are known to occur as the established tick fauna of the region. Nine species parasitize domestic and farm animals (Nowak-Chmura and Siuda 2012) and among these the most common species of hard ticks are I. ricinus and D. reticulatus. Both tick species are vectors of Borrelia burgdorferi s.l. and the tick-borne encephalitis virus (TBEV), pathogens that have major significance in human and veterinary medicine (Zygner et al. 2008;Bonnet et al. 2013;Mierzejewska et al. 2013;Reye et al. 2013). They can also transmit Rickettsia spp. and Anaplasma phagocytophilum (Zygner et al. 2008;Bonnet et al. 2013;Wójcik-Fatla et al. 2013). Dermacentor reticulatus is the main vector of Babesia canis, the etiological agent of canine babesiosis. This disease constitutes the most important infectious disease of dogs in regions of Poland endemic for D. reticulatus (Bajer et al. 2014b).
Ixodes ricinus and D. reticulatus differ in distribution, habitats, seasonal activity and host specificity (Nowak-Chmura and Siuda 2012). Ixodes ricinus is the most prevalent and widely distributed tick species in Poland, while the range of D. reticulatus is limited to eastern and central parts of the country with several new foci in the middle-west and the west (Nowak 2011; Mierzejewska et al. 2012, 2014). The typical habitats of I. ricinus include deciduous, mixed and coniferous woodland, heathland, moorland, rough pasture and urban parks. This tick is most active from May to early October and has a very wide range of hosts: lizards, many species of birds, small, medium-sized and large mammals and humans (Medlock et al. 2013; EFSA 2010). In contrast, D. reticulatus inhabits open areas such as fallow lands, river banks and lake shores covered with tall grasses and shrubs, edges of wetlands, shrubby pastures and forest paths (Bogdaszewska 2004; Bajer et al. 2014a; Zygner et al. 2009). It first appears early in spring and following summer diapause is again active in late autumn/early winter until the first snowfall. The main hosts are believed to be large mammals, mainly elks, red deer, cattle and dogs (Karbowiak 2009).
Although tick infestations on domestic and farm animals have been well studied, data about tick engorgement levels on different hosts under natural conditions or about competency of hosts for particular tick species are still scarce. Engorgement level plays a crucial role for the next off-host phase of the life cycle (molting of instars or oviposition in females). In the case of an uncompleted blood meal, there is no possibility for the completion of the tick's life cycle. Thus identifying animal species that support the completion of life cycles of particular tick species helps us to understand the reasons for expansion of this tick species and the environmental circulation of the pathogen linked to the species concerned. In the face of the rapid expansion of the range of D. reticulatus and the extent of canine babesiosis in many European countries in recent years, several authors have suggested an association between the expansion and an increasing number of suitable hosts, especially red deer, elk or wild boar (Sréter et al. 2005; Dautel et al. 2006; Nijhof et al. 2007; Karbowiak 2009, 2014; Cochez et al. 2012; Beugnet and Chalvet-Monfray 2013). However, the competence of livestock and domestic animals as hosts for I. ricinus and D. reticulatus has not been verified. In comparison to the overall populations of large mammals in Poland (approx. 710,000 roe deer, 150,000 red deer, 180,000 wild boars, 1000 European bisons), the populations of farm and domestic animals are much larger. There are about 5 million cattle, 300,000 sheep and 300,000 horses in Poland. High populations of domestic dogs (about 7-8 million) and cats (5-6 million) exist in Poland because almost 60 % of families own a dog or a cat (data from GUS-Main Statistical Office and from pet food industry estimates).
In order to verify the competence of natural hosts for these two tick species, we developed a simple and practical method based on the determination and comparison of the body mass of questing and foraging ticks. Foraging ticks were then classified into biologically relevant engorgement classes, reflecting their further opportunity for life cycle completion. We predicted that although ticks collected from naturally infested animals can be at different stages of a blood meal (interrupted by tick collection by the owner or veterinarian), they should represent a complete range of engorgement stages, including a significant number of almost fully engorged, ready-for-reproduction females. We also predicted that in areas endemic for both tick species, D. reticulatus should constitute a significant proportion of foraging ticks, due to 'sharing' of open habitats (pastures, meadows, urban and peri-urban areas) with potential hosts. Thus the aims of the present study were: (1) to determine the composition of the tick community on livestock and companion animals in two endemic areas, (2) to compare the abundance of questing ticks in open and forest habitats, (3) to monitor seasonal changes in tick communities on hosts and finally (4) to assess the competency of different host species for the completion of the life cycle of D. reticulatus.

Fig. 1 The location of study sites where ticks were collected (filled diamond shapes distinguished by numbered arrows), superimposed on a map of Poland showing the endemic regions for Dermacentor reticulatus (shaded areas), as reported by Mierzejewska et al. (2012, 2014) and Nowak (2011)
Abundance of questing ticks in environment
To compare the risk of infestation by I. ricinus and adult D. reticulatus, the abundance of ticks in the environment was assessed for each location (Mazury Lake District and Mazowsze in central Poland) either in forest or open habitats (Table 2). Questing ticks were generally collected in the same areas as those in which foraging ticks were collected from hosts (Fig. 1). Questing ticks were collected by dragging a woollen blanket (1.2 × 0.8 m) in the forests or in fallow lands in the Mazowsze and Warmińsko-Mazurskie regions in May 2012 (Table 2; Fig. 1). Additionally, ticks were picked from clothing after dragging or directly from vegetation in front of the dragging person, when noted on a dragged transect. Ticks were collected twice a day at the peak of activity between 9-11 a.m. and 4-6 p.m. and were preserved in 96 % ethanol for further identification in the laboratory. After determination of species and sex by stereo microscopy, abundance was calculated and expressed as the number of ticks per 100 m² (Table 3).
Determination of the mean weight of questing adult Dermacentor reticulatus and Ixodes ricinus

To determine 'engorgement success' of ticks on different hosts, we needed to separate engorged and non-engorged ticks collected from a host, and therefore we conducted a preliminary study to determine the body mass of questing adult ticks of both species. Questing ticks were collected both in the Mazowsze and Warmińsko-Mazurskie regions to control for any regional variation. In the Mazowsze region 50 ticks of I. ricinus (23 females and 27 males) and 50 ticks of D. reticulatus (20 females and 30 males) were collected. Ticks were collected in the capital city of Warsaw (fallow land in Siekierki), in two city forests (Kabacki and Bielański forests) (Welc-Faleciak et al. 2014) and in open habitats (meadows, fallow lands) in Stoski, Kury and Dąbrowica villages (30-50 km outside Warsaw). From the Warmińsko-Mazurskie region, traditionally believed to be a region of high risk of tick infestation and tick-borne diseases, 19 I. ricinus (9 females and 10 males) and 46 D. reticulatus ticks (21 females and 25 males) were used for measurements. Tick abundances were determined on fallow lands in Urwitałt, Stawek and Dziubiele villages situated in the vicinity of the town of Mikołajki ('summer capital' of Poland) in the Mazury Lake District (Fig. 1). Before weighing, ticks were dried separately and then the body weight of each specimen was recorded with an analytical balance (Radwag, Poland) with an accuracy of 1 µg.
Distribution of tick species on different hosts
In a period of fourteen months (May 2012-June 2013) 586 ticks were collected from domestic (cat, dog) and farm (cow, horse) animals living in regions endemic for the marsh tick D. reticulatus, in the Mazowsze and Warmińsko-Mazurskie regions (Table 1a, Fig. 1). Ticks were collected from dogs and cats presenting at veterinary practices for routine health inspection visits in the Mazowsze region (Warsaw and Tłuszcz, a veterinary practice closest to the villages of Kury and Stoski). Additionally, ticks (n = 29) were removed from 11 sled dogs participating for 14 days in September 2012 in a training camp in Urwitałt, in the Mazury Lake District (Table 1a, b). These dogs were treated with acaricide spot-on containing fipronil (Fiprex, Frontline). Ticks were removed from horses and cows maintained on pastures and fallow lands located near forests. Ticks from horses were collected in Mazury Lake District from two different studs. A total of 178 ticks was removed in May from one semi-wild herd (six animals) kept on a large pasture for a whole year near the University of Warsaw's field station in Urwitałt (Fig. 1). This group was sporadically protected against ectoparasites. The second herd consisted of 32 animals used for horse riding (Dziubiele village). These horses spent nights in stables and days grazing on pasture. Tick treatment was applied on a monthly basis and grooming was carried out as a daily routine. Between June and September 2012, a total of 37 ticks were removed from these horses. In autumn 2012, 97 ticks were collected from 3 dairy cows from Dąbrowica village in the Mazowsze region. These cows were not treated against ticks and were kept on pasture during the day. The mean number of ticks per individual animal was calculated only when the number of examined animals was known.
Additionally, a total of 146 ticks collected from several European bisons shot during selective shooting in the winter of 2002/03 were included in the study (Table 1). All ticks were preserved in 96 % methanol and transferred to the laboratory at the Department of Parasitology, University of Warsaw. The species, stage and sex of adults were recorded for each tick. Ticks were then dried and weighed individually. The level of engorgement was determined on the basis of body weight.
Level of engorgement (weight classes) of foraging ticks
Ticks found on the hosts, especially from companion animals or livestock, are usually at different stages of foraging behavior and are likely to be removed at different levels of engorgement, preventing them from the completion of their blood meal. Thus the mean weight of the ticks (females) collected from any host may not be informative enough to determine the competence of the host species for certain tick species. To minimize the negative effect of these two factors, we (1) eliminated non-engorged females from the calculation of 'the mean weight of foraging tick' (Table 3) and (2) established several classes of engorgement, especially to calculate the percentage of fully-engorged ready-for-reproduction females of both species.
These classes of engorgement were established separately for the two tick species and two sexes, based on the level of engorgement:

Class 0 (non-engorged ticks): for males and females, all individuals weighing below or equal to the upper 95 % confidence limit of the mean weight of representative questing ticks of the respective sex and species (Table 2). All 'foraging' I. ricinus males taken from the hosts also fell into this category.

Class 1 (slightly engorged ticks): for D. reticulatus, females weighing 0.005-0.055 g; for I. ricinus, females weighing 0.002-0.013 g. This class represents females that have just started their blood meal. The role of D. reticulatus males as vectors of pathogens is not clear, although they have been reported loosely attached to the host skin (Bartosik and Buczek 2012; own unpublished observations; Dautel et al. 2006). There is no evidence that they actually take a blood meal, but because we found males on the hosts weighing above the upper 95 % confidence limit of the mean weight determined for questing males, we included D. reticulatus males weighing above this limit (more than 0.00480 g) in class 1.

Class 2 (not fully engorged females): for D. reticulatus, females weighing 0.056-0.099 g; for I. ricinus, females weighing 0.014-0.050 g. This class was established on the basis of Brown and Askenase (1981) [from Bartosik and Buczek (2012)], as a weight of 0.1 g is considered by these authors as the borderline weight for fully engorged, ready-for-reproduction female ticks. Females in class 2 are probably still not engorged enough to produce and lay eggs.

Class 3 (fully engorged females): for D. reticulatus, females weighing 0.1 g and above; for I. ricinus, females weighing 0.06 g and above; engorged enough to produce and lay eggs.
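The class assignment can be summarized as a simple rule on species, sex and body weight. The sketch below is a hypothetical Python helper (not part of the authors' workflow) that applies the thresholds listed above; how weights falling between the stated class ranges are handled is an assumption.

```python
def engorgement_class(species, sex, weight_g):
    """Assign a foraging tick to engorgement class 0-3 from its dry body weight in grams.

    Thresholds follow the classes defined above; the class-0 cut-offs approximate the
    upper 95 % confidence limits of the questing-tick weights.
    """
    if species == "D. reticulatus":
        if sex == "M":
            return 0 if weight_g <= 0.00480 else 1          # heavier males are placed in class 1
        if weight_g < 0.005:
            return 0
        if weight_g <= 0.055:
            return 1
        if weight_g < 0.1:
            return 2                                        # 0.056-0.099 g: not fully engorged
        return 3                                            # >= 0.1 g: fully engorged
    else:  # "I. ricinus"
        if sex == "M":
            return 0                                        # all foraging I. ricinus males fall in class 0
        if weight_g < 0.002:
            return 0
        if weight_g <= 0.013:
            return 1
        if weight_g < 0.06:
            return 2                                        # gap above 0.050 g assigned to class 2 here
        return 3                                            # >= 0.06 g: fully engorged

print(engorgement_class("D. reticulatus", "F", 0.398))      # heavily engorged female -> 3
```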
Statistical analysis
The distribution of engorged females among 4 engorgement classes was calculated as a percentage of females in each class and was analyzed by maximum likelihood techniques based on log-linear analysis of contingency tables, implemented by the software package, SPSS v. 21. Assignment to a particular class (0-3), tick species (1, 2) and host species (1-3) were fitted into a full factorial model. Beginning with the most complex model, involving all possible main effects and interactions, those combinations not contributing significantly to explaining variation in the data were eliminated stepwise (backward selection procedure), beginning with the highest-level interaction (Bajer et al. 2002). A minimum sufficient model was then obtained, for which the likelihood ratio χ² was not significant, indicating that the model was sufficient in explaining the data. Quantitative data reflecting weight of questing and foraging ticks were expressed as arithmetic means. The weight of ticks was analyzed by multifactorial ANOVA (software package, SPSS v. 21) using models with normal errors. Tick species, sex and region of origin (Mazowsze or Warminsko-Mazurskie region) were used as the factors for analysis of the mean weight of questing ticks. Tick species, sex, host species and season were used as the factors for analysis of mean weight of foraging ticks.
Then, to estimate the borderline value for 'non-engorged' ticks, the sum of the mean weight for each sex and tick species and its associated upper 95 % confidence limit was taken as the maximum borderline value for engorgement class 0 (non-engorged ticks). Three I. hexagonus females and the nymphs of I. ricinus were not taken into consideration in the statistical analyses.
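For readers without SPSS, the multifactorial ANOVA on questing-tick weight can be reproduced in outline as below (the log-linear analysis of the contingency tables is not shown). The data frame here is a randomly generated stand-in for the real weight records; only the model formula reflects the design described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
# Dummy stand-in for the questing-tick weight records (one row per weighed tick).
df = pd.DataFrame({
    "species": np.repeat(["D. reticulatus", "I. ricinus"], 40),
    "sex":     np.tile(np.repeat(["F", "M"], 20), 2),
    "region":  np.tile(["Mazowsze", "Mazury"], 40),
    "weight":  rng.normal(0.004, 0.0005, 80),
})

# Multifactorial ANOVA with normal errors: species x sex x region effects on body weight.
model = smf.ols("weight ~ C(species) * C(sex) * C(region)", data=df).fit()
print(anova_lm(model, typ=2))
```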
Distribution of ticks among host species
During the study period 732 ticks were collected from five host species including companion animals (dogs and cats), livestock (cows and horses) and wildlife (European bison). Three tick species were recorded: D. reticulatus (623 individuals; 85.1 % of all collected ticks), I. ricinus (106 individuals; 14.5 %) and 3 females of I. hexagonus (0.4 %) from a dog (Table 1a). Dermacentor reticulatus was the dominant tick species found on 4 host species and constituted 86, 81, 97 and 100 % of all ticks from dogs, horses, cows and European bison, respectively. The common tick, I. ricinus, was the dominant tick collected from cats (94 % of all ticks) and only two D. reticulatus females were found on a cat (6 %).
There were significant differences in tick community composition between seasons (Table 1a; Fig. 2a, b). The highest number of ticks was collected in the spring peak of tick activity between March and May. However, D. reticulatus was found on animals in every month, including during the winter season. The marsh tick was the dominant species on dogs and horses in spring and autumn (Fig. 2a, b) and was the only species recorded on hosts (dogs and bisons) during winter months. In the winter of 2002/2003, 146 ticks were collected from European bisons and all were identified as marsh ticks. In the winter of 2012/13 in Poland, winter weather lasted from late October until mid-April in both study sites, with snow cover and temperatures below 0°C during most of this period, but there were a few spells of warmer weather (about 2 weeks each) at the end of December 2012, January and February 2013 resulting in the appearance of D. reticulatus on dogs in these periods (Table 1a). D. reticulatus males constituted the vast majority of ticks collected in winter (96 % for dogs, 92 % for European bison). In all other seasons D. reticulatus females were more abundant among all the ticks collected from dogs (54 %), horses (54 %) and cows (56 %) in both regions (Fig. 2a, b). The only marsh ticks removed from cats were single females in April and May. In summer months, I. ricinus was the main tick obtained from all examined host animals (54 % for dogs, 76 % for horses) and it was the major species collected from cats in all seasons (94 %). One I. ricinus nymph was removed from a dog in May, from a horse in June and from a cat in July. Additionally, 3 I. hexagonus females were removed from a dog in June. Cows were monitored only in the autumn season displaying high infestation with D. reticulatus (97 % of all collected ticks) at that time (Table 1a, b).
The mean intensity of tick infestations is presented in Table 1b. Mean intensity was calculated only if the number of sampled hosts was known. Particularly high intensity (D. reticulatus) was recorded on horses in May 2012, on cows in autumn 2012 and on bison in December 2002. Relatively high intensity of I. ricinus infestation was noted on a cat in May but it was attributable to only one infested individual. Mean intensity of tick infestation was about three ticks per dog in a group of sled dogs at the training camp in Mazury Lake District (Table 1b). However, the dogs were treated with acaricide before the camp and the majority of the collected ticks were dead.
Comparison of questing tick abundance in two habitats
To assess the risk of infestation with the two tick species, questing ticks were collected by dragging in two habitats: open (mostly abandoned fields and meadows or fallow land in city areas) and in forests, either in the Mazowsze or in Warminsko-Mazurskie regions (Table 2). The density of adult marsh ticks in open areas was relatively high, above 2 ticks/100 m² in the majority of locations, with a maximum of 9.5 ticks/100 m², but no marsh ticks were collected in the forests. The density of adult I. ricinus ticks was much lower in its typical habitat (forests), in the range of 0.8-2.2 ticks/100 m², between three and seven times lower than the density of D. reticulatus in its typical habitat. Both tick species were found in open areas and the density of I. ricinus was between three and ten times lower than that of D. reticulatus in this habitat (Table 2). When analyzing the proportional occurrence of marsh and common ticks on hosts (dogs and horses) in May (peak activity month for both tick species), a similar relationship was evident (Table 1a). The proportion of foraging D. reticulatus to I. ricinus was 6:1 on dogs and 7:1 on horses. However, on cats in May the proportion was reversed: 1:23 in favor of I. ricinus.
Mean weight of questing and foraging ticks
As we expected, tick species and sex affected the mean weight of questing adult ticks (ANOVA: tick species × sex: F(1, 164) = 9.11, P = 0.003) (Table 3). The mean weight for D. reticulatus females was 0.00460 ± 0.00017 g and for males 0.00502 ± 0.00015 g. The mean weight of both sexes of I. ricinus was significantly lower than D. reticulatus and in this species females were significantly heavier than males (Table 3). These data supported the significant well-known differences in body size between the studied species and helped to calibrate the engorgement classes.
As all ticks found on a host must be treated as foraging despite our efforts to identify and separate a 0 class of engorgement, we calculated and analyzed the mean weight of D. reticulatus and I. ricinus adult ticks collected from certain host species. Arithmetic means are presented in Table 3. Questing and foraging males of both tick species had similar weights (index of engorgement: 0.7-1.03; Table 3), without any significant differences between the host of origin (Fig. 3a, b). Females of both tick species demonstrated a significant increase in body mass when foraging on hosts and overall engorgement success was higher in the case of D. reticulatus females in comparison to those of I. ricinus (index of engorgement: 37.95 and 12.07 for D. reticulatus and I. ricinus, respectively; Table 3).
Ticks collected from European bison in the winter of 2002/2003 were excluded from the statistical analysis to avoid a 'time effect' on body mass. Multifactorial ANOVA revealed a significant 3-way interaction of host species, tick species and sex on mean weight of foraging ticks (F(1, 579) = 3.92, P = 0.048). This interaction is illustrated in Fig. 3a, b for the major host species. For D. reticulatus females the highest mean weight was recorded for ticks foraging on dogs and cows, and was relatively lower for ticks feeding on horses. For I. ricinus females, the highest mean weight was recorded for ticks foraging on horses and cows.
Comparison of female engorgement on different hosts
Finally, to assess host competency, we analyzed the distribution of different engorgement classes of tick females on hosts. The minimal sufficient model comprised one significant interaction of host species, tick species and engorgement class (χ² = 31.72, df = 12, P = 0.002). This interaction is presented in Fig. 4a, b. Two D. reticulatus females collected from cats were engorged (class 3) and the vast majority of ticks collected from bison were males. For these reasons we have not included the data from cats and bisons in the analysis and figure.
In accordance with analysis of mean weight, for D. reticulatus females, the highest rate of fully engorged females (class 3) was noted for dogs and cows, and it was lower for horses. In contrast, for I. ricinus females, the highest rate of fully engorged females (class 3) was noted for horses (Fig. 4b).
Among 334 D. reticulatus males collected from five host species, 149 (44.6 %) fell in class 1 of engorgement. Males in this class constituted 19, 57, 58 and 78 % of males collected from European bisons, horses, dogs and cows, respectively.
Discussion
The main aim of our study was to determine the composition of the tick community on domestic companion and livestock animals in two endemic areas for I. ricinus and D. reticulatus and to identify competent hosts for the marsh tick. The dominant species found on cows, horses and dogs was D. reticulatus, except for the summer, when I. ricinus was mainly collected from all host species. The highest number of ticks was obtained in spring, and subsequently in the autumn. In spring, the abundance of questing D. reticulatus ticks was much higher than I. ricinus in their typical habitats, constituting a higher risk of infestation. The tick fauna on livestock and dogs depends mainly on the geographical location, as different tick species inhabit different continents, and the geographical range for different tick species determines their occurrence on hosts. In Poland, for example, R. sanguineus is an accidental, imported species, but this tick species is dominant on dogs in southern Europe around the Mediterranean basin. Thus, comparison of the composition of the tick community even among cattle or dogs is difficult and we only discuss here data for the three detected tick species in European dogs, excluding Mediterranean countries. In our study, D. reticulatus comprised 86 % of all ticks collected from dogs and this is definitely the highest proportion of this species found to date on dogs in Europe. In an earlier study based in Warsaw, this species dominated over I. ricinus among ticks from dogs (65 vs. 35 %; Zygner and Wedrychowicz 2006). In the Ukraine, near Kiev, among 52 ticks, D. reticulatus constituted 63 % (Hamel et al. 2013). Both eastern and central Poland and Ukraine are inhabited by the eastern population of this tick (Karbowiak 2009, 2014) and apparently the risk of infestation with this species is high in this area. In other endemic regions for D. reticulatus in Central Europe, this tick species shows comparable frequency to I. ricinus on dogs (i.e. 45 % of ticks in Germany, Beck et al. 2014; 49 % of ticks in Hungary, Földvári and Farkas 2005). In a recent study in eastern Austria, D. reticulatus constituted 15 % of all ticks, but was dominant in early spring and late autumn. In all countries in which D. reticulatus is a significant component of the tick fauna on dogs, an increased risk of canine babesiosis is expected, as this tick species is the main vector of B. canis (Rar et al. 2005; Zygner et al. 2008; Schaarschmidt et al. 2013; Mierzejewska et al. 2012). The marsh tick is still rare in the UK and in Belgium (0.6-0.8 %) but has been established recently as a permanent feature of the local tick fauna on dogs in these countries (Smith et al. 2011; Claerebout et al. 2013). Interestingly, in southern Poland (Rymanów, Podkarpackie region), D. reticulatus was not found among 236 ticks from dogs, where only I. ricinus and I. hexagonus were identified (Kilar 2011), but this only confirms the existence of the gap between the eastern and western populations of this tick species (Fig. 1, Karbowiak 2009). In our study, the common tick I. ricinus constituted only 13 % of ticks collected from dogs and this is the lowest percentage of this species found on dogs in Europe. The percentage of I. ricinus was 36, 43 and 46 %, in recent studies in Ukraine, Hungary and Germany, respectively (Hamel et al. 2013; Földvári and Farkas 2005; Beck et al. 2014). The highest percentage of this tick species was found among dogs from the UK (52-72 %; Smith et al. 2011; Ogden et al.
2000), Belgium (76 %; Claerebout et al. 2013), Austria (76 %;Duscher et al. 2013) and also in southern Poland (89 %, Kilar 2011). Because of the much higher abundance of D. reticulatus compared with I. ricinus in open habitats (which are more often used by dogs and livestock) where both species are sympatric, and generally similar densities noted for I. ricinus right across Europe (Welc-Faleciak et al. 2014), with further expansion of the marsh tick, the overall risk of tick infestation and consequently, of exposure to canine TBDs is likely to increase up to 6-7 times. Already, some evidence for a rapid recent increase in the risk of contracting canine babesiosis, as a result of the expansion in the range of D. reticulatus, has been noted in central Poland (Bajer et al. 2014b). Interestingly, social perception of ticks and TBDs in dogs (and the proper use of repellents/acaricides) is much higher in central and eastern Poland than in southern and western regions, most likely due to the high abundance of D. reticulatus (Bajer et al. 2014a, b).
Ixodes hexagonus constituted a significant component of the tick fauna on dogs in the UK (22-39 %; Smith et al. 2011;Ogden et al. 2000) and in southern Poland (10.6 %; Kilar 2011) but was rare among dogs from our study and studies from Hungary and Austria (0.1-0.4; Duscher et al. 2013;Földvári and Farkas 2005) and was not recorded in the Ukraine (Hamel et al. 2013).
Analyses of the proportional distribution of common and marsh ticks collected from livestock and dogs, and from their habitats (1:6, 1:7), revealed no marked host selection for D. reticulatus and supported the host competency of cattle, horses and dogs for this tick species. Higher densities of questing D. reticulatus than I. ricinus (adults) may be related to the much shorter life cycle of D. reticulatus compared with I. ricinus (1-year vs. 2-year long life cycle) (Zahler and Gothe 1995; Cochez et al. 2012). Interestingly, although I. ricinus clearly dominated on cats and this host species is believed to be too small to feed D. reticulatus, we found two fully engorged females on cats, so cats can be considered as hosts for D. reticulatus, though less preferred. Host competence of livestock and dogs for D. reticulatus was supported by the finding of a significant rate of fully-engorged ready-for-reproduction females on these hosts and a significant increase in mean body mass of ticks feeding on them. However, comparison of female engorgement classes revealed the highest rate of fully engorged females only on cattle (45 %) and dogs (42 %). The much lower proportion of fully engorged females collected from horses (18 %) may be attributable to the higher number of well-cared-for horses involved in the study. Comparison of mean tick numbers showed a lower intensity of infestation on horses which were groomed and received regular acaricide treatment (Dziubiele, June, October, 84 % of all horses) compared with those which were not treated/groomed (Urwitałt, May).
Dogs were the most numerous group of animals included in this study and were characterized by the most uniform distribution of D. reticulatus females among engorgement classes, but nevertheless the proportion of females in class 3 was the highest. A case of an unprotected bitch from Tłuszcz presenting in April 2014 at the veterinary clinic with a massive tick infestation showed the full potential of those animals as competent hosts for marsh ticks. The dog lived in an endemic region and did not get anti-tick treatment in time due to the atypically early and warm spring. Veterinarians removed altogether 76 ticks from this dog, including 66 D. reticulatus (39 females and 27 males) and 10 I. ricinus (7 females and 3 males). At this time 35 females (90 %) were in the highest class of engorgement (mean body weight 0.398 g) while only 4 (10 %) in the second class of engorgement (mean body weight 0.0801 g). No females from any lower classes were detached from this dog.
In our study D. reticulatus ticks were found on dogs through the whole year, including the winter season, and were also collected during the winter from European bison. The occurrence of D. reticulatus on hosts and on vegetation in winter has been previously recorded by other authors, contrary to I. ricinus, which is generally absent in winter months. Questing marsh ticks have been collected in winter in Germany (Dautel et al. 2008) and Poland (Bartosik et al. 2011; Buczek et al. 2014). The presence of marsh ticks on the European bison from Białowieża primeval forest in the winter of 1992-2000 was reported by Izdebska et al. (2001) and Karbowiak et al. (2003). Other host species that have been recorded as infested in winter include moose, red deer and wild boar (Izdebska et al. 2001). Both sexes of ticks were collected, but male ticks represented the majority, as in our study. Several tick females were fully engorged and laid eggs in the laboratory, establishing that D. reticulatus is active and most likely able to transmit pathogens also in winter time (Karbowiak 2009). Based on the findings of other authors and our results we have confirmed that year-round activity is the normal behavior of D. reticulatus and consequently it is likely to play an important role in the circulation and maintenance of tick-borne pathogens throughout the year. Therefore, whole-year repellent/acaricide protection for dogs should be provided in endemic regions, especially in late autumn, during mild winters and in early spring, as D. reticulatus activity in these periods may be responsible for B. canis infections.
The low numbers of ticks collected from cats is probably linked to their behavior (effective self-grooming; Marchiondo et al. 2013) and the specific type of animals brought to the veterinary clinic. It is more likely that cats with a lower risk of ectoparasite infestation (kept indoors or those that are let outside only for a short period at a time) were brought to the veterinary clinic. The majority of ticks collected from cats were I. ricinus, only two female D. reticulatus being recorded. Higher infestation with I. ricinus may be explained either by host selection (adult D. reticulatus are believed to feed on large mammals) or the type of acaricide employed by owners. Most of the spot-on drugs used on cats in Poland are based on fipronil. This agent seems to be less effective against the common tick (Beck et al. 2014), but is the most effective acaricide for the marsh tick and should be treated as the reference acaricide in D. reticulatus endemic areas (Bajer et al. 2014b; Beck et al. 2014). Males of D. reticulatus constituted over 90 % of all ticks collected during the winter. A similar sex bias in this season has been observed in other studies (Karbowiak 2009; Izdebska et al. 2001; Bartosik et al. 2011; Buczek et al. 2014), but the role of males, which may remain on hosts for several weeks, remains poorly understood. Bartosik and Buczek (2012) suggested that the presence of male D. reticulatus was necessary for females to cease feeding and to enable their fertilization by engorged males. In our study the bodies of detached D. reticulatus males were slightly enlarged, but the mass index did not show significant differences between foraging and questing males (1.03) and the mean weight of D. reticulatus males collected from different hosts was similar, suggesting that it is unlikely that males take a proper blood meal on any of the hosts in the study.
Comparison of the mean body weights of foraging female common and marsh ticks showed that the mass index of D. reticulatus was more than three times higher than that of I. ricinus (37 vs. 12 times). A higher volume of ingested blood may increase the success of pathogen transmission between host and vector, thus D. reticulatus seems to be a greater threat than the common tick for livestock and domestic animals lacking proper acaricide/repellent treatment.
Conclusions
In eastern and central regions of Poland endemic for marsh ticks, this tick species constitutes the main risk of tick infestation in livestock and dogs throughout the year. Domestic and farm animals are competent hosts for D. reticulatus, enabling the completion of the tick life cycle and promoting its expansion.
Utilization Analysis of Bioethanol (Low Grade) and Oxygenated Additive to COV and Gas Emissions on SI Engine
The growth of motor vehicles over the past five years has reached 8.63 % per year. The increasing number of vehicles has an impact on increasing fuel consumption. One alternative energy source currently being developed as another fuel for motor vehicles is bioethanol. The addition of bioethanol changes the fuel properties: the fuel becomes more difficult to self-ignite, so the pressure generated in the combustion chamber is more consistent. The coefficient of variation (COV) represents the ratio of the standard deviation to the mean of a set of data; in this study, in-cylinder pressure (IMEP) data are used. Building on previous research that discussed the analysis of exhaust gas emissions and fuel consumption of an SI engine fueled with low-grade bioethanol and an oxygenated additive, the authors further analyze the characteristics of gasoline-ethanol blends with an oxygenated additive in terms of COV IMEP and exhaust gas emissions of the various fuel mixtures at variable engine speeds. The results of the study show that the gasoline-ethanol blend with the oxygenated additive decreases the variation in combustion pressure. It also reduces exhaust emissions: CO and HC are found to be reduced, while CO2 and O2 increase as the blend concentration increases.
Introduction
Research and development of spark-ignition engines are currently focused more on improving engine performance and reducing exhaust emissions. It is therefore important to find a substitute, or at least a supplementary fuel, that can reduce the problems caused by the continuous use of fossil fuels 1,18) . Bioethanol (C2H5OH) has the potential to become a renewable fuel in future developments. Bioethanol is a biomass-derived product of the fermentation of plants containing starch. Bioethanol has a simple molecular structure with easily defined chemical and physical properties. Bioethanol is often used as fuel either directly or in combination with another fuel, such as gasoline. Ethanol that can be used as an engine fuel is usually anhydrous ethanol with a concentration > 99.5 % (fuel grade). If it is used entirely as fuel, engine modification is needed, but when mixed with gasoline, engine modification is not required 2) . Anhydrous ethanol has very little water content and can even be said to be pure, so that when mixed directly with gasoline it can directly enter the combustion chamber. Hydrous ethanol, in contrast, has a lower concentration and still contains water (4.9 % - 5 %), so it cannot be directly mixed with gasoline. To be used in a mixture with gasoline, the maximum water content is 7.4 %. Therefore, a simple technology is needed that can convert low-grade ethanol produced by the community into high-grade ethanol whose output can be directly applied as a fuel mixture in the engine. Hydrous ethanol has slightly different characteristics compared to anhydrous ethanol 3) : its octane number is lower, its heating value is lower, its latent heat of vaporization is higher, and its oxygen content is also higher. However, the exact values of each characteristic depend on the mixture content and the water content, so a separate test of the hydrous ethanol is needed.
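To make the blending arithmetic above concrete, the following minimal Python sketch (not from the paper) tracks how the water carried by hydrous ethanol ends up in the final blend. The function name, the illustrative purities, and the reading of the 7.4 % figure as a limit on the water content of the ethanol component are assumptions for illustration only.

```python
# Minimal sketch (assumed, not from the paper): volume bookkeeping for a
# gasoline-ethanol blend when the ethanol component is hydrous. The 7.4 %
# figure is treated here as a limit on the water fraction of the ethanol
# component; the paper does not state precisely which stream it applies to.

def blend_composition(ethanol_fraction: float, ethanol_purity: float):
    """Return (gasoline, ethanol, water) volume fractions of the final blend.

    ethanol_fraction: volume fraction of the ethanol component, e.g. 0.10 for E10.
    ethanol_purity:   ethanol content of that component, e.g. 0.95 for hydrous
                      ethanol containing about 5 % water.
    """
    water = ethanol_fraction * (1.0 - ethanol_purity)
    ethanol = ethanol_fraction * ethanol_purity
    gasoline = 1.0 - ethanol_fraction
    return gasoline, ethanol, water

MAX_WATER_IN_ETHANOL = 0.074  # assumed interpretation of the 7.4 % limit

for label, purity in [("anhydrous (99.5 %)", 0.995), ("hydrous (95 %)", 0.95)]:
    gas, eth, wat = blend_composition(0.10, purity)   # an E10 blend
    within_limit = (1.0 - purity) <= MAX_WATER_IN_ETHANOL
    print(f"E10 with {label}: gasoline={gas:.3f}, ethanol={eth:.3f}, "
          f"water={wat:.4f}, water limit satisfied: {within_limit}")
```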
In addition to engines dedicated to ethanol fuel, research into the use of ethanol has also been carried out on a commercial 4-cylinder SI (gasoline) engine, including testing the optimal gasoline-ethanol mixture level to maximize brake thermal efficiency. In this study, engine performance parameters such as brake torque and brake specific fuel consumption were also tested for combinations of gasoline (octane 87.5) and 99.5 % ethanol (E10, E20, E30, E40, E50, E60, E70, E85, and E100). The test was carried out at different engine speeds and throttle openings, but with the ratio kept constant. AFR and ignition timing were also adjusted to increase engine torque. The results show that a proper gasoline-ethanol mixing ratio can increase engine torque, especially at low engine speeds. E40 and E50 produce the maximum brake thermal efficiency at 58 - 73 % WOT and around 2,000 - 2,500 rpm. E20 - E40 produce the best MBT at 70-100 % WOT and 1,000-4,000 rpm 4) . Comparative experiments have also been conducted on port-injection gasoline engines with hydrous ethanol-gasoline (E10W), ethanol-gasoline (E10) and pure gasoline (E0) fuels. According to the experimental results, compared to E0, E10W shows higher in-cylinder pressure and NOx emissions at high loads. However, at low loads, HC, CO and CO2 are significantly reduced. E10W also produces less HC and CO, while CO2 emissions are not significantly affected. Compared to E10, E10W shows a higher cylinder pressure and heat release rate. Also, a reduction in NOx emissions was observed for E10W from 5 nm to 100 nm, while HC, CO, and CO2 were slightly higher under low and medium load conditions. From the results, it can be concluded that E10W can be considered a possible alternative fuel that may be applied to gasoline engines 5) .
Paper 6) reports an independent test using low-grade bioethanol distilled in a compact distillator that utilizes waste heat to produce high-quality bioethanol suitable for use in a fuel mixture. From the test, it was found that the wheel torque and wheel power produced by a mixture of gasoline and bioethanol are higher than those of gasoline alone. The mixture of bioethanol and gasoline can enhance power by up to 15 %. The torque values produced by the E5, E10, and E15 mixtures are 6.92 Nm, 6.64 Nm, and 6.92 Nm, respectively, higher than that of pure gasoline at 6.1 Nm. The torque values produced by the E5, E10, and E15 mixtures with oxygenated additives are 7.5 Nm, 7.6 Nm, and 7.53 Nm, respectively 7) . The addition of oxygenated cyclohexanol can, in general, improve the performance (torque and power) produced by the engine. Torque and brake power increase at engine speeds above 5,000 rpm. The highest torque value is obtained with the E10++ variation, at 9.09 Nm at an engine speed of 6,000 rpm, 2.6 % higher than pure gasoline (E0). The most optimal power (brake power) is generated by the E15 variation, at 6.84 kW at an engine speed of 8,000 rpm, an increase of 1.94 % over E0 8) .
Paper 9) conducted an experiment to reduce cyclic variations in test engines by controlling the ignition timing over full consecutive cycles. A stochastic model relating ignition timing and maximum cylinder pressure is built using system identification techniques. The maximum cylinder pressure of the subsequent cycle is estimated with this model. The control algorithm is generated in LabView and installed on a Field Programmable Gate Array (FPGA) chassis. The test results show that the maximum cylinder pressure of the next cycle can be predicted quite well, and the ignition timing can be adjusted to maintain the desired maximum cylinder pressure and reduce cyclic variations. In fixed ignition timing trials, COV imep and COV Pmax were 0.677 % and 3.764 %, respectively, decreasing to 0.533 % and 3.208 % after the GMV controller was applied.
S. H. Yoon et al. investigated the exhaust emission characteristics and engine performance of a spark-ignition engine fueled with bioethanol, an ethanol-gasoline blend, and gasoline fuel 10) . The test fuels were an ethanol-gasoline blend (E85), consisting of 85 % vol bioethanol and 15 % vol gasoline, pure bioethanol (E100), and gasoline fuel without any additive (G100). The results of this study showed that the ethanol-blended fuel and pure ethanol led to a drastic decrease in exhaust emissions under all operating conditions. Exhaust emissions such as hydrocarbons, carbon monoxide, and nitrogen oxides were reduced when using the bioethanol-blended and neat ethanol fuels, owing to the highly oxygenated component of the ethanol fuel.
Palmer, F.H. 11) reported that during low-speed acceleration, an oxygenated fuel blend gave better anti-knock performance than a hydrocarbon fuel of similar octane range. Srinivasan et al. 12) experimented on the effect of gasoline-ethanol mixtures with oxygenated additives on a multi-cylinder SI engine. The experiment shows that the ethanol-gasoline blend with an oxygenated additive gives a significant reduction in exhaust emissions: CO, CO2, and NOx were reduced, whereas HC and O2 increased.
In previous research, the distillation of low-grade bioethanol with a compact distillator was investigated 6) , and the exhaust gas emissions and fuel consumption of an SI engine fueled with low-grade bioethanol and an oxygenated additive were analyzed 13) . In this research, the authors go further and analyze the characteristics of the ethanol blend with an oxygenated additive in terms of COV IMEP and exhaust gas emissions of the various fuel mixtures, as well as the correlation of COV IMEP with exhaust gas emissions, at variable engine speeds. The experimental study aims to use a mixture of gasoline and anhydrous ethanol with an oxygenated additive, which may reduce the COV of the combustion cycle so that engine driveability is increased, as indicated by the resulting exhaust emissions.
Coefficient of Variations (COV)
COV of combustion in SI engines is an important subject that has been widely studied because it limits the engine operating range. Many studies have been conducted to observe the causes of cycle-to-cycle variations in the combustion process, which lead to cycle variations in engine output performance. Cycle variations can be observed and characterized through the combustion pressure in the cylinder, which is measured experimentally. Figure 1 shows curves of cylinder pressure against crank angle for 4 consecutive cycles; it shows that the maximum pressure of every cycle is different. The reason for this is that the fuel in the cylinder may not burn to the same degree in every cycle. Using the coefficient of variation (COV), a statistical measure representing the ratio of the standard deviation to the mean of a set of data, the combustion process of every cycle can be analyzed from the experimental in-cylinder pressure (IMEP) data. Indicated Mean Effective Pressure (IMEP) and Pmax are important parameters and are commonly used as measures of cyclic variation 14) . It should be noted that Pmax is also a feedback signal in a closed-loop mechanism.
COV IMEP = (σ IMEP / average IMEP) × 100 (1)

The standard deviation (σ) is the square root of the arithmetic mean of the squared deviations from the mean, and the variance (σ2) is the square of the standard deviation. The COV is defined as the ratio of the standard deviation to the mean value. To capture the effect of cyclic variation in combustion, only the work delivered to the piston during the compression and expansion strokes is considered; therefore, the COV is calculated as the standard deviation of the IMEP evaluated between the closure of the intake valve and the opening of the exhaust valve, divided by the average IMEP, and it is usually expressed in percent.
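As a worked illustration of Eq. (1), the short Python sketch below computes COV IMEP from a list of per-cycle IMEP values. The sample data are invented, and the use of the population standard deviation is an assumption, since the paper does not state which estimator it uses.

```python
import statistics

def cov_imep(imep_values):
    """Coefficient of variation of IMEP, Eq. (1): COV = sigma / mean * 100 [%].

    imep_values: per-cycle IMEP values (e.g. in bar), each evaluated between
    intake valve closure and exhaust valve opening of the recorded cycle.
    """
    mean = statistics.fmean(imep_values)
    sigma = statistics.pstdev(imep_values)  # population standard deviation (assumed)
    return sigma / mean * 100.0

# Illustrative per-cycle IMEP data (bar); not measured values from this study.
cycles = [6.1, 6.4, 5.9, 6.3, 6.0, 6.2, 5.8, 6.5]
print(f"COV_IMEP = {cov_imep(cycles):.2f} %")
```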
Experimental Method
In this study, we used a single-cylinder 125 cc gasoline (SI) engine equipped with SOHC and an electronic fuel injection system. Table 1 describes the general engine specifications. The fuel types used are pure gasoline (RON 88) and gasoline-bioethanol mixtures E5, E10, and E15, and the same mixtures with the addition of the oxygenated additive cyclohexanol (C6H12O) at a composition of 0.5 % vol in each mixture (E5++, E10++, E15++). Gasoline and bioethanol are mixed in the fuel tank, so the premix level is quite high and almost constant up to the manifold. Thus, the fuel flow can be controlled and measured directly. The fuel properties of the various gasoline-bioethanol mixtures were also tested in this experiment (Table 2). A Kistler type 6617B piezoelectric sensor is used to measure the combustion pressure in the cylinder (maximum combustion pressure up to 200 bar), and an acquisition system based on LabView is used to record the combustion pressure. The crank angle position (up to 720 degrees crank angle) is obtained from a shaft encoder, and the cylinder pressure is synchronized with the crankshaft angle. The temperatures of the fuel, lubricant, spark plug, and exhaust gas are measured with a temperature sensor unit in the form of K-type thermocouples. The test engine is also connected to an engine dyno to investigate engine power, engine torque, and fuel consumption, while the contents of the exhaust gas, such as hydrocarbons (HC), carbon monoxide (CO), carbon dioxide (CO2), and excess air (O2), are measured using a QROTECH-401 (4/5 gas analyzer). Air-fuel ratio analysis is done using an oxygen (lambda) sensor at the end of the exhaust manifold. Figure 2 is the experimental arrangement of the SI engine (125 cc) connected to the other components. The experimental test is carried out after the engine has been operated until a steady-state condition is reached. The temperatures of the cooling water and oil were at 50 °C. The throttle opening angle is kept 100 % open, and the ignition timing is controlled by the mechanism in the fuel injection system. Variations in engine speed are set from low speed (4,000 rpm) through medium speed up to high speed (8,500 rpm), with the engine speed increased every 500 rpm.
COV
The effects of the ethanol and oxygenated additive blends on COV IMEP and the exhaust gas emissions of the various fuel mixtures, as well as the correlation of COV IMEP with exhaust gas emissions, were investigated at variable engine speeds. The COV IMEP from the experimental test for every fuel mixture at engine speeds of 4,000, 6,000 and 8,500 rpm can be seen in Fig. 3. Figure 3 shows that the addition of the oxygenated additive to the E15 fuel mixture decreases the COV at an engine speed of 4,000 rpm to a value of 3.67 %, a decrease of 4.27 % compared to E0. At an engine speed of 6,000 rpm, the addition of the oxygenated additive to the E5 fuel mixture decreases the COV by 1.42 % compared to E0, while at 8,500 rpm the E15 fuel mixture decreases the COV by 1.01 % compared to E0. A lower COV value indicates that less variation in combustion pressure occurs.
Ethanol has a high affinity for water and always contains a certain amount of water. This is not a problem if ethanol is used entirely as fuel, because ethanol is polar and water-soluble, so it mixes thoroughly with water; however, significant problems can arise when a gasoline-ethanol mixture is used. Phase separation is very possible in this mixture because gasoline and ethanol cannot fully mix homogeneously. This problem can be avoided by using semi-polar solvents (to improve solubility). The oxygenated additive added to each fuel mixture, from E5 and E10 to E15, is cyclohexanol with a volume of 5 % vol/vol. Cyclohexanol (C6H12O), an alcohol, is a cyclic organic compound with a six-carbon ring carrying an OH (hydroxyl) group. As the carbon chain becomes longer, the influence of the polar hydroxyl group on the molecular character tends to decrease; therefore, cyclohexanol is semi-polar. It acts as a binder between gasoline and ethanol so that the mixture can be more homogeneous.
CO Emission
The results for the gasoline-ethanol mixtures with the oxygenated additive in terms of CO emissions are shown in Fig. 4. From the test results, it can be seen that the addition of the oxygenated additive to the gasoline-ethanol blend decreases CO emissions, especially at an engine speed of 4,000 rpm. Compared to the mixtures without the additive, the addition of the oxygenated additive to E5, E10, and E15 decreases CO emissions by 1.01 %, 0.36 %, and 1.05 %, respectively, at an engine speed of 4,000 rpm. At engine speeds of 6,000 rpm and 8,500 rpm, however, the CO emissions of the E15 fuel mixture increase by 0.99 % and 2.15 %, respectively. This is because the percentage of ethanol and of oxygen from the oxygenated additive increases, resulting in leaner combustion. In general, for all blend concentrations, the CO emissions are found to be reduced as the concentration increases. The results for the gasoline-ethanol mixtures with the oxygenated additive in terms of CO2 emissions are shown in Fig. 5. The addition of the oxygenated additive to the gasoline-ethanol blend increases CO2 emissions, especially at lower engine speeds. Compared to the mixtures without the additive, the addition of the oxygenated additive to E5, E10, and E15 increases CO2 emissions by 1.1 %, 0.2 %, and 0.5 %, respectively, at an engine speed of 4,000 rpm. At engine speeds of 6,000 rpm and 8,500 rpm, the increases for E5 and E10 are 0.8 % and 0.2 %, and 1.4 % and 0.1 %, respectively. CO2 emissions increase due to the high oxygen content from the oxygenated additive, which indicates a better combustion process of the fuel in the combustion chamber.
O2 Emission
The results for the gasoline-ethanol mixtures with the oxygenated additive in terms of O2 emissions are shown in Fig. 6. The maximum oxygen content found in the exhaust gas was 1.6 % at 4,000 rpm with the E15++ fuel mixture. As the blend concentration increases, O2 generally increases compared to pure gasoline. This is due to the high oxygen content of the oxygenated additive. Higher O2 emissions indicate that there is enough oxygen in the combustion process; with sufficient oxygen, less fuel remains unburned and HC is lower, whereas with a lack of air HC would increase.
HC Emission
The results for the gasoline-ethanol mixtures with the oxygenated additive in terms of HC emissions are shown in Fig. 7. From the test results, it can be seen that the addition of the oxygenated additive to the gasoline-ethanol blend decreases HC emissions, especially at lower engine speeds. Compared to the mixtures without the additive, the addition of the oxygenated additive to E5, E10 and E15 decreases HC emissions by 40 ppm, 27.4 ppm and 49.4 ppm, respectively, at an engine speed of 4,000 rpm. At an engine speed of 6,000 rpm, it decreases the HC emissions of E10 and E15 by 8.7 ppm and 20 ppm, respectively, and at 8,500 rpm it decreases the HC emissions of E5 and E10 by 20.7 ppm and 12.3 ppm, respectively. Decreasing HC levels indicate a better combustion process, because the HC compounds react with oxygen from the ethanol and produce carbon dioxide (CO2) and water (H2O).
COV vs. Emission
The correlations between COV and exhaust gas emissions such as CO, CO2, O2, and HC can be seen in Fig. 8. From the graph, at 6,000 rpm the 15 % ethanol blend decreases the COV value by 0.26 % compared to E0. Although this decrease is not significant, it is accompanied by a reduction in CO emissions of 3 % and in HC of 31 ppm, and an increase in CO2 emissions of 1.8 %. At an engine speed of 8,500 rpm, the 1.01 % decrease in COV from E0 to E15 is accompanied by a reduction in CO emissions of 5 % and in HC of 31 ppm, and an increase in CO2 emissions of 2.8 %. At an engine speed of 4,000 rpm, the COV increases with every addition of 5 % ethanol to the fuel, yet HC emissions are still reduced, although not significantly (only 5.3 ppm). This decrease is due to the properties of ethanol, which contains a lot of oxygen, so that CO and HC emissions are slightly reduced and O2 and CO2 emissions still increase slightly even though the combustion process is not consistent. Compared to the mixtures without the additive, the addition of the oxygenated additive to E5, E10, and E15 generally decreases the COV, especially at engine speeds of 4,000 rpm and 6,000 rpm. At 4,000 rpm, the E5, E10 and E15 fuel mixtures with the additive, compared to those without it, decrease the COV value by 1.4 %, 1.18 %, and 7.91 %, respectively. At 6,000 rpm, the decreases in COV are 1.5 %, 0.32 % and 0.24 % for E5, E10 and E15, respectively, while at 8,500 rpm the COV value decreases by 0.05 % for E5 but increases by 1.21 % and 0.51 % for E10 and E15, respectively. As COV decreases, CO and HC emissions decrease while CO2 and O2 increase. Conversely, when COV increases, CO and HC emissions are still slightly reduced and O2 and CO2 emissions still increase slightly, even though the combustion process is not consistent. This is due to the high oxygen content of the ethanol and the oxygenated additive.
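The tendency described above can be quantified, for example, with Pearson correlation coefficients between COV IMEP and each emission across the tested blends at a given speed. The sketch below uses hypothetical values, not the measured data behind Fig. 8.

```python
# Minimal sketch (illustrative data only): quantifying the reported tendency
# that lower COV_IMEP goes with lower CO/HC and higher CO2, using Pearson
# correlation across fuel blends at one engine speed.
import numpy as np

cov_imep = np.array([4.7, 4.4, 4.1, 3.7])          # %   (E0, E5++, E10++, E15++), hypothetical
co       = np.array([3.9, 3.1, 3.3, 2.9])          # %   CO, hypothetical
hc       = np.array([210.0, 175.0, 180.0, 160.0])  # ppm HC, hypothetical
co2      = np.array([9.0, 9.8, 9.6, 10.1])         # %   CO2, hypothetical

for name, series in [("CO", co), ("HC", hc), ("CO2", co2)]:
    r = np.corrcoef(cov_imep, series)[0, 1]
    print(f"Pearson r(COV_IMEP, {name}) = {r:+.2f}")
```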
Conclusion
The following conclusions can be drawn from this study. The ethanol blend with the oxygenated additive in gasoline decreases the variation in combustion pressure, i.e., it lowers the COV. It also reduces exhaust emissions: CO and HC emissions are found to be reduced, while CO2 and O2 emissions increase as the blend concentration increases. The correlations between COV and exhaust gas emissions are as follows: as COV decreases, CO and HC emissions decrease while CO2 and O2 increase; and when COV increases, CO and HC emissions are still slightly reduced while O2 and CO2 emissions still increase slightly, even though the combustion process is not consistent. This is due to the high oxygen content of the ethanol and the oxygenated additive.
Planned experiments or autonomous adaptation? An assessment of initiatives for climate change adaptation at the local level in Ghana
Abstract There are increasing concerns about the likely impacts of climate change on poverty, economic growth, and the overall development of poor countries. The need to adapt to the daunting challenges posed by climate change has resulted in a multiplicity of responses from various actors across scales. Evidence suggests that at the national and sub-national levels in Ghana, initiatives to address climate change are nascent. The objective of this paper is to uncover the nature of these initiatives, the actors involved and the successes and challenges for achieving sustainable outcomes through local-level interventions in Ghana. The paper adopted the qualitative cross-sectional case study design involving the use of key informant interviews, observations, focus group discussions and institutional reports. The findings suggest that initiatives for adaptation to climate change were largely autonomous since they were not necessarily outcomes from mainstream planning aimed at addressing climate change. Key actors behind the initiatives were governmental institutions, agencies, non-governmental organisations, and community leaders. The paper recommends that further efforts be made to integrate climate change adaptation initiatives at the local level of planning for proper targeting, coordination, collaboration, and sustainable adaptation.
Introduction
The climate change phenomenon has become a painful reality. The World Bank Group (2021) reports that since 1960, temperatures in Ghana have increased by approximately 1°C, representing an average increase of about 0.21°C per decade. Additionally, the number of very hot days has risen by about 13% per year, and hot nights by 20% per year. Within the same period, the data show that precipitation in Ghana has been characterised by a high degree of interannual and interdecadal variability, culminating in an overall reduction in cumulative rainfall of 2.4% per decade. Projections from the same World Bank report paint a gloomy future for the country. For instance, the country is projected to continue getting warmer, with mean temperatures projected to rise by up to 3.0°C by 2050 and 5.3°C by 2100. In addition to rising temperatures, precipitation is projected to be highly variable and to remain so throughout the century (The World Bank Group, 2021).
Generally, manifestations of climate change in Ghana are felt through the interplay of high temperatures, sea level rise/coastal erosion, erratic rainfall patterns and weather extremes or disasters (Tuebner, 2023; United States Agency for International Development [USAID], 2017; The World Bank Group, 2021).The most fragile ecological zones particularly the coastal and northern savannah parts of Ghana are the worst affected in terms of these impacts (Tuebner, 2023;The World Bank Group, 2021;Yaro, 2010).The coastal zone is home to 25% of Ghana's population and is vulnerable to coastal erosion and tidal waves (USAID, 2017).On average, Ghana loses 2 meters of its coastline annually to coastal erosion and in some areas, the rate of erosion is as high as 17 meters (Abbey, 2023).The United Nations Educational, Scientific and Cultural Organization ([UNESCO], 2021) reports that coastal erosion and flooding swallowed 37% of Ghana's coastal land between 2005 and 2017.This phenomenon has displaced several coastal communities, destroyed infrastructure and livelihoods.
The Northern part of Ghana is disproportionately susceptible to climate change impacts due to several factors. The region has the highest exposure to temperature increases in the country and, by virtue of its location in the Volta Basin, is vulnerable to and suffers from riparian flooding (The World Bank Group, 2021). Flooding in the White Volta River Basin has become a perennial affair affecting hundreds of thousands of people and their livelihoods (Li et al., 2022). In addition, the Northern Savannah zone is the poorest region in Ghana and, worst of all, suffers from the worst forms of environmental scarcity due to desertification, deforestation, and low and erratic rainfall patterns (Derbile, 2010; Ghana Statistical Service [GSS], 2018; The World Bank Group, 2021). These factors, coupled with the over-reliance on rainfed agriculture, create a vulnerability situation that attracts the interest of climate change researchers. Evidently, the situation calls for immediate action to foster sustainable adaptation to the prevailing impacts and to build resilience against future impacts.
The need to adapt to the daunting challenges posed by climate change has resulted in a multiplicity of responses from various actors across scales.Evidence suggests that at the national and sub-national levels in Ghana, initiatives to address climate change are taking place (USAID, 2017;The World Bank Group, 2021, Würtenberger et al., 2011;Yaro, 2010).Surprisingly, the nature of these initiatives and their impacts at the local level in Ghana are yet to be assessed empirically.This paper seeks to uncover the nature of climate change adaptation initiatives at the local level, the actors involved and the successes and challenges for achieving sustainable outcomes in Ghana.
The nature of climate change adaptation initiatives
The United Nations Framework Convention on Climate Change (UNFCCC) defines climate change adaptation as the adjustment in natural or human systems in response to actual or expected climate change effects which moderates harm or exploits beneficial opportunities (United Nations Framework Convention on Climate Change [UNFCCC], 2006).Climate change adaptation initiatives, plans and strategies are designed to deal with current and future climate change impacts.They are adaptation schemes (programmes and projects) developed and implemented at the global, regional, national, and local levels for adaptation to climate change (United Nations Framework Convention on Climate Change [UNFCCC], 2023a).Therefore, adaptation initiatives are regarded as means of reducing risks posed by climate change (Adger et al., 2007;Dessai & van der Sluijs, 2007).
According to the Intergovernmental Panel on Climate Change (IPCC) and the United Nations Framework Convention on Climate Change (UNFCCC), adaptation measures can be classified based on the timing, goal, and motive of their implementation. Adaptation measures are thus either autonomous (spontaneous) or anticipatory (planned) (Intergovernmental Panel on Climate Change [IPCC], 2007; UNFCCC, 2006). The distinction between planned and autonomous adaptation is that planned adaptation results from deliberate interventions, while autonomous (spontaneous) adaptation occurs not as a conscious response to climatic stimuli but is triggered by ecological changes in natural systems and by market or welfare changes in human systems (IPCC, 2007). It has also been argued that, because adaptation is a process, it is important to put existing knowledge into use, as it can be readily appreciated and built upon (André et al., 2021). This makes it imperative to move from autonomous to proactive policies, strategies, and plans. Unfortunately, existing efforts have focused more on capacity building for planning and institutional adaptation to the neglect of local-level initiatives. For instance, the African Ministerial Conference on the Environment (AMCEN) has identified over sixty (60) adaptation efforts within Africa, all focusing on or having a capacity-building component (African Ministerial Conference on the Environment [AMCEN], 2010). The emphasis on capacity building was based on the uncertainty surrounding the future impacts of climate change and the need to build capacity for long-term adaptation. Moreover, in many circles, climate change is still considered a "grey" area needing actions to strengthen adaptive capacity (Atanga et al., 2017; Willems & Baumert, 2003). In this context, actions are required to learn what works under which conditions and/or circumstances. Some of these actions include the implementation of National Adaptation Programmes of Action (NAPAs) projects, which are essential, as is an increased coverage of project types and sectors.
From the view of Agrawal (2008), adaptation initiatives should begin with understanding current vulnerability, building capacity to support adaptation planning and implementation, learning from pilot actions and deploying strategies and measures to operationalise climate change adaptation in vulnerable regions, sectors and populations.The assessment of current, urgent vulnerabilities has resulted in country-driven priorities that are sufficient to invest in building capacity and pilot actions for climate change adaptation (AMCEN, 2010).There remains a shortfall in the knowledge on the forms of adaptation to be promoted at different spatial scales considering the need to make adaptation initiatives context specific as suggested by several researchers (Agrawal, 2008;AMCEN, 2010;Intergovernmental Panel on Climate Change [IPCC], 2023;Mendelsohn, 2000).
At the global and regional levels, it has been suggested that climate change adaptation initiatives should conform to the notion that adaptation is a process and context specific (AMCEN, 2010;UNFCCC, 2023a).In this regard, adaptation initiatives should be linked to existing policies and development frameworks to ensure holistic development (IPCC, 2023).Addressing initiatives as "standalone" projects can produce adverse consequences for development.For instance, what works as an adaptation solution in one place can produce negative consequences in other places.Apparently, climate change adaptation initiatives should form part of the existing development plans and strategies (e.g., sustainable development or poverty reduction strategies) at sub-national, national and regional levels across different scales (Atanga et al., 2017;Mimura et al., 2014 ).Yet, many of the initiatives aimed at addressing climate change remain fragmented, incremental, sector-specific, and unequally distributed across regions (IPCC, 2023).
Globally, the Intergovernmental Panel on Climate Change ([IPCC], 2014) reports that adaptation planning is transitioning from a phase of awareness to the formulation and implementation of actual strategies and plans. So far, over 170 countries and many cities across the world have mainstreamed climate change adaptation into their policies and planning processes (Intergovernmental Panel on Climate Change [IPCC], 2022). In its latest report, the Intergovernmental Panel on Climate Change specifies that adaptation planning and implementation are flourishing across all sectors and regions with many success stories (IPCC, 2023). The report, however, raises caveats about emerging gaps in adaptation as well as the occurrence of maladaptation amidst the progress and achievements. These gaps and maladaptation are more pronounced in developing countries and are projected to worsen if nothing is done to bolster current adaptation efforts. It is argued that, for lower-income countries to effectively address these gaps in adaptation and reverse maladaptation, the current global financial flows need to be revised (Intergovernmental Panel on Climate Change [IPCC], 2022). The Paris Agreement and its precursor, the Kyoto Protocol, acknowledge the gaps in adaptation financing and proposed a rigorous climate change financing regime to facilitate adaptation and mitigation in low-income countries (United Nations Framework Convention on Climate Change [UNFCCC], 2023b). Empirical evidence from Adonadaga et al. (2022) reveals that financial constraints have negatively affected the progress of national climate change adaptation strategies in Ghana.
Adaptation efforts are occurring in Ghana at the national and subnational levels in most of the sectors such as agriculture and food security, water resources, health, urban management, coastal zones, forestry, cities, and tourism.Most of these efforts focus more on planning especially, building adaptive capacity rather than implementing concrete adaptation actions (Atanga et al., 2017).Generally, the delay in adaptation action has been blamed on limited knowledge and capacities (AMCEN, 2010).This includes knowledge pertaining to the identification of climate change events, the need or pertinence for adaptation and perhaps, the what, how, when and who questions about adaptation.
However, in the last decade, significant progress has been made in adaptation policy and planning in Ghana.The 2012 National Urban Policy (currently undergoing review/update) addresses climate change concerns and offers directions for mitigation and adaptation within the urban context.The National Climate Change Policy (NCCP) provides the overall framework for addressing climate change in Ghana.The NCCP focuses on reducing vulnerabilities especially among the most disadvantaged people, sectors, and regions (Food and Agriculture Organisation [FAO], 2023).In order to address the data requirement of national and international bodies, the EPA launched the Climate Ambitious Reporting Programme (G-CARP) in 2013 to support the preparation of national and international reports on greenhouse gas emissions and climate measures including financial and technical support (Nationally Determined Contribution [NDC]-Partnership, 2023).In addition, the National climate change adaptation strategy (NCCAS) and the Ghana National Climate Change Master Plan Action Programmes for Implementation: 2015-2020 were prepared to ensure a consistent, comprehensive and a targeted approach to increasing climate resilience and to reduce the vulnerability of the poor (United Nations Office for Disaster Risk Reduction [UNDRR], 2023).
Similarly, Ghana's Fourth National Communication to the UNFCCC captures the latest situation in terms of emission levels, vulnerabilities, mitigation and opportunities for adaptation (Environmental Protection Agency [EPA], 2020). In the latest Nationally Determined Contribution (NDC) under the Paris Agreement, Ghana spells out its strategies for aligning its policies and development with regional and international requirements for safeguarding the environment and for accelerating the implementation of climate actions by strengthening the mobilisation of public, private and grassroots participation (Environmental Protection Agency [EPA] & Ministry of Environment, Science, Technology and Innovation [MESTI], 2021).
Besides, the National Development Planning Commission (NDPC) provides guidelines that mandate government Ministries, Departments and Agencies (MDAs) and Metropolitan, Municipal and District Assemblies (MMDAs) to incorporate climate change into their Medium-Term Development Plans (MTDPs) (Atanga et al., 2017; Musah-Surugu et al., 2018). Evidence shows that many ministries are progressively introducing emission reduction strategies while making efforts to climate-proof their plans against the adverse effects of climate change. For instance, the National Transport Policy seeks to contribute towards meeting the country's NDC and mainstreaming climate considerations in the development of transport infrastructure (Ministry of Transport [MoT] et al., 2020), and the Ministry of Works and Housing (MoWH) has tailored its strategies towards achieving climate resilience in the housing sector (Ministry of Works and Housing [MoWH], 2021). In the agricultural sector, the government's flagship project (Investing for Food and Jobs (IFJ): An Agenda for Transforming Ghana's Agriculture 2018-2021) has a strong focus on climate change adaptation and increasing climate resilience in agriculture (Ministry of Food and Agriculture [MoFA], 2018). Also noteworthy is the ongoing effort to facilitate private participation in climate change mitigation and adaptation. In line with this, the EPA in 2020 released a Private Sector Engagement Strategy for the National Adaptation Plan. The plan outlines key barriers to private sector participation, considers the role of the private sector in planning and implementing adaptation measures, and identifies key stakeholders in the private sector (EPA, 2020).
The literature shows that at the global, regional, and national levels, significant progress has been made to build institutional capacities for climate change adaptation initiatives. However, several gaps in adaptive capacity remain to be addressed. The review further reveals that Ghana's response to climate change looks impressive from the institutional point of view. Indeed, in terms of signatures and documentation to meet the external requirements of the international architecture, Ghana's response to climate change has been very good (Cameron, 2011). However, the actual impacts of these initiatives on the prevailing vulnerability situation are yet to be ascertained. There are also challenges associated with the different collaborations and partnerships that have been initiated through these projects to complement and maximise comparative advantages. In addition, the sustainability of existing projects remains a challenge under the prevailing conditions of scarce financial resources.
Conceptualising climate change adaptation initiatives within the context of vulnerability theory
Vulnerability as a concept assumes different meanings in different contexts and disciplines. From the natural hazards perspective, vulnerability is the risk of exposure of an ecosystem to a climate change-related hazard (Cutter, 1996; Dilley & Boudreau, 2001). From a food security standpoint, Dilley and Boudreau (2001) and Fellmann (2012) describe vulnerability as the outcome of a situation such as food insecurity or famine. Vulnerability may also be considered in terms of a starting point and an end point. Vulnerability as a starting point addresses the character, distribution and causes of vulnerability and assumes that future vulnerability conditions can best be addressed by reducing prevailing vulnerability conditions (Dilley & Boudreau, 2001; Kelly & Adger, 2000; O'Brien et al., 2004; Vincent, 2004). Vulnerability as an end point, on the other hand, describes the magnitude of the climate change problem and seeks to quantify vulnerability by considering the net impacts of climate change minus adaptation (Levina & Tirpak, 2006; O'Brien et al., 2004). In effect, as an end point, vulnerability is the residual consequences that remain after adaptation has taken place (O'Brien et al., 2004), expressed mathematically as: Vulnerability = (exposure + sensitivity) − adaptive capacity (EPA, 2021). The model considers adaptive capacity as the main determinant of vulnerability, arguing in favour of the capacities and abilities of systems to prepare for and respond to threats (Birkmann, 2013; Cardona et al., 2012; Cutter et al., 2003). Approaches pertaining to vulnerability as an end point advocate for the scientific assessment of the adaptive capacities of societies to establish the basis and extent of vulnerability (Birkmanna et al., 2022).
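As a purely illustrative reading of the end-point formulation quoted above, the short Python sketch below computes a vulnerability index from hypothetical normalized scores; the function and the example profiles are not part of the paper, which applies the concept qualitatively.

```python
# Illustrative sketch of the end-point formulation
# (Vulnerability = exposure + sensitivity - adaptive capacity),
# using hypothetical scores normalized to the 0-1 range.

def vulnerability_index(exposure: float, sensitivity: float, adaptive_capacity: float) -> float:
    """Higher exposure/sensitivity raise vulnerability; adaptive capacity lowers it."""
    return (exposure + sensitivity) - adaptive_capacity

# Hypothetical community profiles (exposure, sensitivity, adaptive capacity)
profiles = {
    "rainfed-farming community": (0.8, 0.9, 0.3),
    "irrigated-farming community": (0.8, 0.5, 0.6),
}
for name, (e, s, a) in profiles.items():
    print(f"{name}: vulnerability index = {vulnerability_index(e, s, a):.2f}")
```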
Vulnerability may also be considered as internal and external. According to Füssel (2005, 2007) and Füssel and Klein (2006), internal vulnerability is the extent to which persons, areas, or events are vulnerable to unfavourable weather changes, whereas external vulnerability refers to the external stressors that a system is exposed to. Vulnerability in this case encompasses both internal and external factors affecting the ability of the system to adapt to changes.
Vulnerability is generally conceived as the extent to which a system is susceptible to, and/or unable to cope with, the adverse effects of climate change, including climate variability and extremes (Intergovernmental Panel on Climate Change [IPCC], 2007). It may also refer to the ability of an individual, group or community to cope with the effects of a hazard (emanating from climate change) or to recover from it (Tompkins et al., 2005). Vulnerability is applied in this paper to mean the degree to which a system is exposed to a climate change phenomenon, how it is affected or will be affected (adversely or beneficially, directly or indirectly) and how it can potentially cope with, recover from and moderate the impacts. Drawing from the above definition, vulnerability to climate change is a function of a system's exposure, sensitivity, and adaptive capacity (Fellmann, 2012; IPCC, 2007). This is the Triple Imagery of Vulnerability (TIV) (Figure 1). The TIV argues that a vulnerable socio-ecological system is one that is exposed to climate change impact(s), is sensitive to the impacts and has a low capacity to cope with or adapt to the impacts.
Exposure in the TIV defines the degree to which a system is in contact with a climate change phenomenon.As Muriuki (2011) puts it, exposure describes who and what is at the risk of climate change.This could be an entire ecosystems, sub-systems, or elements within a system: flora, fauna, persons, groups or infrastructure and many others.Exposure to climate change and its associated impacts varies from one location to another.The Intergovernmental Panel on Climate Change (IPCC, 2023) reports that increases in global warming is usually associated with widespread and pronounced regional changes in mean climate and extremes.In Ghana, The World Bank Group (2021) observed significant differences in changes in climate and climate extremes across different ecological zones with the northern savanna part of Ghana being the worst affected.The fact that people and places are exposed disproportionately to climate change phenomena suggests that initiatives for climate change must be based on contextual realities.
Figure 1. The Triple Imagery of Vulnerability (TIV). Source: Atanga (2014).
Sensitivity describes the extent to which a system is affected by climate change impacts, whether adversely or beneficially, directly or indirectly (IPCC, 2007; Kelly & Adger, 2000). The characteristics of a system determine its sensitivity to climate change stressors and vary across different places, systems, sectors and people. For instance, an agricultural system that is rain-dependent is more sensitive to drought than one that has access to irrigation facilities. Ghana's sensitivity to climate change is considered high due to the high dependence on nature-based livelihoods (EPA, 2021). Rainfed and subsistence agriculture remains the main source of livelihood for over half of the population of Ghana (EPA, 2021; GSS, 2018; Tuebner, 2023; The World Bank Group, 2021).
Adaptive capacity defines a system's ability to withstand climate change and its associated extremes, to moderate impending harms and to exploit emerging opportunities (IPCC, 2007).It is argued that adaptive capacity is the overriding element of vulnerability due to its ability to water down the effects of exposure and sensitivity (Birkmanna et al., 2022;Fellmann, 2012;Gallopin, 2006).The adaptive capacity of a system is greatly influenced by its access to and control of human (knowledge of climate risks, technology, conservation agricultural skills, good health to enable labour, etc.); physical (irrigation infrastructure, seed and grain storage facilities, etc.); natural (reliable water source, productive land, etc.); political (policies, institutions and power structures, etc.); social (women's savings and loans groups, farmer-based organizations, stable and effective institutions, etc.) and financial (micro-insurance, diversified income sources, etc.) resources (Atanga, 2014; Cooperative for Assistance and Relief Everywhere, Inc [CARE] International, 2010; Dazé et al., 2009;EPA, 2021;Napogbong et al., 2021).In the realm of climate change, adaptive capacity and adaptation are knottily related such that adaptive capacity is considered as a limit beyond which adaptation is no longer possible.Levina and Tirpak (2006) described adaptive capacity as the limits of adaptation or coping range within which adaptation can occur.
It is brazenly clear that the quantity, quality and control of resources for adaptation is crucial for increasing adaptive capacity and reducing vulnerability.Unfortunately, the access to and control of the resources necessary for building adaptive capacity varies within countries, communities and even households.Generally, poor countries or societies have low adaptive capacities because they lack the resources needed to promote adaptation (Birkmanna et al., 2022).In the same way, vulnerable and marginalised persons such as women and migrants equally have low adaptive capacities owing to poor access to vital resources required for adaptation (CARE, 2010;Dazé et al., 2009;Yaro, 2010).
There is a growing acceptance that sustainable adaptation will greatly depend on the ability of planning regimes to respond to the growing need to increase the adaptive capacities of communities and groups to cope with the daunting challenges of climate change.The IPCC (2014; 2023) has emphasised the need for a cross-scale adaptation planning to address the diverse needs of society.It is further argued that sustainable development outcomes under climate change are the products of planned adaptation processes (Atanga et al., 2017).Consequently, development policies, plans and strategies developed within a multi-stakeholder environment can potentially increase the adaptive capacity of societies, reduced exposure through long-term mitigation efforts and reduce sensitivity by increasing adaptive capacity (Figure 2).Planned adaptation initiatives are more focused, targeted and more impactful and therefore hold the key to sustainable adaptation in resource poor settings.Admittedly, this will require a concerted effort and commitment from global, regional, sub-regional, national and sub-national stakeholders (Figure 2).Apparently, autonomous adaptation measures and the fire-fighting approach to the climate change menace is untenable under the current circumstances and will have disastrous consequences in the future.Intergenerational analysis of the climate of the district shows a significant increase in temperature over a generation with negative consequences on economic and social life (Atanga, 2014).This is backed by scientific evidence from Apuri et al. (2018) that shows that temperature in the KNWD has increased at an average of between 1.1°C to 1.6°C over a period of 30 years.The district experiences an average rainfall of about 921 mm with a range of between 645 mm and 1250 mm (Ministry of Food and Agriculture [MoFA], 2023).The district has a unimodal rainfall regime of about 5 to 6 rainy months from April/May to September/October and a dry season lasting for 6 to 7 months from October to April (KNWDA, 2018).A study by Yaro indicates that the rainfall regime of the district has gone through significant changes from less variations in the 1960s to higher variations between the 1980s and 1990s (Yaro, 2004).This is corroborated by a recent study by Apuri et al. (2018) which reports of declining rainfall patterns in the district.These changes have dire consequences on the lives and livelihoods in the district that largely depend on the natural environment.
Overview of the vulnerability context
The district forms part of the Birimian and Granitic rock formation with generally low-lying and undulating relief characterised by isolated hills rising up to 300 meters in the western part of the district (KNWDA, 2018). The district lies within the White Volta Basin and is mainly drained by the Sissili, Asibelka, Atankwide and Anayere rivers and their tributaries. These rivers dry up in the dry season but flood at the peak of the rainy season (usually August-September). The Sudano-Sahelian climatic condition has influenced the open grass vegetation that dominates the area, with patches of dense vegetation along river basins and forest reserves. The vegetation of the district is largely of the Sahel Savanna ecoregion, composed of a mosaic of short grasses, trees, and shrubs (KNWDA, 2018). Common economic trees in the district include the shea, dawadawa, baobab, mango, nim, and acacia. Human activities (particularly farming, overgrazing, logging, sand winning, etc.) have impacted negatively on the vegetation cover, leading to deforestation and the emergence of desert conditions (KNWDA, 2010). The common soil type in the district is the Savanna Ochrosols, which are inherently low in fertility (Apuri et al., 2018; Awuni et al., 2023; KNWDA, 2018).
The district has a population of 90,735, of which 43,909 are males and 46,826 females (KNWDA, 2018). The Kassenas and the Ninkarisi (called Nankana by the Kassenas) are the two dominant ethnic groups in the district. The Kassenas speak Kasem whilst the Ninkarisi speak Ninkare (also called Nankam by the Kassenas). Agriculture is the dominant economic activity in the district, with about 81.7 percent of the population engaged in agricultural-related activities. The state of agriculture is largely rainfed, with pockets of dry-season gardens along riverbanks and dugouts. Over time, the fortunes of agriculture have dwindled considerably, leading to high levels of poverty, food insecurity, outmigration and unemployment, particularly among women. The district has high potential in ecotourism, art, and music. For instance, the crocodile ponds in Paga as well as pottery and traditional murals in Sirigu are internationally recognised tourist attractions (KNWDA, 2018). However, these economic potentials face existential threats from climate change impacts.
The profile of the KNWD shows a clear vulnerability situation in the face of climate change threats. The district is disproportionately exposed to multiple climatic threats such as floods, droughts, and windstorms (Apuri et al., 2018; Atanga, 2014; KNWDA, 2018; Yaro, 2004). Additionally, the district mainly depends on rainfed agriculture, thus making it highly sensitive to the prevailing climate change impacts. The district also experiences high levels of poverty, which shows that the level of adaptive capacity is low. Drawing from the Triple Imagery of Vulnerability (TIV) framework, the KNWD provides the peculiarities in terms of exposure to climate change and typicality in terms of sensitivity and adaptive capacity for the study of vulnerability and adaptation measures.
Methodology
The paper adopts a qualitative, cross-sectional case study design for the analysis of climate change adaptation initiatives in Ghana. The qualitative approach is widely used in social science because of its propensity to address the why and how issues that typify much of social phenomena (Berg, 2001). The approach incorporates the ontological notion of multiple truths and multiple realities (Erlingsson & Brysiewicz, 2013). Thus, in a phenomenon such as climate change, individuals perceive its occurrence and effects in different ways that mirror their worldviews. The qualitative approach treats the researcher as the main instrument of data collection instead of relying on the manipulation of inert objects using questionnaires, machines or inventories. The approach requires direct contact between the researcher and the subjects of the study in natural settings, sites, or institutions to observe or record behaviour or events.
Drawing from the qualitative approach, the paper adopted a cross-sectional case study design.The Kassena-Nankana West District constitutes the case for the study from which analytical generalisation will be made and implications drawn for the rest of Ghana.The case study method was chosen because of its procedural adroitness that provides the means for a systematic analysis of events, data collection, data analysis and reporting the results.The method embraces depths of insights of the research problem and encourages the deployment of participatory rural appraisal (PRA) tools that promote interaction and dialogue between researchers and respondents (Bell, 2004;Erlingsson & Brysiewicz, 2013;Kumar, 1999).These exceptional qualities of case study as a method of inquiry warrant its use for the analysis of climate change adaptation initiatives.The method was helpful in aiding the in-depth assessment of climate change adaptation initiatives from multiple perspectives.
Data were collected from multiple sources: secondary sources (reports and publications) and primary sources (discussions, observation and interviews). Secondary data were sourced from District Medium Term Development Plans (DMTDPs), District Annual Action Plans (DAAPs), composite budgets and monitoring/progress reports. Documents covering three planning periods (2010 to 2013, 2014 to 2017 and 2018 to 2021) were considered. The main methods of collecting primary data were Key Informant Interviews (KIIs), Focus Group Discussions (FGDs) and observation. KIIs were held with five (5) district assembly staff, six (6) heads of decentralised institutions and agencies, the heads of the Water Resources Commission (WRC) and the EPA, project officers of two (2) NGOs and five (5) assembly members. The KIIs proved useful in obtaining qualitative data about respondents' views, perceptions and experiences of climate change and of adaptation responses to prevailing impacts. Additionally, observation was used to assess the conditions of the initiatives in order to ascertain first-hand information regarding their prospects for adaptation and sustainability.
The FGDs were conducted with area council members. In all, seven (7) FGDs were conducted. The total number of participants in the FGDs was 58, comprising 10 in Nakong, 5 in Katiu, 7 in Navio, 9 in Chiana, 7 in Paga, 11 in Sirigu and 9 in Mirigu. Respondents for both the KIIs and FGDs were purposively sampled, based on their positions as leaders and their foreknowledge of initiatives in the district and/or in their communities.
Thematic analysis was used to analyse the data manually.First, the interviews and discussions were recorded, transcribed, read through and important content extracted and organised into themes generated from the interview and discussion guides.Analysis of secondary data followed the same procedure.Lastly, common ideas, topics, and quotes in each theme were further summarised, discussed and interpreted.
Introduction
The purpose of the paper was to uncover the nature of climate change adaptation initiatives, the actors involved and the successes and challenges for achieving sustainable outcomes through local level interventions in Ghana using the Kassena-Nankana West District as a case study.This section presents results of the analysis which include initiatives pursued by communities, organisations, and the District Assembly.The initiatives are categorised into six themes namely, integrated soil and land management, natural vegetation management, integrated water resources management, education and awareness raising, improved seed varieties and animal breeds, and promotion of alternative livelihoods.
Integrated water resources management
The integrated water resources management (IWRM) initiative aims to promote positive environmental practices that are crucial for improving community water resources management. The initiative uses a participatory and inclusive process for planning and executing water resources conservation while developing activities that address the social and economic needs of beneficiaries. Specific initiatives pertaining to IWRM in the Kassena-Nankana West District were woodlots and the buffer zone system.
Under the buffer zone system, trees were planted along the margins of a river or the catchment area of a dam or dugout to serve as a buffer or barrier between the water resource and human activities, particularly farming. The buffer zone helps the water resource to retain water and maintain its integrity. The study identified two types of tree planting used for the buffer zone projects: fruit trees (e.g., at Kandiga, Katiu, Batiu, Pingu, Nyangania and Kayoro) and woodlots (e.g., the Nakong project). The trees for the woodlots were mainly acacia and teak; the fruit trees were mango trees, examples of which were the Kandiga and Buru-Kazugu projects (see Figures 4 and 5).
The Kandiga Project was a collaborative initiative between the District Assembly on one hand and the Kandiga Chief, Naba Henry Abawine Amenga-Etego II, and the Azeadumah community in Kandiga on the other hand. The project ran on a shared arrangement in which the assembly supplied the mango seedlings and fenced the area, while the Azeadumah community supplied labour for planting and maintenance of the plantation. The proceeds of the plantation (fruits), when harvested, were to be shared between the assembly and the Azeadumah community according to an agreed ratio. The project is an income-generating activity expected to benefit both the KNWDA and the Azeadumah community while maintaining the dam. Thus, the project is integrative, participatory and comprehensive, and has high potential for sustainability.
The Buru-Kazugu component of the IWRM initiatives was a community-based project implemented by the EPA through the Ghana Environmental Management Project (GEMP) and funded by the Canadian International Development Agency (CIDA). Pivotal to the success of the project was the Buru-Navio Chief, Pe Adam Kwarasei II, an environmental enthusiast who managed to galvanise sufficient support and cooperation from his community on one hand and the funding partners on the other. The project combined tree planting and potable water supply (see Figure 5). The Buru-Kazugu community was responsible for the maintenance of the plantation, including planting, watering and pest control, while the EPA provided the seedlings and water infrastructure. Field observation revealed that the plantation was not properly taken care of, as community members reneged on their responsibility. For instance, the trees were left to wither, and the solar-powered mechanised component of the borehole had broken down. It was revealed during the FGDs that the caretakers abandoned the plantation for lack of remuneration.
Education and sensitization programmes
Sensitization and education projects on environmental and climate change awareness were conducted by district- and regional-level organisations. Prominent among these initiatives was the annual environmental education and awareness programme organised by the EPA, with emphasis on drought and desertification. Document review shows that between 2013 and 2017 the EPA sensitised over eleven (11) communities in the Kassena-Nankana West District on environmental management and conservation. So far, over 3,000 people have benefitted from the campaign. Additionally, the EPA has formed and trained environmental clubs in senior high schools within the district in order to further its sensitization drive. Another awareness and education project by the EPA was radio programmes. The radio sensitization programmes were run on four (4) local radio stations (URA Radio, WORD FM, Radio Builsa and Radio A1) in the English, Kasem, Buili and Gurune languages.
The District Assembly, World Vision Ghana, the Water Resources Commission and the Organisation for Indigenous Initiatives and Sustainability (ORGIIS) also organised periodic sensitization programmes on environmental awareness. In particular, the Water Resources Commission (WRC) has erected giant billboards along the Paga-Bolgatanga Highway to raise awareness about climate change. One of these billboards was erected close to the Anayire's bridge along the Paga-Bolgatanga road.
Alternative livelihoods projects
The alternative livelihood projects identified in the study were the training of women's groups in shea butter extraction, soap and pomade making, weaving, beekeeping and baobab seed oil extraction, as well as the provision of credit to women. The EPA, World Vision Ghana and ORGIIS were involved in these activities. The EPA, through GEMP and with funding from CIDA, championed an integrated beekeeping project in Nakong in the western part of the district (see Figure 6). The project aims to protect the environment by controlling the menace of bushfires, indiscriminate felling of trees and charcoal production.
ORGIIS and World Vision Ghana also organised training for women in shea butter extraction, soap and pomade making, weaving and baobab seed oil extraction. In addition, revolving loans were provided to women's groups to support petty trading and other forms of business. The essence of these initiatives was to empower women to overcome poverty and thus reduce their dependence on fuelwood harvesting and other destructive environmental practices.
Improved seed varieties and animal breeds
The main aim of the improved seed varieties and animal breeds was to increase the resilience of farmers against the impacts of climate change. KIIs with the District Agricultural Development Unit (DADU) revealed that the local crop varieties and livestock breeds were performing poorly because of climate change. In particular, the local millet varieties, early millet (naara) and late millet (zεa), as well as guinea corn, take longer to mature and are sensitive to drought and heavy rainfall. To remedy this problem, early-maturing and climate-resilient maize varieties such as pioneer, kapala dorke and panar 53 were promoted and introduced to farmers. However, the intervention was not helpful to many farmers because the majority of farmers do not grow maize. Besides, the maize seeds were costly and required fertilizer, which farmers could not afford. During the FGDs, discussants, particularly those in the eastern part of the district (Sirigu and Mirigu area councils), revealed that they prefer millet and guinea corn. Unfortunately, DADU had no improved seeds for the local millet and guinea corn varieties. Some discussants recalled that some years back improved varieties of guinea corn, known locally as agric, were introduced in the area, but farmers rejected them because they disliked the taste.
The study found that DADU had no improved livestock varieties to offer farmers for adaptation. Nevertheless, DADU encouraged farmers to adopt improved breeds of sheep, goats and cattle from Burkina Faso, Mali and Niger. DADU had also made progress in guinea fowl breeding, the result of which is the fast-growing and heavier Belgian breed of guinea fowl. The existence of the new breed was, however, not known to many farmers and it was therefore not widely adopted. During the FGDs, discussants, particularly in Mirigu and Sirigu, expressed their lack of awareness of the improved breeds of guinea fowl. Discussants and key informants indicated that many farmers in their areas travel to Zebilla in the Bawku West District to buy improved breeds of guinea fowl chicks for rearing.
Integrated soil and land management
Integrated soil and land management comprised initiatives pertaining to agronomic practices aimed at reducing soil nutrient depletion and soil erosion through measures such as zero burning of plant residue, minimum/zero tillage, contour ploughing, chemical and physical changes, maintaining plant residue on the farm, and the preparation and use of compost. Other measures were zero grazing of livestock through the harvesting of animal feed, creating community pastures, controlled grazing and discouraging the keeping of large flocks. These initiatives were part of African Development Bank (AfDB) and World Bank pilot programmes on sustainable land and water management. The initiatives were not conceived, planned and implemented by the KNWDA and can best be described as planned experimental projects from external organisations.
Farmer-managed natural vegetation regeneration
The natural vegetation regeneration initiatives identified by the study took two forms: community-managed afforestation through shrub pruning, and agroforestry. Shrub pruning is described in Ninkare (spoken by the Ninkarisi) and Kasem (spoken by the Kassena people) as tintuori lebike tia and vokogo mo geri tio, respectively (see Figure 7). Both the shrub pruning and the agroforestry were implemented by World Vision Ghana and ORGIIS. The project officer of ORGIIS explained that the logic behind the shrub pruning was that, as the climate changes, the vegetation also undergoes natural changes such that more adaptable plant species emerge to replace the less adaptable ones. He further argued that it was much easier to allow the climate-tolerant species to grow naturally and improve the vegetation than to plant new trees. The process is cheaper because it does not involve the buying of seedlings, planting costs, watering, fencing and other forms of maintenance identified by Apuri et al. (2018) as challenges to tree planting in the area. This intervention is a clear recognition of an indigenous cultural adaptive practice supported by World Vision Ghana and ORGIIS.
The agroforestry practice involved growing Acacia albida trees on farms. Acacia albida is a deciduous tree which drops its leaves at the onset of the rainy season. The leaves rapidly decompose to release nutrients at the time when young plants need them most. The tree remains leafless and does not cast enough shade to adversely affect crops grown beneath it. The tree grows naturally in the area, and the project only discourages farmers from felling it. It is widely used in agroforestry projects (Apuri et al., 2018; Garrity et al., 2010).
Discussion
The paper sought to uncover the nature of climate change adaptation initiatives at the local level in Ghana. The results show that a number of initiatives are taking place which have direct and indirect implications for climate change adaptation. These initiatives include integrated soil and land management, natural vegetation management, integrated water resources management, education and awareness raising, improved seed varieties and animal breeds, and alternative livelihood promotion. Key actors behind the initiatives were the District Assembly, government agencies (EPA and WRC), community leaders and environmental NGOs. The initiatives achieved modest successes but also faced challenges. The majority of these initiatives were not implemented with the aim of addressing climate change and can therefore be described as autonomous.
According to the Intergovernmental Panel on Climate Change (IPCC, 2007), autonomous or spontaneous adaptation occurs in response to changes in ecological or natural systems as well as changes in human systems pertaining to market or welfare systems. However, the results of the study indicate that the autonomous initiatives were implemented not for market or welfare considerations but in response to ecological changes in natural systems such as land degradation, desertification, soil infertility and deforestation. This was particularly the case with initiatives from the district assembly, which were mainly responses to environmental concerns and not necessarily to climate change. One reason for this situation is poor knowledge about the differences between climate change and environmental concerns, which has been identified by Atanga et al. (2017). Although some of the environmental concerns are invariably linked to climate change, they cannot be considered climate change initiatives, especially when viewed from the triple-imagery conceptualisation of climate change in a vulnerability context. Ideally, initiatives for addressing climate change should be informed by analysis of exposure, sensitivity and adaptive capacity, as the Intergovernmental Panel on Climate Change (IPCC, 2007), Levina and Tirpak (2006) and Fellmann (2012) have stated. This apparent misconception has resulted in a misalignment of the adaptation measures pursued in the district. For instance, while the district suffers from high temperatures, erratic rainfall patterns, flooding, windstorms and droughts (Apuri et al., 2018; Atanga, 2014; KNWDA, 2018; Yaro, 2004), the proposed strategies for addressing these concerns were mainly tree planting and buffer zone protection of water resources. This is a clear case of untargeted intervention, which has the tendency to create adaptation gaps, widen poverty and lead to maladaptation, similar to what File et al. (2023) have observed among farming communities in northern Ghana. Globally, evidence suggests that the climate change situation has reached an unprecedented level and that there is a need for urgent action to foster adaptation across levels and sectors (IPCC, 2023). Additionally, there appears to be consensus among scientists that sustainable adaptation will greatly depend on the ability of planning regimes to respond to the growing need for adaptation across scales. It is also widely argued that autonomous adaptation measures, business as usual, and a fire-fighting approach to adaptation will produce disastrous consequences in poor communities in a worsening climatic situation. Unfortunately, the results of the study suggest that experimental measures that potentially support adaptation to climate change are nascent at the local level in Ghana, remain largely autonomous, and that the process of formalisation is in its infancy. Drawing from this finding, adaptation planning and implementation are not flourishing across many sectors and regions, as the Intergovernmental Panel on Climate Change (IPCC, 2023) has posited.
Similarly, the Intergovernmental Panel on Climate Change (IPCC, 2014) reports that adaptation planning is transitioning from a phase of awareness to the formulation and implementation of actual strategies and plans. In its 2022 report, the IPCC states that over 170 countries and many cities across the world have mainstreamed climate change adaptation into their policies and planning processes (Intergovernmental Panel on Climate Change [IPCC], 2022). The study results support the view that adaptation planning is ongoing at the local level as local authorities make efforts to incorporate climate change into local plans as a cross-cutting issue. However, the study reveals yawning gaps in knowledge of climate change among local authorities and communities. Although sensitization programmes aimed at improving local knowledge and understanding of climate change were still being carried out by state and non-state actors, critical gaps in the awareness and knowledge of climate change as a concept in the local context persist. This was reflected in the conception of climate change as an age-old environmental issue such as desertification and not as a new phenomenon needing urgent action. Agrawal (2008), the African Ministerial Conference on the Environment (AMCEN, 2010), Mendelsohn (2000) and the Intergovernmental Panel on Climate Change (IPCC, 2023) have observed similar knowledge gaps about climate change. There is therefore a need to translate the concept of climate change into local contexts for a holistic conceptualisation of the concept, in order to appropriately link local initiatives to national and global goals. In this light, the United Nations Framework Convention on Climate Change ([UNFCCC], 2023a) has advocated a two-way linkage between local and global actions for sustainable adaptation to climate change.
The results also highlight coordination of climate change adaptation initiatives and collaboration among key actors as necessary conditions for sustainable adaptation. Good coordination of climate action is important for building synergies and sharing experience for successful adaptation planning and implementation. Yet the findings show that there was little coordination of adaptation initiatives in the district and little collaboration between and among the actors involved. For instance, initiatives from NGOs and some decentralised departments and agencies were neither captured in the district plans nor coordinated by the district assembly. This finding is consistent with the IPCC's assertion that many of the initiatives aimed at addressing climate change are fragmented and incremental (IPCC, 2023). Poor coordination and collaboration have the tendency to create duplication of interventions and to waste scarce resources at an urgent time of need. At the community level, however, there was evidence of the active involvement of key stakeholders, particularly chiefs. The participation of chiefs and community members in some of the initiatives holds prospects for fostering the sustainability of the initiatives. Overall, initiatives implemented by NGOs held the greater promise of sustainability because of their ability to promote community participation and ownership. According to Qader (2023), leveraging the support of local actors for adaptation initiatives is crucial for the success of adaptation actions. Similarly, the need to make adaptation measures context-specific through the engagement of local stakeholders further exemplifies the role of local actors in climate interventions, as Agrawal (2008), the African Ministerial Conference on the Environment (AMCEN, 2010), Mendelsohn (2000) and the Intergovernmental Panel on Climate Change (IPCC, 2023) have stressed.
The findings also revealed gaps in access to critical resources required for adaptation, such as farm inputs. The results show that farmers lacked access to the improved seed varieties and animal breeds needed for adaptation to climate change. Bruins (2009) as well as Amole and Ayantunde (2019) have stressed that food security in a changing climate is invariably a seed and breed security issue. Consequently, access to, and utilisation of, improved seeds are vital for adaptation to climate change among farm households. Unfortunately, Hampton et al. (2016), Thompson and Gyatso (2020) and Mbatia (2022) have shown that sub-Saharan Africa still lags behind in access to and use of improved seeds for adaptation, despite the global growth of the seed industry. In Ghana, the passage of the Plant Breeders' Bill in 2013 was intended to facilitate the proliferation of improved seeds in the country, as noted by Poku et al. (2018) and Quarshie et al. (2021). However, institutional limitations, financial constraints and poor knowledge of farmers' needs still militate against farmers' access to improved seeds in Ghana (Quarshie et al., 2021). These bottlenecks need to be addressed within an integrated framework on climate-smart agriculture to engender community resilience and sustainable adaptation.
In resource-poor contexts, measures that increase adaptive capacity are crucial for building resilience and promoting adaptation to climate change impacts. Gallopin (2006), Fellmann (2012) and Birkmanna et al. (2022) have emphasised the need to increase adaptive capacity as an overriding element in reducing vulnerability because of its ability to mitigate the effects of exposure and sensitivity. In this regard, the alternative livelihood projects promoted by international agencies and environmental NGOs are highly relevant for promoting community resilience and adaptation. Even more gratifying is that these alternative livelihood schemes were targeted at empowering women, who are disproportionately vulnerable to climate change impacts. Adger et al. (2007), Dessai and van der Sluijs (2007) and Beltrán-Tolosa et al. (2022) have all argued in favour of livelihood diversification and alternative livelihoods as means of facilitating climate change resilience.
Conclusion and recommendation
The study sought to uncover the nature of climate change adaptation initiatives, the actors involved and the successes and challenges for achieving sustainable outcomes through local-level interventions in Ghana. The findings suggest that initiatives for climate change adaptation at the local level in Ghana are nascent. These initiatives cut across diverse areas including, inter alia, integrated water resources management, education and sensitization, alternative livelihoods, improved seed varieties and animal breeds, integrated soil and land management, and farmer-managed natural vegetation regeneration. The initiatives were implemented by diverse stakeholders such as the District Assembly, the Environmental Protection Agency, the Water Resources Commission, environmental NGOs, community leaders and international development agencies. Coordination among these stakeholders was poor, as the District Assembly failed to integrate and monitor the implementation of initiatives such as the farmer-managed natural vegetation regeneration and the agroforestry projects. The rationale behind initiatives such as integrated soil and land management and farmer-managed natural vegetation regeneration was to respond to the menace of land degradation, but with direct benefits in terms of adaptation to climate change. Consequently, the initiatives identified in the study were largely autonomous adaptations, implemented by diverse stakeholders with varied motives and objectives other than climate change.
The findings further revealed that adaptation strategies initiated by NGOs and implemented by the local communities have better prospects of being sustainable than those under the supervision of the district assembly. The paper recommends that further efforts be made to integrate climate change adaptation initiatives into local-level planning for proper targeting, coordination, collaboration and sustainable adaptation.
The Kassena-Nankana West District was chosen for the study. The district lies approximately between latitudes 10.74° and 11.03° North and longitudes 0.88° and 01.53° West in the Upper East Region of Ghana (Kassena-Nankana West District Assembly [KNWDA], 2018). It shares borders with Burkina Faso to the north, Bolgatanga Municipality and Bongo District to the east, Sissala East, Builsa North and South Districts to the west, and Kassena-Nankana Municipality to the south (KNWDA, 2018; see Figure 3). The district has a total land area of 1,004 sq. km, is predominantly rural and has 134 major settlements. The area lies about 1000 metres above sea level and forms part of the Sudano-Sahelian climatic zone, which features a relatively long dry season and a short wet season (KNWDA, 2018).
Figure 2. Conceptual framework for climate change adaptation. Author's construct.
Figure 6. Alternative livelihoods projects: beekeeping project in Nakong.
Impact of organised programs on colorectal cancer screening
Purpose Colorectal cancer (CRC) screening has been shown to decrease CRC mortality. Organised mass screening programs are being implemented in France. Their perception by the general population and by general practitioners is not well known. Methods Two nationwide observational telephone surveys were conducted in early 2005: the first among a representative sample of subjects living in France and aged between 50 and 74 years, covering geographical departments both with and without implemented screening services, and the second among general practitioners (GPs). Descriptive analyses and multiple logistic regression were carried out. Results Twenty-five percent of the persons (N = 1509) reported having undergone at least one CRC screening; 18% of the 600 interviewed GPs reported systematically recommending a screening test for CRC to their patients aged 50-74 years. The odds ratio (OR) of having undergone a screening test using FOBT was 3.91 (95% CI: 2.49-6.16) for those living in organised departments (referent group: living in departments without organised screening), almost twice as high as the impact of educational level (OR = 2.03; 95% CI: 1.19-3.47). Conclusion CRC screening is improved in geographical departments where it is organised by health authorities. In France, organised screening programs decrease inequalities in CRC screening.
Background
Evidence of the efficacy of screening for colorectal cancer (CRC), in terms of both reduced mortality and reduced incidence through the removal of adenomatous polyps, led both the U.S. Preventive Services Task Force [1] and the Advisory Committee on Cancer Prevention in the European Union [2] to recommend mass screening. Organised colorectal cancer screening is increasing at different regional and national levels [3]. In 1998 the French National Consensus Conference on Colorectal Cancer distinguished three levels of risk (moderate, high or very high) and advocated the use of Hemoccult II for mass screening of subjects with moderate risk [4]. Based upon academic initiatives, early studies have been carried out in 3 French departments since 1998 or earlier [5]. Later on, the French national cancer plan focused on screening interventions, including CRC, and from 2003 onwards regional organised screening programs were set up within a national plan with the objective of nationwide coverage by the end of 2007 [6]. In these programs, a biennial faecal occult blood test (FOBT) is first offered by GPs, free of charge, to all subjects aged 50 to 74 years. Over a 4 to 6 month period, the test is mailed to non-participants, followed if necessary by a reminder letter. The ongoing progressive implementation of colorectal cancer screening in France affords a unique opportunity to look at differences in compliance, knowledge of the population and physicians' attitudes between areas with or without organised screening programs.
The EDIFICE nationwide survey was carried out in early 2005 to provide a snapshot of cancer screening procedures in France in 4 selected cancer indications, including CRC. Results of this survey for CRC screening are presented hereunder.
Framework
French administration (including health administration) is divided into 20 "Regions" (equivalent to provinces in Canada or Länder in Germany, but with less autonomy than states in the USA) and 95 "Departments". The mean number of inhabitants is 3.1 million for Regions and 650,000 for Departments.
When organised, disease screening is currently carried out at the departmental level after a decision at the national level. Once a decision is made about which services to offer and to whom (decision and funding at the national level), the local health administration submits a request to the Health Ministry for authorization to start the program, once it fulfils all the specifications described by the Ministry. For colorectal cancer the specifications are mainly the following: training of GPs, an information letter to everyone aged 50-74 years affiliated with the National Health Insurance System (almost every person living in France), no more than one centre per department to analyse FOBTs, the use of Hemoccult, and a description of the criteria for not undergoing FOBT (such as a familial history of colorectal cancer).
There is therefore a single national model for organised screening, but local differences in when the programs started.
General Population survey
A nationwide observational survey (opinion poll) was carried out by telephone from January 18 to February 2, 2005 among a representative sample of subjects living in France and aged 40-74 years. Representativeness of the survey sample for gender, age, profession and double stratification by geographical area and community size, as compared to the French general population, was ensured by the use of the method of quotas [7], based on the statistics of the French Employment Survey conducted in 2002 by the French National Institute for Statistics and Economic Studies (INSEE). The 170-item survey questionnaire was administered by trained and independent interviewers of TNS-Healthcare SOFRES using the Computer-Assisted Telephone Interview (CATI) technique. Telephone interviews lasted 25 minutes on average. On account of the size of the questionnaire, questions concerning the four cancers studied (breast, colorectal, prostate and lung cancer) were rotated during the successive telephone interviews. The survey questionnaire collected information about subjects' socio-demographic characteristics (gender, age, residence area, community size), attitude and behaviour regarding cancer screening (in general and for the four organs concerned), actual experience of cancer screening, and attitude as regards personal health (self-medication, perceptions of vaccination, medical consultation during the past year, tobacco and alcohol consumption). The questionnaire distinguished tests performed for screening purposes from those performed following symptoms. A main sample of 1,509 subjects aged between 40 and 74 years was interviewed. An additional sample of 100 subjects aged between 50 and 74 years (the recommended age interval for the screening of CRC) was also interviewed in order to obtain a representative number of subjects living in the 22 French departments involved in organised CRC screening programs. Computerised weighting [8] of the whole sample of 1,609 subjects allowed the sample to be adjusted to the proportion of all subjects living in the 22 departments involved in organised CRC screening programs. Subjects with a personal history of cancer (N = 105) were excluded from the analysis because actual experience of cancer might affect cancer screening perceptions. Therefore, the whole subject sample analysed comprised 1,504 individuals aged between 40 and 74 years, among whom 970 subjects of both genders aged 50-74 years were interviewed for CRC screening. Precision of results for this sample was ± 3.2% with a 95% confidence interval (CI).
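To illustrate the kind of adjustment involved, the sketch below shows a simple post-stratification weighting on a single variable (residence in an organised versus non-organised department). It is not the actual EDIFICE weighting procedure [8], which used several variables; all counts in the example are hypothetical.

```python
# Minimal sketch of post-stratification weighting on a single variable
# (living in a department with an organised CRC screening program).
# The sample counts below are hypothetical; they only illustrate how the
# combined sample (main + additional) can be re-weighted so that the share
# of subjects from the 22 organised departments matches its 30% population share.

def poststratification_weights(sample_counts, population_shares):
    """Return one weight per stratum so that weighted sample shares
    equal the target population shares."""
    n_total = sum(sample_counts.values())
    return {
        stratum: population_shares[stratum] / (n / n_total)
        for stratum, n in sample_counts.items()
    }

sample_counts = {"organised": 550, "not_organised": 1059}       # hypothetical
population_shares = {"organised": 0.30, "not_organised": 0.70}

weights = poststratification_weights(sample_counts, population_shares)
print(weights)
# Each subject would then be counted with the weight of his or her stratum
# in all percentage and odds-ratio estimates.
```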
Survey among General Practitioners
The survey (opinion poll) was carried out by telephone from January 31 to February 18, 2005 among a representative sample of French general practitioners (GPs). Representativeness of the survey sample for age and region of residence (five regions), as compared to the national population of GPs, was ensured by the use of the method of quotas [7]. The 45-item survey questionnaire collected information about GPs' socio-demographic characteristics (gender, age, department of France) and their medical practice regarding cancer screening (breast, colorectal, prostate and lung cancer), especially perceptions of screening methods, level of screening counselling, screening tests recommended, perceived obstacles to screening, and persons' expectations about cancer screening according to GPs. Six hundred GPs were interviewed in order to obtain a sufficient number of GPs practicing in the departments of France involved in planned screening of colorectal cancer (N = 178; 30%).
Statistical analysis
The departments were divided into two categories according to the existence or absence of an organised colorectal cancer screening program. Among the "organised" departments (N = 22), two groups were defined according to the timing of the initial program implementation:
- about 12 months before the survey: ten "second wave" departments (Allier, Ardennes, Essonne, Finistère, Marne, Mayenne, Moselle, Orne, Puy-de-Dôme, Pyrénées-Orientales), which started in 2004.
Data analysis was essentially descriptive. Quantitative data were described by means and standard deviations (SD) and categorical data by the numbers in each category and the corresponding percentages. Statistical comparisons were carried out with Student's t test for quantitative data, and with the Z test and the Chi-square test for the comparison of percentages and numbers, respectively, in the case of categorical data. Differences were considered statistically significant when the probability value was less than 0.05 (bilateral test). Multivariate logistic regression analyses were expressed in terms of odds ratios (OR) with 95% CI and were performed using the SAS® software, version 8.2 (proc FREQ and proc LOGISTIC procedures).
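As an illustration of this kind of analysis, the following sketch fits a multivariate logistic regression and converts the coefficients into odds ratios with 95% confidence intervals. It is not the original SAS code; the data file and variable names are hypothetical.

```python
# Illustrative sketch of a multivariate logistic regression yielding odds
# ratios (OR) and 95% CIs, analogous to proc LOGISTIC. The data file and
# variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("edifice_crc.csv")  # hypothetical analysis file

predictors = ["organised_department", "education_high", "lives_alone",
              "gastro_visit_12m", "motivated_by_screening"]
X = sm.add_constant(df[predictors])
y = df["screened"]  # 1 = reported at least one CRC screening test

fit = sm.Logit(y, X).fit()
ci = fit.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
}).drop("const")
print(or_table)
```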
Results
At the time of initiation of the EDIFICE Survey, organised screening programs proposed FOBT in 22 of 95 metropolitan departments, corresponding to an estimated 18,230,000 inhabitants in 2003 or 30% of the national population (or 4,650,000 subjects aged 50-74 years, corresponding also to 30% of the national population in the same age range).
Subjects' characteristics
The median age of the 970 interviewed subjects was 61 years, 52% were female, 42% lived in towns with > 100,000 inhabitants, 28% lived alone and 89% had visited a physician within the last 12 months.
Screening tests (Tables 1, 2 and 3)
Two hundred and forty subjects (25%) reported having undergone at least one screening test for CRC. Among them, 76% declared having undergone the test on individual initiative, compared to 24% within an organised screening program. The majority (53%) declared having undergone endoscopy alone (no distinction was available between colonoscopy and sigmoidoscopy), while 46% reported having undergone FOBT ± endoscopy. This trend was reversed in the 22 pilot departments (FOBT 65%; endoscopy alone 35%). Subjects in the extreme age categories (50-54 and 70-74 years) declared having undergone significantly fewer screening tests than the other categories (Table 1). Subjects living in the 22 departments with organised screening programs reported having undergone significantly more screening tests than others (OR = 1.99; 95% CI: 1.47-2.69; p < 0.01), including 52% of them within screening programs (Table 1). In these departments, the percentage of subjects declaring having undergone a screening test increased significantly with the age of the local program, from 26% in the most recently implemented departments to 37% in the first-wave departments (p = 0.03; OR = 1.76; 95% CI: 1.06-2.93).
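For the univariate comparisons reported above, the odds ratio and its Wald 95% confidence interval can be computed directly from a 2 × 2 table, as sketched below; the cell counts used are hypothetical, since the survey tables report percentages rather than raw counts.

```python
# Minimal sketch of a univariate odds ratio with a Wald 95% CI from a
# 2 x 2 table (screened vs. not screened, organised vs. non-organised
# department). All counts are hypothetical.
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """a, b: screened / not screened in the exposed group;
    c, d: screened / not screened in the reference group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

print(odds_ratio_wald_ci(a=120, b=240, c=120, d=490))  # hypothetical counts
```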
Factors influencing screening test performance
The characteristics of screened and unscreened subjects were compared. In the univariate analysis (Table 2), significantly more unscreened subjects lived alone and lived outside the 22 departments with organised screening. Significantly fewer of them had visited a gastroenterologist within the past 12 months, were concerned/motivated by screening, were afraid of CRC, or had cancer or CRC cases among their relatives or friends. Lastly, unscreened subjects had significantly lower incomes than screened subjects. After multivariate logistic regression analysis, eight independent variables influenced screening (six positively and two negatively), irrespective of the test used (FOBT and/or endoscopy) (Table 3). The strongest predictive variable (OR: 5.55; 95% CI: 3.02-10.19) was having visited a gastroenterologist within the last 12 months. However, when only screening with FOBT was taken into account, only four positive variables remained correlated with screening. Living in one of the 22 departments with organised screening programs was the strongest predictive variable, followed by motivation/concern for CRC screening and educational level, while the influence of gastroenterologists disappeared (Table 3).
Among the subjects declaring having been screened in the 22 pilot departments, 52% did so within an organised program and 46% based upon individual initiative.
Almost all subjects (93%) who participated in mass screening programs underwent FOBT, compared to 34% of subjects screened based upon individual initiative (p < 0.01), whereas only 26% underwent endoscopy, compared to 74% (p < 0.01), respectively. Sixty-four percent of subjects who participated in mass screening programs were invited to do so through a mailing campaign from the French Health Care System ("Social Security"). Subjects in screening programs were significantly older at the time of their first screening (57.8 versus 52.3 years; p < 0.01).
Perception of CRC screening by population
Fifty-six percent of interviewed subjects gave an adequate definition of cancer screening and 88% knew that screening increases the likelihood of CRC cure. In the 22 pilot departments, 86% of interviewed subjects felt the invitation by mail was motivating and only 6% found it worrying. Individuals who did not undergo screening tests were invited to state the reason from a limited pre-established list. Few differences appeared between the two categories of departments. "Feeling of not being concerned" was lower, although not significantly, in organised departments (33% versus 38% of subjects; OR = 0.83; 95% CI: 0.59-1.16); "having no symptoms" was also not significantly lower in organised departments (17% versus 21%; OR = 0.76; 95% CI: 0.50-1.14); and "fear of the test and/or its results" was higher in organised departments (11% versus 6%; OR = 1.97; 95% CI: 1.11-3.49).
Screening attitudes and perceptions of GPs
Eighteen percent of the 600 interviewed GPs reported systematically recommending a screening test for CRC to their patients aged 50-74 years, while the others declared that they "often" (48%), "seldom" (28%) or "never" (6%) recommend doing so. The proportion of GPs who reported systematically recommending a test was higher in the 22 pilot departments than in the other departments (29% versus 13%; p < 0.01) and increased, although not significantly, with the age of the local program (26% in the "second-wave" departments and 30% in the first-wave departments; p = NS; OR = 1.20; 95% CI: 0.58-2.51). The main reasons given by GPs for not systematically recommending screening tests (Table 4) were the belief that screening should be restricted to subjects at risk (28%) and the feeling that they were not associated with the general program (19%). Conversely, GPs considered that patients' reluctance to perform screening tests was related to fear of the results (16%), a feeling of not being concerned (11%), non-recommendation by the GP (9%) or lack of information (8%).
Discussion
The EDIFICE nationwide survey was carried out to provide a snapshot of cancer screening procedures in France in 4 selected cancer indications, including CRC. This survey relies on self-reported data and does not report the actual incidence of screening tests for CRC. Though the questionnaire was discriminative for true screening, it is likely that some of the reported "screening tests" were actually diagnostic tests following discrete symptoms, especially for tests performed on a physician's prescription. Self-report accuracy of screening tests may be test-dependent, and FOBT has been shown to be under-reported [9] but also over-reported [10]. Nevertheless, self-reported screening behaviour is generally fairly accurate [11,12], and many publications rely upon this. However, this survey does have limitations inherent to its cross-sectional design and a limited generalizability due to the economic and organisational French background.
The first observation of this survey is the low rate (25%) of reported screening for CRC in France in the target population aged 50-74 years, in contrast with a high level of scientific evidence and official recommendation [1,2]. This low rate is close to the rates observed in other Western countries [13,14], but significantly lower than the figures in recent publications. For instance, in 2004 the rate of US adults above 50 years who reported receiving either a FOBT within one year or an endoscopic examination within 10 years was 57.1% [15].
In contrast to other developed countries [13], the financing of screening tests, whether they are performed individually or within an organised program, is not an issue, since they are all paid for by the French Health Care System.
The second point is the influence of locally organised screening programs on the screening attitudes of both the population and GPs. The rate of subjects reporting having been screened (either individually or through screening programs), the rate of GPs systematically recommending CRC screening, the proportion of screened subjects undergoing FOBT compared to endoscopy, as well as that of subjects having performed a test within the last two years, were all increased in departments where an organised screening program exists. Organised cancer screening is indeed assumed to be more effective than opportunistic screening [16]. Furthermore, it has been shown to improve guideline compliance, especially with regard to the adequacy of the examinations that should follow a positive FOBT [16][17][18], and is therefore likely to protect subjects from the risk of poor-quality screening practices and to guarantee screening cost-effectiveness [16].
The most important finding of our survey is that the existence of an organised local screening program is the strongest independent predictive factor of performing a screening test in the logistic regression analysis (Table 3). When only FOBT is considered, subjects living in the 22 pilot departments were almost 4 times as likely to report undergoing screening tests as those living in other departments, and it is anticipated that this difference will grow over time with program implementation. Furthermore, living in a pilot department is almost twice as predictive as educational level (Table 3). This finding suggests that the implementation of organised screening programs minimizes inequality in CRC screening. When either FOBT or endoscopy is considered, the strongest predictive factor is having consulted a gastroenterologist within the last 12 months. This should be put into perspective with the role of endoscopy in individually-based screening procedures. The fact that having consulted a gastroenterologist is no longer an associated factor when only FOBT is considered minimizes the risk of transposition of cause and effect (visits prompted by FOBT).
Within the departments where organised programs are implemented, the declared rate of screening tests (37 versus 26%), the reported rate of participation in the local program (21 versus 10%) and the reported systematic recommendation by GPs of performing screening tests (30 versus 26%) were higher in the departments in which the local program was implemented first than in those in which it was set up recently. These correlations are likely to be explained by the educational role of organised programs and the solicitation of the population and physicians. The fact that a decreasing proportion of subjects wrongly assume that having no symptoms is a reason for not performing a screening test supports this assumption. In a US survey, "lack of awareness" and "not recommended by a doctor" were the most common barriers to CRC screening and are similarly decreasing with time [19]. Moreover, the participation rate in screening in the "scout" departments is close to the objective of 50% participation set in the French Cancer Plan established by the French Health Ministry, and may be the maximum achievable rate with such programs. Nevertheless, these "scout" departments, in which mass screening was initiated on academic initiative, may be more highly implicated in screening and cannot necessarily be extrapolated to other departments following national directives.
The role of GPs is important for individual screening practices [19], particularly for long-term compliance [20]. In France in 2005, only 18% of them systematically recommended CRC screening tests and an additional 48% "often" recommended them. This compares with 59% of GPs recommending tests in a Canadian survey [21]. Surprisingly, the level of knowledge about CRC screening of the general population and of GPs seems correlated and similarly influenced by organised local programs. Other as yet undetermined disease- or test-related factors may negatively influence CRC screening. It has been shown, for instance, that in a cohort of well-informed women, fewer undergo FOBT than mammography for cancer screening [22].
Conclusion
The rate of CRC screening testing is still low in France, but is expected to increase steadily with the nationwide implementation of mass screening programs, which are likely to be the main factor influencing subjects to undergo, and GPs to systematically recommend, screening tests. Nevertheless, the rate of screening test performance in the areas with the oldest organised programs (> 6 years), about 50%, could be the highest rate achievable with time using this kind of organisation. This could still be considered insufficient. Further public health research is warranted to clarify the remaining barriers and to improve the methods of informing the population and GPs [23].
manuscript. JYB contributed to the design of the survey and to the data analysis, attended almost all working sessions and reviewed the manuscript. YC contributed to the design of the survey and to the data analysis, attended all working sessions and reviewed the manuscript. SD contributed to the design of the survey and to the data analysis, attended all working sessions and reviewed the manuscript. MN contributed to the design of the survey and to the data analysis, attended all working sessions and reviewed the manuscript. XP contributed to the design of the survey and to the data analysis, attended all working sessions and reviewed the manuscript. OR contributed to the design of the survey and to the data analysis, attended almost all working sessions and reviewed the manuscript. DS contributed to the design of the survey and to the data analysis, attended almost all working sessions and reviewed the manuscript. CR contributed to the design of the survey, carried out the coordination of the team, attended all working sessions and reviewed the manuscript. JFM contributed to the design of the survey and to the data analysis, attended all working sessions and reviewed the manuscript. All authors read and approved the final manuscript.
PACAP Promotes Matrix-Driven Adhesion of Cultured Adult Murine Neural Progenitors
New neurons are born throughout the life of mammals in germinal zones of the brain known as neurogenic niches: the subventricular zone of the lateral ventricles and the subgranular zone of the dentate gyrus of the hippocampus. These niches contain a subpopulation of cells known as adult neural progenitor cells (aNPCs), which self-renew and give rise to new neurons and glia. aNPCs are regulated by many factors present in the niche, including the extracellular matrix (ECM). We show that the neuropeptide PACAP (pituitary adenylate cyclase-activating polypeptide) affects subventricular zone-derived aNPCs by increasing their surface adhesion. Gene array and reconstitution assays indicate that this effect can be attributed to the regulation of ECM components and ECM-modifying enzymes in aNPCs by PACAP. Our work suggests that PACAP regulates a bidirectional interaction between the aNPCs and their niche: PACAP modifies ECM production and remodeling, in turn the ECM regulates progenitor cell adherence. We speculate that PACAP may in this manner help restrict adult neural progenitors to the stem cell niche in vivo, with potential significance for aNPC function in physiological and pathological states.
Introduction
The adult mammalian brain contains a population of quiescent cells known as adult neural progenitor cells (aNPC), which can give rise to new neurons and glia throughout the lifetime of the individual (Altman, 1962). These cells populate two areas of the brain: the subventricular zone (SVZ) of the lateral ventricles and the subgranular zone (SGZ) of the hippocampus. The SVZ is located between the striatum and the ependymal cell layer that lines the lateral ventricles (Doetsch et al., 1997). It has been firmly established in rodents that the SVZ aNPCs give rise to transit amplifying cells that proliferate and migrate into the olfactory bulb. They contribute to interneuron replacement in the granule cell layer (Lois and Alvarez-Buylla, 1994) and to the maintenance of cellular circuitry of the olfactory bulb (Cummings et al., 2014). The function of SGZ progenitors is less clearly defined. They give rise to mature excitatory neurons in the granule layer of the hippocampus and have been implicated in some forms of learning and memory (reviewed in Deng et al., 2010). In addition to their roles in physiological brain plasticity, aNPCs have been implicated in neural repair following injury and in neurodegenerative conditions. Thus, high hopes have been placed in aNPCs as a potential source of new neurons for cell replacement therapies of neurodegenerative diseases and brain injury (recently reviewed in López-Bendito and Arlotta, 2012; Bellenchi et al., 2013; Miller and Gomez-Nicola, 2014; Ruan et al., 2014).
aNPCs are maintained in a so-called neurogenic niche, where specialized components of the extracellular matrix (ECM) and soluble factors secreted by the stroma contribute to the maintenance of "stemness," that is, self-renewal and multipotency, by these progenitors (Kazanis et al., 2007; Ninkovic and Götz, 2007; Kazanis, 2009; Mercier, 2016). How these different signals are integrated to contribute to the stem cell phenotype of aNPCs is, however, poorly understood.
Pituitary adenylate cyclase-activating polypeptide (PACAP) is a secreted peptide with pleiotropic functions in the central nervous system and beyond (reviewed in Moody et al., 2011; Nakamachi et al., 2011; Shen et al., 2013; Waschek, 2013). Specifically, it has been shown to regulate the proliferation and survival of neuroblasts in the embryonic and postnatal brain (Vaudry et al., 1999; Nicot and DiCicco-Bloom, 2001; Suh et al., 2001; Nicot et al., 2002; Niewiadomski et al., 2013). Moreover, PACAP regulates the differentiation of neural progenitors into different neuronal and glial lineages (Lee et al., 2001; Vallejo and Vallejo, 2002; Nishimoto et al., 2007; Watanabe et al., 2007; Ohtsuka et al., 2008; Hirose et al., 2011). It exerts its actions on the cells through one of three G-protein coupled receptors: PAC1, which is specific for PACAP; and VPAC1 and VPAC2, which have an equal affinity for PACAP and a related neuropeptide, VIP (Harmar et al., 1998; Vaudry et al., 2000). PACAP is expressed at the SVZ (Mercer et al., 2004), and aNPCs express PAC1 and VPAC2 receptors (Mercer et al., 2004; Ohta et al., 2006; Scharf et al., 2008). PACAP protects aNPCs from a variety of pro-apoptotic insults (Mansouri et al., 2012, 2016, 2017) and has been shown to promote the proliferation and prevent differentiation of aNPCs cultured in the absence of growth factors, both when the cells were maintained as clonally derived neurospheres (Mercer et al., 2004; Sievertzon et al., 2005) and when they were cultured as a monolayer on a poly-lysine-coated surface (Scharf et al., 2008). Moreover, PACAP promotes the proliferation of SVZ and SGZ cells in vivo (Mercer et al., 2004; Ohta et al., 2006). The proliferative effect of PACAP is synergistic with epidermal growth factor (EGF) and is dependent on the phospholipase C-protein kinase C pathway (Mercer et al., 2004). Notably, previous studies have examined the effects of PACAP on aNPCs in cultures lacking other growth factors known to be essential for the maintenance of their stem cell identity. These factors, which are likely to be present in addition to PACAP in the neurogenic niches, include ligands of epidermal growth factor (EGF) receptors (transforming growth factor α [TGFα] or EGF) and fibroblast growth factor (FGF) receptors (such as basic FGF [bFGF]; Enwere, 2004; Ghashghaei et al., 2007; Zhao et al., 2007; Deleyrolle and Reynolds, 2009). Previous studies of the effects of PACAP on aNPCs have focused on growth factor-independent functions of PACAP (Mercer et al., 2004; Sievertzon et al., 2005; Scharf et al., 2008). To mimic the composition of signals that the aNPCs may be exposed to in the stem cell niche in vivo, that is, under non-differentiation conditions, we cultured them in the presence of EGF and bFGF. We show here that under such experimental conditions, treatment of the cells with PACAP induced their attachment to rigid surfaces and that this effect is mediated by secreted components of the ECM.
Materials and Methods
Isolation and Culture of SVZ aNPCs
aNPCs were isolated from the SVZs of 7- to 8-week-old male C57Bl/6 mice or PAC1−/− mice (Jamen et al., 2000) as described (Deleyrolle and Reynolds, 2009). Briefly, the mice were sacrificed by pentobarbital injection, and their skulls were opened to expose the brain. The brain was cooled in ice-cold DMEM/F12 supplemented with HEPES. The rostral part of the brain was sectioned coronally on a mouse brain slicer, and the periventricular region was excised using a scalpel blade from two to three 1-mm thick sections, starting from the ventralmost section in which the ventricle was apparent. The periventricular tissue was cut into small pieces using fine scissors and transferred to a conical tube containing ice-cold DMEM/F12 with HEPES. The tissue was transferred to a biological safety cabinet and washed thrice with sterile Hank's buffered salt solution. This solution was replaced with DMEM/F12 supplemented with B-27 (Life Technologies) and gently triturated with a 1-mL pipette tip. Trypsin (Trypsin-EDTA solution, Life Technologies) was added to a final concentration of 1 mg/mL and DNAse I (Worthington) was added to a final concentration of 0.1 mg/mL to dissociate the cells. The tissue was allowed to incubate at 37°C for 30 min with manual agitation after every 10-min interval. The trypsin was inactivated by adding fetal bovine serum to a final concentration of 10%. The tissue was then triturated using two sterile diminishing-bore fire-polished glass Pasteur pipettes and passed through a 40-µm cell strainer (Corning Falcon). The cells were centrifuged at 200 × g for 2 min and resuspended at 10⁴ cells/mL in neurosphere media containing Neurobasal, L-glutamine (0.5 mM), penicillin/streptomycin (1×), B-27 supplement (1×), EGF (10 ng/mL), bFGF (10 ng/mL), and heparin (2 µg/mL). All neurosphere media components were purchased from Life Technologies, except heparin and EGF, which were from Sigma, and bFGF, which was from Peprotech. The cells were placed in non-tissue-culture-treated 25-cm² flasks at 7 × 10⁴ cells per flask (Nunc) for 7 days, and EGF and bFGF were re-added at 5 ng/mL each on Days 3 and 5 of culture. The first neurospheres started appearing after 3 to 5 days and were fully grown by Day 7. For replating, the spheres were centrifuged at 200 × g for 5 min, and the media was replaced with 500 µL Accutase (Life Technologies). The spheres were incubated for 5 min at 37°C, triturated with a 200-µL pipette tip, incubated for 10 more min at 37°C, and triturated with a small-bore fire-polished pipette until very few visible spheres remained. The suspension was centrifuged at 200 × g for 5 min, and the cell pellet was resuspended in neurosphere media. The remaining spheres were removed from the suspension by passing it through a 40-µm cell strainer; the cells were then counted and replated at 10⁴ cells/mL in non-tissue-culture-treated flasks (7 × 10⁴ cells per flask). Experiments were performed on cells from Passages 3 to 6. All animal experiments were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of the University of California Los Angeles (Protocol Number: 93-302, IACUC A3196-01).
Measurement of Cell Attachment
Neurospheres were dissociated in Accutase as described earlier and plated into wells of a six-well cell culture plate (Nunc) at 10^4 cells/mL in 2 mL of neurosphere media per well supplemented with PACAP-38 (referred to as PACAP in the text; Merck Millipore) and other peptides/drugs at the indicated concentrations. For experiments involving forskolin (FSK; Sigma), all wells not treated with FSK were supplemented with 0.25% DMSO to control for the DMSO used as FSK vehicle. The cells were incubated at 37°C for 1 week and were supplemented with EGF, bFGF, PACAP, VIP, PHI, and FSK at half the initial concentration on Days 3 and 5. After 1 week, the neurosphere suspension was gently transferred to an Eppendorf tube, and the cells were centrifuged and dissociated using trypsin. The remaining attached cells were trypsinized in the well and then transferred to an Eppendorf tube. Trypsin was neutralized using 20% fetal bovine serum in DMEM. Trypsinized cells were triturated to obtain a single-cell suspension and the cells in each fraction (suspension vs. attached) were counted.
aNPC Differentiation

aNPCs were cultured for 7 days in complete neurosphere media until well-developed neurospheres formed. For anti-nestin staining, neurospheres were dissociated with Accutase and plated on poly-L-ornithine- and fibronectin-coated glass coverslips in 24-well plates in neurosphere media containing EGF and bFGF. PACAP was added to selected wells and the cells were cultured for 7 days with a media change on Day 4. For all other immunofluorescence procedures, whole neurospheres (approximately 5 × 10^4 cells per well) were plated in neurosphere media on poly-L-lysine- and laminin-coated coverslips in 24-well plates. The cells were allowed to adhere to the coverslips for 2 days in the presence or absence of 100 nM PACAP. Afterwards, neurosphere media was replaced with differentiation media containing Neurobasal, L-glutamine (0.5 mM), penicillin/streptomycin (1×), B-27 supplement (1×), heparin (2 µg/mL), and 1% fetal bovine serum without PACAP. The cells were allowed to differentiate for 5 days with half of the media changed every other day.
Live/Dead Cell Assay

aNPCs were dissociated with Accutase and plated on poly-L-lysine- and laminin-coated 6-well plates at 10^5 cells per well. They were allowed to grow in the presence or absence of PACAP for 5 days, then detached from the wells using Accutase for 3 min at 37°C. Cell dissociation was stopped using Neurobasal containing 1× B-27 supplement and the cells were centrifuged at 200 × g for 5 min and resuspended in PBS. The cells were incubated for 5 min in propidium iodide solution, and propidium iodide uptake was measured by flow cytometry on the BD LSRFortessa instrument (Becton Dickinson) using the 488 nm excitation laser and a 610/20 nm band pass emission filter. Results were analyzed using FACSDiva 6.2 software.
Cell Cycle Assay

aNPCs were dissociated with Accutase and plated on poly-L-lysine- and laminin-coated 6-well plates at 10^5 cells per well. They were allowed to grow in the presence or absence of PACAP for 5 days, then detached from the wells using Accutase for 7 min at 37°C. Cell dissociation was stopped using Neurobasal containing 1× B-27 supplement and the cells were centrifuged at 200 × g for 5 min. The cells were resuspended in PBS and then centrifuged at 2500 × g for 5 min. The cells were resuspended in 300 µL of PBS, and 700 µL of −20°C ethanol was added. Ethanol-fixed cells were stored at −20°C. On the day of the assay, the cells were centrifuged at 300 × g for 5 min at 4°C, the supernatant was decanted, and the cells were resuspended in 1 mL of 4°C PBS. The cells were recentrifuged at 300 × g for 5 min and resuspended in 0.5 mL of 4°C PBS. They were incubated in a solution containing saponin, propidium iodide, and RNAse for 30 min, and the DNA content of the cells was measured by flow cytometry on the BD LSRFortessa instrument as described for the live/dead cell assay.
Immunoblot

aNPCs were dissociated with Accutase and plated on poly-L-lysine- and laminin-coated 24-well plates at 5 × 10^4 cells per well. They were allowed to grow in the presence or absence of PACAP for 4 days and then washed with PBS and lysed with RIPA buffer (50 mM Tris pH 7.4, 150 mM NaCl, 2% Igepal CA-630, 0.25% sodium deoxycholate, 1 mM NaF, 1× SigmaFAST protease inhibitor cocktail, and 1 mM DTT). Protein concentration was measured using the BCA assay and equal amounts of protein were loaded onto an SDS-PAGE gel. Following transfer, nitrocellulose membranes were blocked with blocking buffer containing 5% bovine serum albumin in Tris-buffered saline + 0.05% Tween-20 and incubated overnight at 4°C with anti-phospho-protein kinase A (PKA) substrate antibody (Cell Signaling cat. #9624), anti-phospho-PKC substrate antibody (Cell Signaling cat. #2261), or anti-α-tubulin antibody (Sigma cat. T6199) diluted to 1 µg/mL in the blocking buffer, followed by washes and incubation with HRP-conjugated secondary antibodies. Chemiluminescent signal was detected using the Amersham Imager 600RGB and densitometric measurements were performed in ImageJ.
Conditioned Media-Mediated Cell Attachment
PAC1-null aNPC neurosphere cultures were obtained from mice lacking the PAC1 receptor (Jamen et al., 2000), as detailed earlier. Wild-type (WT) mouse-derived neurospheres were treated with the indicated doses of PACAP for 7 days, and the conditioned media was collected from the culture and sterile-filtered to remove any remaining WT cells. The WT-conditioned media was placed in wells of a six-well plate for 5 days, and then removed. The wells were rinsed three times with sterile PBS. PAC1-null aNPC neurospheres were dissociated into a single-cell suspension using Accutase, and the cells were placed in conditioned media-treated wells in neurosphere media without PACAP. The cells were allowed to grow for 3 days, and their attachment to the well surface was examined using phase-contrast microscopy.
DNA Microarray Experiments
RNA was isolated using TRIzol from all cells (both floating and attached) treated for 24 hr or 96 hr with either vehicle (control) or PACAP (10 nM). RNA from five to six independent samples per group was pooled and cleaned using RNeasy spin columns (Qiagen), yielding approximately 10 µg RNA per treatment. The samples were submitted to the UCLA DNA Microarray Facility for hybridization with Affymetrix GeneChip Mouse Genome 430 2.0 arrays. The .CEL files obtained from each array scan were analyzed using the Affymetrix Expression Console suite (build 1.4.1.46) with the Robust Multichip Analysis algorithm to subtract background and normalize data. The normalized log2 expression values obtained for control samples at a given time point were subtracted from the log2 values of PACAP-treated samples from the same time point to obtain log2(PACAP/control) ratio values for each time point. Log2 ratios of more than 1 (over two-fold increase) or less than −1 (over two-fold decrease) were considered significant. If a gene was represented by more than one probe set on the array, it was considered significantly changed if at least one of the probe sets showed up- or downregulation by more than two-fold. Raw microarray data (.CEL files) and Robust Multichip Analysis results were submitted to Gene Expression Omnibus (GEO; accession number GSE66193).
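The two-fold threshold described above amounts to a simple subtraction of normalized log2 values followed by a cutoff at ±1. The sketch below illustrates that step in Python; the file name and column names are hypothetical placeholders, not the actual array exports, and the sketch assumes the RMA-normalized log2 values have already been produced.

```python
import pandas as pd

# Hypothetical input: RMA-normalized log2 expression values, one row per probe set,
# one column per pooled sample (column names are illustrative only).
expr = pd.read_csv("rma_normalized_log2.csv", index_col="probe_set")

# log2(PACAP/control) ratio at each time point: treated minus control.
log2_ratio_24h = expr["pacap_24h"] - expr["control_24h"]
log2_ratio_96h = expr["pacap_96h"] - expr["control_96h"]

# Probe sets changed more than two-fold (|log2 ratio| > 1) are considered significant.
up_24h = log2_ratio_24h[log2_ratio_24h > 1]
down_24h = log2_ratio_24h[log2_ratio_24h < -1]

print(f"24 hr: {len(up_24h)} probe sets up, {len(down_24h)} down (> 2-fold)")
```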
Real-Time Quantitative Reverse Transcription Polymerase Chain Reaction
aNPCs were dissociated with Accutase and plated on poly-L-lysine- and laminin-coated 24-well plates at 5 × 10^4 cells per well. They were allowed to grow in the presence or absence of PACAP for 4 days, then RNA was isolated using TRIzol. Reverse transcription was performed using the AMV First Strand Synthesis Kit (NEB) with random primers. Real-time quantitative polymerase chain reaction (PCR) was performed on the LightCycler 480 instrument using Roche LightCycler SYBR Green I Master Mix with gene-specific primers. Relative quantification of gene expression was performed using the ΔΔCp method with GAPDH as the housekeeping gene.
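As a minimal sketch of the ΔΔCp (ΔΔCt) calculation with GAPDH as the reference gene, the snippet below computes a fold change from crossing-point values; the Cp numbers are placeholders and the sketch assumes roughly 100% PCR efficiency, which is the usual assumption behind the 2^(−ΔΔCp) formula.

```python
def relative_expression(cp_target_treated, cp_ref_treated,
                        cp_target_control, cp_ref_control):
    """Fold change vs. control by the delta-delta-Cp method."""
    d_treated = cp_target_treated - cp_ref_treated   # normalize target to GAPDH (treated)
    d_control = cp_target_control - cp_ref_control   # normalize target to GAPDH (control)
    ddcp = d_treated - d_control                     # delta-delta Cp
    return 2.0 ** (-ddcp)                            # assumes ~100% amplification efficiency

# Placeholder Cp values for one gene (e.g., Lgals3) in PACAP-treated vs. control cells.
fold_change = relative_expression(cp_target_treated=22.1, cp_ref_treated=18.0,
                                  cp_target_control=24.3, cp_ref_control=18.2)
print(f"Fold change vs. control: {fold_change:.2f}")
```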
Gene Ontology Analysis of DNA Microarray Results
Significantly upregulated or downregulated genes obtained from the DNA microarray experiments were submitted to the DAVID 6.7 online tool for the selection of significantly up-or downregulated functional gene groups (based on Gene Ontology [GO]) and group clusters using the RDAVIDWebService library for R. The GO terms used belonged to three annotation categories: GO_CC_FAT (cellular compartments), GO_BP_FAT (biological process), and GO_MF_FAT (molecular function). The three categories were selected to filter out overly broad GO terms.
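DAVID performs the actual term selection and clustering; as a generic illustration of the over-representation test underlying this kind of GO analysis, the sketch below computes a hypergeometric p value for a single GO term. All counts are hypothetical and this is not the DAVID algorithm itself, only the standard enrichment statistic.

```python
from scipy.stats import hypergeom

# Hypothetical counts for one GO term (e.g., "extracellular matrix"):
N = 20000   # genes represented on the array (background)
K = 350     # background genes annotated with the GO term
n = 163     # genes upregulated by PACAP at 24 hr
k = 18      # upregulated genes annotated with the GO term

# Probability of drawing k or more annotated genes by chance (over-representation).
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"GO term enrichment p = {p_value:.2e}")
```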
Statistical Analysis
Statistical analyses were performed using GraphPad Prism, R, and Microsoft Excel. Significant differences in mean cell numbers among multiple treatments were assessed using analysis of variance, and Tukey's post hoc test was used to determine the statistical significance of pairwise mean differences. A value of p < .05 was considered significant.
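The authors used GraphPad Prism, R, and Excel; purely as an illustration of the same ANOVA-plus-Tukey workflow, the sketch below runs it in Python on made-up attached-cell counts (the treatment names and numbers are placeholders).

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical attached-cell counts (x10^4 cells per well) for three treatments.
control = np.array([1.1, 0.9, 1.3, 1.0])
vip     = np.array([2.0, 2.4, 1.8, 2.2])
pacap   = np.array([4.1, 3.8, 4.5, 4.0])

# One-way ANOVA across the three treatments.
f_stat, p_anova = f_oneway(control, vip, pacap)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey's HSD post hoc test for pairwise mean differences (alpha = 0.05).
values = np.concatenate([control, vip, pacap])
groups = ["control"] * len(control) + ["VIP"] * len(vip) + ["PACAP"] * len(pacap)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```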
PACAP Induces Surface Attachment of aNPCs
To determine the effects of PACAP on aNPCs in the presence of growth factors, we cultured adult mouse-derived neurospheres in media containing EGF and bFGF. Under these conditions, PACAP induced attachment of the neurospheres to the uncoated plastic surface of the dishes in a dose-dependent fashion (Figure 1(a, b)). This phenotype was not associated with cell differentiation because virtually all untreated and PACAP-treated cells expressed the aNPC marker nestin when cultured in neurosphere media in the presence of EGF and bFGF (Figure 1(c)). Consistent with the undifferentiated aNPC phenotype, the attached cells could form secondary and tertiary neurospheres upon dissociation and replating regardless of PACAP addition (Figure 1(d)). A small fraction (<5%) of PACAP-treated but not control cells stained positive for the astrocyte marker GFAP when plated as monolayers in the same growth medium on poly-L-lysine- and laminin-coated coverslips, suggesting that PACAP can promote astroglial differentiation of aNPCs even in the presence of EGF and bFGF (Figure 1(e)). However, even in PACAP-treated wells, GFAP-positive cells were restricted to only some areas, especially those with the highest cell densities. Both the cell cycle analysis and the live/dead cell assay revealed only very modest differences between PACAP-treated and untreated cells, suggesting that PACAP does not greatly affect cell proliferation or death (Figure S1). PACAP-treated cells, like control cells, are able to differentiate into MAP2ab-positive neurons and GFAP-positive astrocytes, which implies that PACAP does not limit the differentiation potential of aNPCs (Figure 1(f)). Nevertheless, astrocytes generated from PACAP-treated aNPCs have a mostly stellate appearance with many thin projections, whereas control astrocytes are flat and epithelioid in shape.
PACAP-Induced Cell Adhesion is Mimicked by VIP and PKA Activation
Because PACAP shares two receptor subtypes (VPAC1 and VPAC2) with another secreted polypeptide, VIP, we wanted to verify whether VIP was also able to induce aNPC attachment to the dish surface in the presence of growth factors. VIP did induce some degree of adhesion, but it was significantly less potent than PACAP (Figure 2(a), (c), and (e)). A different PACAP/VIP receptor ligand, peptide histidine-isoleucine (PHI), showed no effect on aNPC attachment (Figure 2(a)). We tested the expression of the three PACAP receptor types at the mRNA level and found that the PACAP-specific PAC1 receptor was the dominant receptor subtype in aNPCs, with VPAC2 showing lower expression and VPAC1 undetectable. PACAP-mediated aNPC adhesion was also mimicked by FSK, an adenylate cyclase activator, suggesting that this effect is mediated through the adenylate cyclase-PKA pathway (Figure 2(c, e)). Consistent with this hypothesis, we found that phosphorylation of PKA targets, but not targets of protein kinase C (PKC), was increased in PACAP-treated cells (Figure 2(d)).
PACAP Affects the Transcription of ECM Components and ECM-remodeling Enzymes in aNPCs
Because PACAP treatment of aNPCs increases attachment of spheres to the bottom of plastic dishes, we hypothesized that PACAP may affect the secretion or processing of ECM components in these cells. To test this hypothesis, we performed genome-wide transcriptional profiling of aNPCs untreated or treated with 10 nM PACAP for 1 or 4 days. Genes that were up- or downregulated more than two-fold by PACAP were then subjected to further analyses. PACAP upregulated the expression of 163 genes after 24 hr of treatment (Table S1). Eighty-two genes were upregulated at 96 hr, including 46 of those that were already induced after 1 day of PACAP treatment (Figure 3(a), Table S2). For some of the genes that were up- or downregulated by PACAP, we confirmed our microarray analysis results by performing quantitative real-time reverse transcription (RT)-PCR on independent samples of aNPCs that were cultured as a monolayer on poly-L-lysine- and laminin-coated plates. Consistent with our microarray analysis, PACAP (100 nM) treatment increased the expression of galectin 3 (Lgals3), TGFβ receptor 2 (Tgfbr2), sulfatase 1 (Sulf1), osteonectin (Sparc), fibulin 2 (Fbln2), ADAM metallopeptidase with thrombospondin type 1 motif 6 (Adamts6), ECM protein 1 (Ecm1), collagen type VI α1 (Col6a1), and nephronectin (Npnt), and decreased the expression of F-spondin (Spon1; Figure 3(c)). Of the genes that we tested, only fibronectin (Fn1) showed altered expression in microarray but not in RT-PCR assays (not shown), suggesting that our microarray results are robust.
We used Ingenuity Pathway Analysis to suggest which upstream mediators were responsible for the effects of PACAP on gene transcription. Consistent with our findings suggesting the involvement of the cyclic adenosine monophosphate (cAMP)/PKA pathway in the effects of PACAP on aNPCs (Figure 2), the upstream regulator with the lowest p value (7 × 10^−22) at 24 hr of PACAP treatment was CREB, a known effector of PKA. The second highest ranked regulator was TGFβ (p value 2 × 10^−19). Moreover, one of the TGFβ receptors, TGFBR2, was upregulated by PACAP after 24 hr of treatment. At 96 hr of treatment, TGFβ was the most probable upstream mediator of the transcriptional effects of PACAP, suggesting that at least some of the effects of PACAP treatment are indirect and depend on the upregulation of TGFβ signaling by the initial PACAP signal.
To determine if the observed PACAP-induced changes in gene expression were dependent on the presence of growth factors in the media, we compared our dataset to that of Sievertzon et al. (2005; ArrayExpress accession number E-MEXP-322), who also looked at the effect of PACAP on aNPCs, but in the absence of growth factors. Importantly, the sets of genes that were significantly regulated by PACAP in the presence of growth factors (this study) showed little overlap with the genes regulated by PACAP in the absence of growth factors (Sievertzon et al., 2005; Figure 3(d)). Specifically, the genes that were upregulated by PACAP in our study were equally likely to overlap with genes that were upregulated as with those that were downregulated by PACAP in the absence of growth factors. This analysis suggests that PACAP activates a different gene expression program depending on the presence or absence of growth factors.

[Figure 1 legend, continued: (b) attached and suspended cells per well from the same experiment expressed as mean ± SD; (c) nestin immunostaining shows that PACAP does not induce differentiation of aNPCs cultured with EGF and bFGF; (d) secondary and tertiary neurosphere formation is unaffected by PACAP; (e) PACAP increases the number of GFAP-positive cells in aNPC cultures; (f) both PACAP-treated and untreated cells generate GFAP-positive astrocytes and MAP2ab-positive neurons upon differentiation.]

Because aNPC adhesion is often associated with their differentiation, we wanted to more definitively rule out the possibility that PACAP-treated cells were losing their stem-like character. We thus compared our lists of up- and downregulated genes to the genes regulated in aNPCs by growth factor withdrawal (Bonnert et al., 2006; GEO accession number GSE4496; Figure 3(e)), and found no correlation between gene regulation by PACAP treatment and that by growth factor withdrawal. This finding is consistent with results suggesting that PACAP did not induce terminal differentiation of aNPCs in culture (Figure 1(c)).
We then grouped PACAP up- and downregulated genes (24 hr time point) based on their GO categories using the online DAVID tool (Database for Annotation, Visualization, and Integrated Discovery; Huang et al., 2009a, 2009b). Strikingly, the top two clusters of GO categories that resulted from this analysis included categories related to ECM, carbohydrate binding, and cell adhesion, which were consistent with the phenotype that we observed in PACAP-treated aNPCs (Figure 3(a, d), Table S3). The most significantly downregulated category clusters were related to synaptic transmission, suggesting that PACAP treatment inhibited neuronal differentiation of aNPCs (Figure 3(b, d), Table S4).
The Effects of PACAP Are Mediated Through Secreted Components
In aNPCs, PACAP upregulated the production of many secreted ECM components, such as ECM protein 1, collagen VI α1, hyaluronan and proteoglycan link protein 4, von Willebrand factor A domain containing 1, nephronectin, galectin 3, osteonectin, and fibulin 2. We therefore hypothesized that these components are responsible for the attachment phenotype seen in PACAP-treated aNPCs, as opposed to a direct effect of PACAP on the intrinsic ability of aNPCs to adhere. To validate this hypothesis, we tested the ability of conditioned media from PACAP-treated aNPCs (PACAP-CM) to induce adhesion of untreated cells. To that end, we preconditioned the wells with PACAP-CM, then removed the PACAP-CM, washed the wells with PBS, and plated aNPCs in fresh neurosphere media without PACAP. In this way, only highly adhesive media components, like the ECM, remained in the conditioned wells, and PACAP itself, present in PACAP-CM, was washed out. To further rule out any effect of residual PACAP or autocrine PACAP secretion in this assay, we tested the effects of well preconditioning on cells isolated from mice lacking the PAC1 receptor (PAC1−/−; Jamen et al., 2000). We found that PACAP-CM-treated wells induce the adhesion of PAC1−/− aNPCs (Figure 4(a)), which is consistent with a model of PACAP-mediated aNPC adhesion that involves the secretion of ECM components into the media (Figure 4(b)).
Discussion
The neuronal population in the postnatal brain had long been thought to be static, with no new neurons being generated past a certain age. The discovery of adult neurogenesis (Altman, 1962) not only broke this long-standing dogma but also raised new hopes for regenerative medicine. However, the use of the organism's intrinsic ability to generate new neurons in replacement therapy for neurodegenerative diseases and acute injuries has so far proved to be an elusive goal (Lindvall and Kokaia, 2010). The main problem appears to be an incomplete understanding of the regulation of adult neurogenesis under physiological and pathological conditions in vivo. Therefore, significant effort is being made to decipher the signaling pathways that regulate aNPC self-renewal, migration, and differentiation.
PACAP is a short polypeptide initially discovered as a regulator of pituitary function in mammals. It has been shown to have significant potential as a neuroprotective agent in vitro and in vivo (Waschek, 2013; Mansouri et al., 2016, 2017). Importantly, PACAP is upregulated during brain ischemia (Stumm et al., 2007; Riek-Burchardt et al., 2010; Lin et al., 2015) and in the cortex following traumatic brain injury (Skoglösa et al., 1999), further supporting the notion that it may be one of the endogenous signals that promote repair during neurodegenerative insults. In addition to its neuroprotective potential, PACAP and its PAC1 receptor have been shown to affect neural progenitors in the embryonic and adult nervous system (Mercer et al., 2004; Ohta et al., 2006; Ago et al., 2011). Extending these published data, our results suggest that during neurodegeneration, PACAP might also be useful in replacing lost neurons by regulating the extracellular molecular environment of endogenous neurogenic niches.

[Figure 3 legend, continued: *p < .05, **p < .01, ***p < .001 in a two-sided t test. Gene abbreviations: Lgals3, galectin 3; Tgfbr2, TGFβ receptor 2; Spon1, F-spondin; Sulf1, sulfatase 1; Sparc, osteonectin; Fbln2, fibulin 2; Adamts6, ADAM metallopeptidase with thrombospondin type 1 motif 6; Ecm1, extracellular matrix protein 1; Col6a1, collagen type VI α1; Npnt, nephronectin. (d) Top three clusters of genes up- or downregulated by PACAP treatment. (e, f) Venn diagrams comparing genes regulated by PACAP in the presence of growth factors (96 hr) with those regulated by PACAP in the absence of growth factors (Sievertzon et al., 2005) and with those regulated by 120-hr growth factor withdrawal (Bonnert et al., 2006); only genes represented in both datasets were used, hence the different gene totals between panels.]
Despite the well-established influence of PACAP on embryonic and adult neural progenitors, its mechanisms of action are unclear. Previous work that attempted to map PACAP-induced transcriptional changes in aNPCs failed to uncover novel regulatory mechanisms, most likely due to the fact that the study was conducted under conditions of growth factor withdrawal (Sievertzon et al., 2005). In contrast, we show here that in the presence of two growth factors, EGF and bFGF, PACAP promotes aNPC adhesion by modifying their transcriptional output. Interestingly, when similar studies were conducted on embryonic rather than adult neural progenitors cultured as neurospheres in the presence of growth factors, no such PACAP-dependent cellular attachment was reported (Ohta et al., 2006). In embryonic neural progenitors, PACAP and PAC1 have moreover been reported to regulate cell migration in vitro and in vivo (Toriyama et al., 2012; Adnani et al., 2015). This implies that the effects of PACAP on neural progenitors are varied and dependent on the developmental stage.

[Figure 4(b) legend: Model of the role of PACAP in inducing adhesion of aNPC neurospheres to surfaces. PACAP binds to PAC1 receptors on the surface of aNPCs and stimulates the cAMP/PKA pathway. This results in the production of ECM components and of the TGFβ receptor 2. We speculate that TGFβ1 constitutively secreted by aNPCs activates TGFβR2 in an autocrine and paracrine manner. Both the PAC1/PKA pathway and the TGFβR2 pathway further increase the production of ECM by aNPCs. ECM components are secreted into the media and adhere to the surface of the culture dish, creating an adhesive surface to which aNPCs can attach.]
The fact that VIP is less potent than PACAP at inducing attachment of aNPCs strongly suggests that the principal receptor mediating the effects of PACAP on aNPCs is the PACAP-preferring PAC1 receptor, which has approximately 1,000-fold higher affinity for PACAP than for VIP or PHI (Harmar et al., 1998) and which is abundantly expressed in aNPCs (Mercer et al., 2004; Scharf et al., 2008; Mansouri et al., 2012). Based on the transcriptional profile of PACAP-treated aNPCs, the downstream effectors of PAC1 in this context appear to be the cAMP-PKA pathway and, indirectly, the TGFβ pathway. aNPCs have been shown to express TGFβ1 (Klassen et al., 2003), suggesting that they may undergo autocrine and paracrine regulation by the TGFβ pathway. Importantly, this signaling pathway is known to affect both adult neurogenesis (Buckwalter et al., 2006; Wachs et al., 2006; Kandasamy et al., 2014) and extracellular protein secretion by neural cells (Hellbach et al., 2014).
We discovered that the enhanced aNPC attachment is mediated by factors that are secreted into conditioned media by PACAP-treated cells. We also identified a large number of ECM components, ECM-modifying enzymes, and their inhibitors whose expression is regulated by PACAP in aNPCs. Taken together, these data strongly suggest that PACAP affects ECM production and modification by aNPCs.
The ECM has a well-established role in the maintenance of the neurogenic niches in both the embryonic and adult nervous system (Kazanis and ffrench-Constant, 2011; Theocharidis et al., 2014; Reinhard et al., 2016). However, very few ECM components have been studied in detail in the context of adult neurogenesis. The genes differentially regulated by PACAP in aNPCs include several integrin substrates (nephronectin and collagen VI) and other ECM glycoproteins (fibulins 2 and 5, mucin 4, olfactomedin-like 3), as well as the glycoprotein-binding protein Hapln4 and two lectins: galectin 3 and Nkrp1a. Interestingly, galectin 3 has been recently implicated in neuroblast migration from the SVZ to the olfactory bulb (Comte et al., 2011), and a related lectin, galectin 1, was shown to play a key role in SVZ neurogenesis (Ishibashi et al., 2007). In addition, PACAP increased the production of multiple so-called matricellular proteins (Bornstein and Sage, 2002): osteonectin and the related protein F-spondin, Smoc2, thrombospondin 3, and tenascin C. These proteins do not fulfill the typical ECM functions of mechanical scaffolding, but rather are modulators of cell-ECM interactions. Of these matricellular proteins, only tenascin C has been implicated to some degree in adult neurogenesis (Kazanis et al., 2007).
Besides regulating ECM component expression, PACAP might direct the remodeling of ECM by aNPCs through the modulation of expression of extracellular ECM-modifying enzymes and their inhibitors. The serine protease HtrA1 and tissue inhibitor of metalloproteinase (TIMP) 1 were upregulated, whereas TIMP 4 and several members of the ADAM/ADAMTS family of metalloproteinases were downregulated in PACAP-treated cells. Moreover, by downregulating the expression of heparan sulfate 3-O-sulfotransferase 2 and upregulating the expression of sulfatase 1, PACAP may affect the metabolism of heparan sulfate proteoglycans, which have been suggested to affect aNPC fate decisions (Chipperfield et al., 2002) and to participate in the formation of specialized ECM niche structures known as fractones (Mercier et al., 2002; Mercier, 2016).
Very little is currently known about the ability of neural progenitors to remodel their niche through the production or degradation of ECM. It has mostly been assumed that, with few exceptions (Kazanis et al., 2010), the ECM in the niche is a product of cells that surround the NSCs rather than NSCs themselves. Our study suggests that neural progenitors derived from the adult brain also have a rich and regulatable secretome, which in turn feeds back into their behavior in an autocrine and/or paracrine manner.
An important question going forward is whether PACAP itself is in fact required for the ECM-mediated maintenance of adult neurogenesis in vivo. Some behavioral effects of PAC1 and PACAP deficiency have been reported in mice, but their link to adult neurogenesis is unclear (Hannibal, 2002;Mustafa et al., 2015).
Moreover, it will be interesting to determine which cells secrete PACAP at the neurogenic niche and at the site of injury. A recent study suggests that PACAP released from sites of ischemic stress may promote stem cell migration toward hypoxic lesions (Lin et al., 2015), and the involvement of ECM remodeling in this process will require further investigation. Future studies using conditional knockout animals should bring us closer to resolving these important issues.
Other than the identity of cells that produce PACAP, it will be important to determine which classes of cells respond to PACAP treatment, both within neurospheres and in vivo. Neurospheres are known to be composed of a heterogeneous population of cells at various stages of differentiation, from multipotent stem cells, through partially differentiated progenitors, all the way to some postmitotic glial and neuronal cells (Suslov et al., 2002; Parmar et al., 2003; Reynolds and Rietze, 2005; Jensen and Parmar, 2006). In our hands, cells derived from cultured neurospheres exhibited essentially uniform staining for nestin, a neural progenitor marker (Figure 1(c)), suggesting that few of the cells in the spheres were terminally differentiated. However, nestin is expressed both in multipotent neural stem cells and in partially differentiated progenitors in the adult central nervous system (Doetsch et al., 1997; Imayoshi et al., 2011), and we show that PACAP promotes the expression of the astrocyte marker GFAP in cells cultured in the presence of EGF and bFGF. Moreover, PACAP changes the phenotype of astrocytes that are generated from aNPCs from epithelioid (Type I-like) to stellate (Type II-like; Raff et al., 1983). Therefore, we cannot exclude the possibility that PACAP affects the "stemness" or differentiation potential of at least some neural progenitor classes. Future work should focus on determining which cells within the neurospheres are responsible for the observed increase in ECM component expression and whether PACAP affects the expression profile of these cells or the relative abundances of progenitor populations within neurospheres. An especially promising avenue of research would be to perform single-cell transcriptomics on aNPCs cultured in the presence or absence of PACAP.
Finally, our study shows that the "neurosphere assay" as a proxy for "stemness" in neural progenitors and cancer cells (Cohen et al., 2010) comes with serious caveats, as has been discussed before (Reynolds and Rietze, 2005). Specifically, PACAP causes aNPCs to no longer grow as neurospheres, but does not induce changes suggestive of the loss of stemness. We should therefore approach conclusions based on whether or not cells grow as floating nonadherent neurospheres with caution, and use multiple secondary assays to validate the self-regenerative potential of a putative stem cell population. Moreover, neurospheres have only limited resemblance to the in vivo neurogenic niche, and therefore it will be important to find out, using targeted loss-of-function experiments, which of the PACAP-regulated genes are in fact expressed by neural progenitors in vivo and how each of them individually affects the behavior of adult brain stem cells in physiological states and in disease.
Modeling coordinated operation of multiple hydropower reservoirs at a continental scale using artificial neural network: the case of Brazilian hydropower system
Reservoirs considerably affect river streamflow and need to be accurately represented in environmental impact studies. Modeling reservoir outflow represents a challenge for hydrological studies since reservoir operations vary with flood risk, economic and demand aspects. The Brazilian Interconnected Energy System (SIN) is an example of a unique and complex system of coordinated operation composed of more than 160 large reservoirs. We proposed and evaluated an integrated approach to simulate daily outflows from most of the SIN reservoirs (138) using an Artificial Neural Network (ANN) model, distinguishing run-of-the-river and storage reservoirs and testing cases in which outflow and level data were or were not available as input. We also investigated the influence of the proposed input features (14) on the simulated outflow, related to reservoir water balance, seasonality, and demand. As a result, we verified that the outputs of the ANN model were mainly influenced by local water balance variables, such as the reservoir inflow of the present day and the outflow of the day before. However, other features, such as the water levels of 4 large reservoirs that represent different regions of the country, which infer hydropower demand through water availability, seemed to influence to some extent the reservoir outflow estimates. This result indicates advantages in using an integrated approach rather than looking at each reservoir individually. In terms of data availability, scenarios were tested with (WITH_Qout) and without (NO_Qout and SIM_Qout) observed outflow and water level as input features to the ANN model. The NO_Qout model is trained without outflow and water level, while the SIM_Qout model is trained with all input features but is fed with simulated outflows and water levels rather than observations. These 3 ANN models were compared with two simple benchmarks: outflow is equal to the outflow of the day before (STEADY) and outflow is equal to the inflow of the same day (INFLOW). For run-of-the-river reservoirs, an ANN model is not necessary, as outflow is virtually equal to inflow. For storage reservoirs, the ANN estimates reached median Nash-Sutcliffe efficiencies (NSE) of 0.91, 0.77 and 0.68 for WITH_, NO_ and SIM_Qout respectively, compared to median NSE values of 0.81 and 0.29 for the STEADY and INFLOW benchmarks respectively. In conclusion, the ANN models presented satisfactory performances: when outflow observations are available, the WITH_Qout model outperforms STEADY; otherwise, the NO_Qout and SIM_Qout models outperform INFLOW.
INTRODUCTION
River dams are structures that affect natural streamflow and need to be accurately represented in hydrological studies (Zajac et al., 2017). Reservoirs assume a regulation function, modifying river flow duration curves and attenuating flow peaks (Ayalew et al., 2013; Li et al., 2010; Vogel et al., 2007; Volpi et al., 2018). These structures hold water on land, approximately tripling the mean age of river water worldwide, which has an impact not only on the natural hydrograph but also on the sediment flux and the re-oxygenation of surface water (Vörösmarty et al., 1997). Regarding water volume, reservoir operations and water irrigation are responsible for reducing global river discharge by approximately 2.1% (Biemans et al., 2011).
Worldwide, Lehner et al. (2011) estimated that there are nearly 3 million impoundments larger than 0.1 ha; as a consequence, only 36% of rivers longer than 1,000 km are free-flowing rivers, i.e. present natural connectivity (Grill et al., 2019). Nowadays, nearly 60,000 large dams (defined as over 15 m in height or impounding more than 3 million cubic meters) are listed in the World Register of Dams database, of which 20% to 25% are intended for hydropower purposes (International Commission on Large Dams, 2020). And the number of hydropower reservoirs is consistently growing: in 2014, over 3,000 major dams with capacity over 1 MW were planned to be built, mostly in developing countries (Zarfl et al., 2014). The large number of existing and planned reservoirs and their cumulative effect on streamflow justify an explicit representation of hydropower reservoirs for accurate streamflow estimates.
There are several difficulties in estimating reservoir operation and outflows, as they are influenced by fluctuations in demand, downstream conditions, costs of other sources of energy, bed floor leakage and irregular inflow series. These challenges are more pronounced in a large-scale context (national, continental, global). In several cases, operating systems deal with a cascade of reservoirs, which demands complex optimization techniques to maximize energy production (Liu et al., 2011; Pereira & Pinto, 1985; Zahraie & Karamouz, 2004).
Some studies have proposed simplified operation schemes to estimate reservoir outflows in a large-scale context. For the sake of general applicability, these outflow simulations only use inflow and storage as input (Hanasaki et al., 2006; Shin et al., 2019), although extra information about the reservoir purpose, such as water demand for irrigation and maximum discharge for flood control, might be needed in the optimization process (Haddeland et al., 2006) or as a limiting condition (Zhao et al., 2016). In general, these simplified operation schemes are composed of a few linear equations, tested in global/continental hydrological models with a monthly time step, and they yield adequate results for limited-data situations in a large-scale context.
On the other hand, machine learning techniques are interesting alternatives to evaluate specific reservoirs and their operation when enough data is provided. Machine learning techniques have the ability to represent highly non-linear relations and can autonomously detect patterns and provide predictions. Artificial neural networks (ANN), for example, have proved useful for optimizing reservoir operations (Carneiro & Farias, 2013; Chaves & Chang, 2008; Senthil Kumar et al., 2013) and estimating reservoir inflows (Paz et al., 2008; Valipour et al., 2013). Different machine learning techniques have been used to simulate the operation of specific reservoirs, such as decision-tree methods (Yang et al., 2016), support vector machines, single-layer ANNs and deep learning (Zhang et al., 2018). On a broader scale, Ehsani et al. (2016) proposed a general reservoir operation scheme (GROS) based on ANN, suitable for large-scale modelling. The authors coupled an ANN with a water balance model to simulate reservoir storage and release from reservoir inflow, using the output variables as input for the next steps. GROS presented better performance compared to other simplified methods for estimating operation.
Thus, Brazil's interconnected system of hydropower reservoirs, whose operation depends on factors such as energy sources and water availability, ends up being an interesting example that requires an integrated approach for modeling outflows rather than using local and specific features only.
In this paper, we simulate daily outflows of most of the hydropower reservoirs connected to the SIN using an ANN and assess the model's capacity to represent a coordinated system. This work is a proof of concept that machine learning techniques can model individual reservoirs of a complex hydropower system in a large-scale context. Several input features were proposed, related not only to local reservoir water balance, seasonality, and demand, but also to information from other reservoirs. The relevance of each variable was evaluated, distinguishing run-of-the-river and storage reservoirs and testing cases in which outflow and water level data were or were not available as input. We have not focused on proposing an optimized operation, but simulated daily outflows that can be useful for environmental impact studies on specific river reaches or for coupling to hydrological models to understand effects at the basin scale.
METHODOLOGY
The national interconnected hydrothermal energy system (SIN)

All Brazilian major dams are operated considering the SIN, which concentrates almost 68% of the national electrical production capacity (Operador Nacional do Sistema Elétrico, 2019). Due to significantly diverse hydrological characteristics, the SIN is currently divided into regional interconnected subsystems (South, Southeast/Central-West, Northeast, and North), where all the reservoirs within each subsystem are treated as a single equivalent reservoir. Currently, there are more than 160 reservoirs in the SIN (Figure 1). The system is coordinated by the ONS, which tries to minimize spills and maximize hydroelectric energy production, in order to avoid the use of expensive and air-polluting thermal energy, while also guaranteeing consumptive water uses and environmental restrictions (Operador Nacional do Sistema Elétrico, 2019). It is a hard task, because it considers the randomness of the inflows, the expansion of the system, flows downstream, future demands, the current reservoir storages, etc. (Zambon, 2015). A system-wide operation strategy prevails over individual ones, and the operation of a given hydropower plant affects other units downstream. The system allows hydropower plants to dispatch and transfer energy to another region where the reservoirs are low in storage, avoiding the use of local thermal plants.
Artificial neural network model
ANN is a self-learning technique that estimates an output variable given a sufficient amount of data. It is composed of an input layer (formed by the input features), one or more hidden layers, and an output layer, which contains the information learned by the neural network. Input features multiplied by coefficients (synaptic weights) feed an activation function, resulting in "neurons" (nodes) that build a first hidden layer. Following the same steps, neurons of the first hidden layer are multiplied by synaptic weights to generate the neurons of the next layer, and so on. A trained ANN has optimized weight matrices, which are usually obtained through an iterative method called backpropagation (Rumelhart et al., 1986). This method compares observations to ANN outputs, generating deltas that are propagated backwards through the network to correct the weight matrices, from the output to the input layer. The term ANN usually refers to arrangements with few hidden layers, as neural networks composed of many hidden layers are often called deep-learning techniques (Shen, 2018).
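As a minimal numpy sketch of the forward pass of such a single-hidden-layer network with sigmoid activations, the snippet below maps a normalized feature vector to a normalized outflow estimate; the dimensions, random weights and input values are illustrative placeholders, not the trained matrices of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a single-hidden-layer ANN with sigmoid activations."""
    hidden = sigmoid(x @ w_hidden + b_hidden)   # input layer -> hidden layer
    return sigmoid(hidden @ w_out + b_out)      # hidden layer -> output (normalized outflow)

rng = np.random.default_rng(42)
n_features, n_hidden = 14, 10                   # illustrative: 14 features, 10 hidden neurons
w_hidden = rng.normal(scale=0.1, size=(n_features, n_hidden))
b_hidden = np.zeros(n_hidden)
w_out = rng.normal(scale=0.1, size=(n_hidden, 1))
b_out = np.zeros(1)

x = rng.random(n_features)                      # one feature vector, already scaled to [0, 1]
print("Predicted normalized outflow:", forward(x, w_hidden, b_hidden, w_out, b_out))
```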
In this paper, we propose an ANN with a "nearly" single hidden layer (Figure 2) and sigmoidal activation functions to predict the outflow at time t (OUT0). The input variables are submitted to feature scaling (normalization) in order to guarantee an equal range between features and enable weight comparison. We selected three groups of input features: i) local water balance variables; ii) time variables related to seasonality and demand; and iii) the state of other reservoirs: upstream and downstream reservoir water levels (UPST and DOWN, respectively) and energy availability (LRL) inferred from the levels of large SIN reservoirs (UHE).
The features related to the water balance are often the most relevant variables, sometimes used exclusively as input in an ANN for operation prediction. The following were selected as input: inflow at time t (INF0), inflow one and two days before (INF1 and INF2, respectively), and water level and outflow one and two days before (LEV1, LEV2, OUT1, OUT2).
Time variables are important to account for seasonality and demand. The time-of-the-year feature consists of the day within the year, from 1 to 365, and was adapted to a circular representation, i.e. sin(2πd/365) and cos(2πd/365), where d is the day of the year, in order to provide continuity from one year to the next (e.g. from December 31st to January 1st). The weekday feature is important in hydropower operation since there is a significant reduction in energy demand on weekends, as industries stop working. WDAY assumes a value of 1 if the day is a weekend or holiday and 0 if it is a workday. Continuous time refers to the number of days since the beginning of a reservoir's operation and was considered to capture eventual changes in the operation due to changes in energy demand.
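A small sketch of these time features is given below: the day of the year is mapped to its sine/cosine pair and WDAY flags weekends or holidays. The holiday list is an empty placeholder, and the 365-day convention follows the description above.

```python
import numpy as np
from datetime import date

def time_features(d: date, holidays=()):
    """Circular day-of-year encoding plus the weekend/holiday flag (WDAY)."""
    day_of_year = d.timetuple().tm_yday
    angle = 2.0 * np.pi * day_of_year / 365.0
    wday = 1 if (d.weekday() >= 5 or d in holidays) else 0   # Saturday/Sunday or holiday
    return np.sin(angle), np.cos(angle), wday

# December 31st and January 1st map to nearly identical (sin, cos) pairs,
# which is the continuity property the circular representation provides.
print(time_features(date(2019, 12, 31)))
print(time_features(date(2019, 1, 1)))
```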
Finally, the operation of Brazilian hydropower reservoirs is integrated and optimized to generate the most energy for the interconnected system. Thus, information from other reservoirs was considered as well. The water level of one reservoir upstream and another downstream was evaluated, when applicable, in order to account for level restrictions, safety operations (e.g. flood control) and maximum energy generation for the cascade of reservoirs. These variables are the UPST and DOWN features. In addition, the accumulated water volume of other hydropower reservoirs indicates the water availability for energy generation and, consequently, the energy demand for that specific reservoir. For example, if most of the great hydropower plants are operating at high water levels, the country's energy demand is likely to be met, and the operator's decision for a specific reservoir might be to reserve water for times of scarcity. Thus, 4 large storage reservoirs from different regions of Brazil (North, Northeast, Southeast and South) were selected as a proxy of the current potential for hydropower energy generation: Tucuruí, Sobradinho, Marimbondo and Foz do Areia. These reservoirs are important in terms of energy production and are old enough to provide long time series of inflow, outflow and water level. The water levels at time t-1 from these 4 reservoirs (UHE1, 2, 3 and 4) are input to a first node named LRL (large reservoirs level), which is the only neuron of the first hidden layer, but it can be interpreted as a neuron in the input layer (Figure 2) if we consider that this ANN has only a single hidden layer.
Data acquisition and ANN training
The ANN input data and output observations were obtained through the Brazilian Reservoir Monitoring System (SAR) database from the Brazilian National Water Agency (ANA). The SAR database provided inflow, outflow, and water level time series from 159 reservoirs connected to the SIN as of the date it was accessed (Nov/2019) (Table 1). Reservoirs were then selected that had at least 8 years of not necessarily consecutive registered data with information for all input features (Figure 3). Thus, 21 reservoirs were discarded, leaving 138, of which 71 are classified by the ONS as run-of-the-river and 67 as storage reservoirs.
Data was randomly split into training data (60%), cross-training data (20%) and validation data (20%). The training data are used in the optimization process, but the weight matrices are selected based on the model's fit to the cross-training data. The backpropagation algorithm is fed exclusively with the training data, while the cross-training data help to select unbiased and not overfitted weight matrices from previous algorithm iterations. The validation data, which were left untouched, then indicate the ANN performance.
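The split itself is a simple random partition of the available daily records. The sketch below illustrates one way to generate the 60/20/20 index sets; the sample size and random seed are placeholders.

```python
import numpy as np

def split_samples(n_samples, seed=0):
    """Randomly split sample indices into 60% training, 20% cross-training, 20% validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_cross = int(0.2 * n_samples)
    return (idx[:n_train],                       # fed to backpropagation
            idx[n_train:n_train + n_cross],      # used to select the weight matrices
            idx[n_train + n_cross:])             # untouched, reports final performance

train_idx, cross_idx, valid_idx = split_samples(3650)   # e.g., ten years of daily data
print(len(train_idx), len(cross_idx), len(valid_idx))
```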
A complexity test was performed in order to identify an adequate number of neurons in the hidden layer. We selected two large reservoirs that are important in terms of energy production for the complexity test, each representing a different type of hydropower facility: run-of-the-river reservoirs (Itaipu) and storage reservoirs (Furnas). ANN arrangements with 1, 2, 3, 5, 7, 10, 15 and 20 hidden neurons were assessed, and in each configuration the ANN was trained 10 times in order to obtain a more reliable and representative result that does not depend on the algorithm's starting point.
After deciding on an adequate number of hidden neurons, the ANN was trained specifically for each reservoir with the training data. A momentum term of 0.96 (Rumelhart et al., 1986) and an initial learning rate of 0.0001 that is adapted based on the error evolution (Vogl et al., 1988) were used, both techniques applied to accelerate the convergence of the gradient descent. The backpropagation algorithm was run three times to improve the chances of reaching a good set of weight matrices, selecting the best result based on the cross-training sample.
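The momentum term enters the training loop as a velocity that accumulates past gradients. The sketch below shows one classical momentum update with the constants quoted above; the weight matrix and gradient are placeholders, since the actual gradients would come from backpropagation and the learning rate would additionally be adapted as in Vogl et al. (1988).

```python
import numpy as np

def momentum_update(weights, gradient, velocity, learning_rate=1e-4, momentum=0.96):
    """One gradient-descent step with a momentum term (classical momentum update)."""
    velocity = momentum * velocity - learning_rate * gradient
    return weights + velocity, velocity

# Placeholder weight matrix and gradient, for illustration only.
w = np.zeros((14, 10))
grad = np.ones_like(w)          # in practice, computed by backpropagation each iteration
v = np.zeros_like(w)
for _ in range(3):              # a few iterations of the update rule
    w, v = momentum_update(w, grad, v)
print(w[0, 0])
```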
ANN assessment
Information about the reservoir level and previous releases is essential to estimate the present outflow. However, these data are not always available, or the latency period is too long for immediate applications. We have therefore shaped this ANN model to fit short-latency situations with different levels of data availability, defining three model variants: WITH_Qout, which is trained and fed with observed outflow and water level among its input features; NO_Qout, which is trained without outflow and water level as input; and SIM_Qout, which is trained with all input features but fed with simulated outflows and water levels instead of observations. These three ANN models are compared with two simple benchmarks: outflow at time t is equal to outflow at time t-1, which we called the steady hypothesis (STEADY); and outflow is equal to inflow (INFLOW). The performances are evaluated in terms of the normalized root mean squared error (NRMSE) and the Nash-Sutcliffe efficiency (NSE), which are given by

NRMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Q_i - Q_{o,i}\right)^2} \,/\, \bar{Q}_o   (1)

NSE = 1 - \frac{\sum_{i=1}^{n}\left(Q_i - Q_{o,i}\right)^2}{\sum_{i=1}^{n}\left(Q_{o,i} - \bar{Q}_o\right)^2}   (2)

where Q_i is the simulated outflow; Q_{o,i} is the observed outflow; \bar{Q}_o is the mean observed outflow and n is the number of samples.
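The sketch below implements the two metrics as reconstructed above (with the RMSE normalized by the mean observed outflow); the outflow arrays are placeholders.

```python
import numpy as np

def nse(q_sim, q_obs):
    """Nash-Sutcliffe efficiency."""
    q_sim, q_obs = np.asarray(q_sim, float), np.asarray(q_obs, float)
    return 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def nrmse(q_sim, q_obs):
    """Root mean squared error normalized by the mean observed outflow."""
    q_sim, q_obs = np.asarray(q_sim, float), np.asarray(q_obs, float)
    return np.sqrt(np.mean((q_sim - q_obs) ** 2)) / q_obs.mean()

q_obs = np.array([100.0, 120.0, 150.0, 130.0, 110.0])   # placeholder outflows (m3/s)
q_sim = np.array([ 95.0, 125.0, 140.0, 135.0, 105.0])
print(f"NSE = {nse(q_sim, q_obs):.2f}, NRMSE = {100 * nrmse(q_sim, q_obs):.1f}%")
```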
17 features were proposed in this ANN arrangement, related to water balance, time or demand. We evaluated the influence of every feature on the ANN output in order to understand which input variables are more relevant. We adopted the Weight method (Garson, 1991 apud Gevrey et al., 2003), which basically multiplies normalized synaptic weights from layer to layer. In this specific case, there is just one hidden layer, thus there are only two weight matrices (and a third used to build the LRL neuron, which was not evaluated; see Figure 2). The Weight method was conducted as follows: i) consider the first weight matrix Θ_h, with dimensions n × h (nº of features × nº of neurons); ii) Θ_h is normalized by dividing each component by the sum of the coefficients related to each neuron (column); iii) the second weight matrix Θ_o, with dimensions h × 1 (nº of neurons × nº of outputs), is also normalized; iv) finally, the influence of the input features on the output is given by the dot product of the normalized weight matrices.

Here, n is the number of input features; h is the number of hidden neurons; Θ_h is the first weight matrix, from the input to the hidden layer; Θ_h^{(x)} is the specific row of Θ_h that corresponds to the weights of feature x; and Θ_o is the second weight matrix, from the hidden to the output layer. As the proposed ANN has only one output, the second weight matrix Θ_o has just one column. Thus, the weight influence of feature x on the output (W_x, the synaptic weight factor) is given by the dot product of the normalized row of the first weight matrix corresponding to x and the normalized column of the second weight matrix:

W_x = \hat{\Theta}_h^{(x)} \cdot \hat{\Theta}_o   (3)
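A compact sketch of this Weight method is given below. It column-normalizes the two weight matrices and takes the dot product per feature; the weight matrices are random placeholders, and absolute values are used in the normalization, as in Garson's original formulation, which is an assumption not stated explicitly in the text above.

```python
import numpy as np

def weight_method(theta_h, theta_o):
    """Synaptic weight factors per input feature (Garson/Weight method).

    theta_h: (n_features x n_hidden) input-to-hidden weight matrix.
    theta_o: (n_hidden x 1) hidden-to-output weight matrix.
    """
    # Normalize each hidden neuron's incoming weights (columns of theta_h).
    h_norm = np.abs(theta_h) / np.abs(theta_h).sum(axis=0, keepdims=True)
    # Normalize the hidden-to-output weights (single output column).
    o_norm = np.abs(theta_o) / np.abs(theta_o).sum(axis=0, keepdims=True)
    # Influence of each feature: dot product of its normalized row with the output column.
    return (h_norm @ o_norm).ravel()

rng = np.random.default_rng(1)
theta_h = rng.normal(size=(14, 10))   # placeholder trained weights
theta_o = rng.normal(size=(10, 1))
w_factors = weight_method(theta_h, theta_o)
print(w_factors, w_factors.sum())     # the factors sum to 1 across the features
```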
RESULTS
The complexity test indicates the number of hidden neurons that should be used in the ANN in order to provide an efficient and accurate performance. Figure 4 shows the ANN performance in terms of NRMSE considering different numbers of neurons in the hidden layer. The NRMSE converged to 20% in Furnas and to 4.75% in Itaipu, which suggests that an even larger number of hidden neurons would not improve the ANN performance. The run-of-the-river reservoir converged with fewer hidden neurons: 5 in Itaipu compared to 7 in Furnas. This was expected since there is almost no range of storage volume for reservoir operation in a run-of-the-river reservoir and, by definition, outflow is mostly governed by inflow. It is common practice to select the smallest number of hidden neurons that provides a good performance. Since the same ANN arrangement is applied to all reservoirs, we adopted an ANN composed of 10 hidden neurons as a cautious alternative.

[Figure 3 caption: Data quantification of all 159 SIN reservoirs available in the SAR database. Blue dots represent the reservoirs selected in this study, red dots are the discarded reservoirs, and the red line represents the threshold of eight years of data. The black triangles represent the four specific reservoirs named above.]
The ANN training appeared to succeed. First results show that usual regression problems such as overfitting were avoided, since the training (TRAIN), cross-training (X-TRAIN) and validation (VALID) data presented similar performances (Figure 5). Also, outflow estimation was reasonable, as median NRMSEs were around 14% and 18% for the WITH_Qout and NO_Qout ANN models, respectively.
The influence of every input feature on the predicted variable was evaluated in terms of synaptic weights through the Weight method. Although this method has no physical meaning, it indicates which input features are more influential on the ANN output. Figure 6 illustrates the Weight method results as a box plot using the 138 selected SIN reservoirs as samples. The input variables DOWN and UPST were not considered in this specific test since they are not applicable to all reservoirs.
For the WITH_Qout model, the inflow at time t (INF0) is the feature that has the most influence on the ANN predicted variable (OUT0), followed by the outflow at time t-1 (OUT1). However, this sample analysis is biased since the outflows of run-of-the-river reservoirs are largely dominated by inflow. Analyzing each type of reservoir individually, we can see that the outflow of run-of-the-river reservoirs is indeed governed by inflow; but in storage reservoirs, OUT1 has more influence on the predicted variable than INF0.
For the NO_Qout model, outflow and level are not input variables, thus INF0 largely influences the ANN output. However, looking at storage reservoirs, other features that are related to time or demand presented a high weight factor as well. The LRL feature, for example, is indirectly related to hydropower demand based on the water level of large SIN reservoirs. This feature presented a relatively high synaptic weight factor, which indicates the benefits of an integrated operation to simulate hydropower reservoirs in Brazil.
The ANN was able to predict outflow relatively well for most SIN reservoirs (Figure 7). When outflow and level observations were available as input to the ANN (WITH_Qout), the sample median NRMSE (NSE) was 14% (0.95) compared to 25% (0.83) and 24% (0.85) for the STEADY and INFLOW benchmarks, respectively, and the NRMSE upper (NSE lower) quartile was 19% (0.90) compared to 38% (0.74) and 59% (0.08). If outflow and level observations were unknown, the ANN performance deteriorated. In general terms, SIM_Qout presented a performance similar to INFLOW, but for storage reservoirs the SIM_Qout performance was much superior. NO_Qout presented a superior performance on average compared to INFLOW and STEADY, but for storage reservoirs exclusively, STEADY was slightly better. Given this ANN arrangement and these input features, the results indicate that it is better to train an ANN without non-observed variables (OUT and LEV; NO_Qout) than to train an ANN with all features and use simulated variables as input (SIM_Qout). The latter approach was adopted by GROS (Ehsani et al., 2016), which provided adequate results at large scale; however, it is important to remark that the authors used a different ANN structure with more hidden layers and fewer input features.
The performances of the outflow predictions were strongly dependent on the type of hydropower reservoir. Errors were much smaller for run-of-the-river reservoirs compared to storage reservoirs, which was expected since the outflow of the former is easier to estimate. Indeed, the INFLOW assumption presented a performance as good as WITH_Qout for run-of-the-river reservoirs; however, for storage reservoirs its performance considerably worsens. This indicates that storage reservoirs must be well represented in a hydrological model in order to provide accurate estimates of discharge downstream, rather than only simulating natural streamflow. Furthermore, NO_Qout provided results similar to STEADY for storage reservoirs. While the former is a good option if no outflow and level data are available in the simulation period, the latter becomes a simple alternative to represent storage reservoir releases if the previous day's outflow is known. Figure 8 illustrates the ANN results through outflow hydrographs of four storage reservoirs: Furnas (a), Jurumirim (b), Sobradinho (c) and Tucuruí (d). It can be seen that the ANN provides better estimates compared to INFLOW. The ANN was able to represent reservoirs with high regularization capacity that significantly impact natural streamflow regimes. In fact, the WITH_Qout outflow closely approximates the observations, while outflows from NO_Qout and SIM_Qout show adequate seasonal tendencies but are rarely accurate on a daily scale. The weekday feature can be detected in the ANN outflow hydrographs as a 7-day cycle in which outflow is reduced on the last day; this variable seems to be important to predict outflow, especially in Furnas. Particularly for Tucuruí, the ANN NRMSE varied from 9% (WITH_Qout) to 17% (SIM_Qout), while INFLOW presented a NRMSE of 33%. In terms of the Nash-Sutcliffe coefficient, ANN performances varied from 0.99 (WITH_Qout) to 0.95 (SIM_Qout), while INFLOW reached 0.81. These results indicate that Tucuruí has a relatively well-defined operation and that the ANN was able to capture it.
CONCLUSION
This paper offered a first evaluation of the potential of machine learning techniques to simulate a complex coordinated system of hydropower reservoirs such as the Brazilian SIN. An ANN model was proposed to predict daily outflow from most hydropower reservoirs connected to the SIN, given water balance, time, and demand input variables.
We used 14 input features and assessed their influence on the model output. As expected, modelled outflow is mainly influenced by water balance variables such as the outflow of previous days and the inflow. However, features such as the water levels in four large representative reservoirs (LRL), which act as a proxy for hydropower demand through water availability in different regions of the country, also influenced the outflow predictions to some extent, indicating advantages of an integrated assessment of SIN reservoirs.
The ANN was trained with (WITH_Qout) and without (NO_Qout and SIM_Qout) reservoir water level and outflow observations as input features, to represent common situations for outflow estimation. The ANN results were compared to two simple benchmarks: outflow equal to the previous day's outflow (STEADY) and outflow equal to inflow (INFLOW). There is a significant difference between estimating outflow for run-of-the-river and for storage reservoirs, since the latter considerably modify the natural streamflow hydrograph. Using an ANN for run-of-the-river reservoirs seemed unnecessary, as inflow is almost equal to outflow. For storage reservoirs, however, the ANN model outperformed the benchmarks. When outflow data are available, the WITH_Qout ANN model (median NSE = 0.91) outperforms STEADY (median NSE = 0.81). When outflow data are not available, ANN performance deteriorates, dropping to median NSE values of 0.77 (NO_Qout) and 0.68 (SIM_Qout), but it remains much superior to INFLOW (median NSE = 0.29).
In conclusion, we simulated daily outflow from 138 reservoirs individually, but using input features that include information from other reservoirs as well, and the model performance was superior to the benchmarks. These results indicate that an integrated approach benefits the simulation of a coordinated hydropower system and that machine learning techniques are useful tools for estimating reservoir outflow. Other ANN arrangements, additional input features and/or deep learning techniques might further improve outflow predictions (Zhang et al., 2018), given this large amount of data and high nonlinearity. The ANN model reasonably approximates reservoir operations and is therefore an interesting tool for large-scale analysis of streamflow impacts. However, it is a general model and, for management purposes, does not substitute reservoir-specific models that include important local information such as water supply demands, flood control, and environmental legislation.
On integrability of the Camassa-Holm equation and its invariants. A geometrical approach
Using the geometrical approach exposed in arXiv:math/0304245 and arXiv:nlin/0511012, we explore the Camassa-Holm equation (both in its initial scalar form and in the form of a 2x2 system). We describe Hamiltonian and symplectic structures, recursion operators, and infinite series of symmetries and conservation laws (local and nonlocal).
Introduction
The Camassa-Holm equation was introduced in [4] in the form of Eq. (1) and was intensively explored afterwards (see, for example, Refs. [5,6,7,17]). Its superizations were also constructed, see [1,19]. Since (1) is not an evolution equation, its integrability properties (the existence, and even the definition, of Hamiltonian structures, conservation laws, etc.) are not straightforward to establish. One way widely used to overcome this difficulty is to introduce a new unknown m = u − u_xx and transform Eq. (1) into the system (2), which has an almost evolutionary form. We stress this "almost", because the second equation in (2) (which can be considered a constraint on the first one) disrupts the picture and, at best, necessitates inverting the operator 1 − D_x^2. At worst, treating Eq. (2) as an evolution equation may lead to fallacious results.
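The display equations (1) and (2) referred to above were lost in extraction; for the reader's convenience, the standard forms found in the literature (e.g., [4]) are recalled below. The dispersion parameter is written here as µ, matching the notation used later in this paper, and the two-component form is given for the dispersionless case µ = 0; the exact normalization used by the authors may differ.

```latex
% Camassa--Holm equation (standard form, dispersion parameter \mu):
\begin{equation}
  u_t - u_{xxt} + 2\mu u_x + 3 u u_x = 2 u_x u_{xx} + u\, u_{xxx}.
\end{equation}
% Two-component ("almost evolutionary") form for \mu = 0, with m = u - u_{xx}:
\begin{equation}
  m_t + u\, m_x + 2 u_x m = 0, \qquad m = u - u_{xx}.
\end{equation}
```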
In our approach, based on the geometrical framework exposed in Ref. [3], we treat the equation at hand as a submanifold in the manifold of infinite jets and consider two natural extensions of this equation, cf. Ref. [16]. The first one is called the ℓ-covering and plays the role of the tangent bundle. The second extension, the ℓ*-covering, is the counterpart of the cotangent bundle. The key property of these extensions is that the spaces of their nonlocal (in the sense of [15]) symmetries and cosymmetries contain all essential integrability invariants of the initial equation. The efficiency of the method was tested on a number of problems (see Refs. [12,13,14]), and we apply it to the Camassa-Holm equation here.
In Section 2 we briefly expose the necessary definitions and facts. Section 3 contains computations for the Camassa-Holm equation in its matrix version (computations and results are more compact in this representation), while in Section 4 we reformulate them for the original form (1) and later compare the results obtained for the two alternative presentations. Finally, Section 5 contains a discussion of the results obtained. Throughout our exposition we use a very stimulating conceptual parallel between the categories of smooth manifolds and differential equations, proposed initially by A.M. Vinogradov and presented in its modern form in Table 1. This table is not just a toy dictionary but a quite helpful tool to formulate important definitions and results. For example, a bivector on a smooth manifold M may be understood as a derivation of the ring C∞(M) with values in C∞(T*M). Translating this statement to the language of differential equations, we come to the definition of variational bivectors and their description as shadows of symmetries in the ℓ*-covering (see Theorem 2 below). Another example: any vector field (differential 1-form) on M may be treated as a function on T*M (on TM). Hence, to any symmetry (cosymmetry) there corresponds a conservation law on the space of the ℓ*-covering (ℓ-covering). This leads to the notions of nonlocal vectors and forms that, in turn, provide a basis to construct weakly nonlocal structures (see Subsections 3.4 and 3.5). Of course, these parallels are not completely straightforward (especially in technical aspects), but they are extremely enlightening and fruitful.
The idea of this paper arose in the discussions one of the authors had with Volodya Roubtsov in 2007. We agreed to write two parallel texts on integrability of the Camassa-Holm equation that reflect our viewpoints. The reader can now compare our results with the ones presented in [20].
Underlying theory
We present here a concise exposition of the theoretical background used in the subsequent sections, see Refs. [3,13,15].
Equations, symmetries, etc.
Let π : E → M be a fiber bundle and π∞ : J∞(π) → M be the bundle of its infinite jets. To simplify our exposition we shall assume that π is a vector bundle. In all applications below π is the trivial bundle R^m × R^n → R^n. We consider infinite prolongations of differential equations as submanifolds E ⊂ J∞(π) and retain the notation π∞ for the restriction π∞|E. Any such manifold is endowed with the Cartan distribution, which is spanned at every point by the tangent spaces to graphs of jets. A symmetry of E is a vector field that preserves this distribution. The set of symmetries is a Lie algebra over R denoted by sym E.
For any equation E its linearization operator ℓ_E : κ → P is defined, where κ is the module of sections of the pullback π∞(π) and P is the module of sections of some vector bundle over E (all modules below are modules over the ring F of smooth functions on E). Then sym E can be identified with the solutions of the equation ℓ_E(ϕ) = 0. For two symmetries ϕ1, ϕ2 ∈ sym E their commutator is denoted by {ϕ1, ϕ2}. Denote by Λ^i_h the module of horizontal i-forms on E and introduce the corresponding notation for any module Q. The adjoint operator ℓ*_E then arises, and solutions of the equation ℓ*_E(ψ) = 0 are called cosymmetries of E; the space of cosymmetries is denoted by cosym E.
Let d_h be the horizontal de Rham differential. A conservation law of the equation E is a closed form ω ∈ Λ^{n−1}_h. To any conservation law there corresponds its generating function δω ∈ cosym E, where δ : E_1^{0,n−1} → E_1^{1,n−1} is the differential in the E_1 term of
Vinogradov's C -spectral sequence, see [21]. In the evolutionary case δ coincides with the Euler-Lagrange operator. A conservation law is trivial if its generating function vanishes. In particular, d h -exact conservation laws are trivial.
A vector field on E is called a C -field if it lies in the Cartan distribution. A differential operator ∆ : P → Q, P and Q being F -modules, is called a C -differential operator if it is locally expressed in terms of C -fields. For example, ℓ E is a C -differential operator.
A C-differential operator H : P̂ → κ is said to be a variational bivector on E if it satisfies the defining condition given in [11]. Such operators (variational forms) take symmetries to cosymmetries and in the evolutionary case are skew-adjoint. They are elements of the term E_1^{2,n−1} of Vinogradov's C-spectral sequence. A variational form is a symplectic structure on the equation E if it is variationally closed, i.e., δS = 0, where δ : E_1^{2,n−1} → E_1^{3,n−1} is the corresponding differential. We shall also consider recursion C-differential operators R : κ → κ and R̂ : P̂ → P̂ satisfying the corresponding conditions for some C-differential operators R′ : P → P and R̂′ : κ̂ → κ̂.
Nonlocal theory
Let E and Ẽ be equations and ξ : Ẽ → E be a fiber bundle. Denote by C and C̃ the Cartan distributions on E and Ẽ, resp. We say that ξ is a covering if for any θ̃ ∈ Ẽ the differential d_θ̃ξ isomorphically maps C̃_θ̃ onto C_ξ(θ̃). A particular case of coverings (the so-called Abelian coverings) is naturally associated with closed horizontal 1-forms³. By definition, any C-field X on E can be uniquely lifted to a C-field X̃ on Ẽ such that dξ(X̃) = X. Consequently, any C-differential operator ∆ : P → Q is extended to a C-differential operator ∆̃, F̃ being the algebra of smooth functions on Ẽ.
² It is more appropriate to call these objects Poisson structures, but we follow the tradition accepted in the theory of integrable systems.
³ When dim M = 2, Abelian coverings are associated with conservation laws of the equation E.
A nonlocal ξ-(co)symmetry of E is a (co)symmetry of the covering equation Ẽ. They are solutions of the equations ℓ_Ẽ ϕ = 0 and ℓ*_Ẽ ψ = 0, resp. Along with these two equations one can consider the equations ℓ̃_E ϕ = 0 and ℓ̃*_E ψ = 0, where ℓ̃_E and ℓ̃*_E are the liftings of ℓ_E and ℓ*_E to Ẽ. Their solutions are called ξ-shadows of symmetries and cosymmetries, resp. A shadow of a symmetry is a derivation F → F̃ that preserves the Cartan distributions. For any two shadows of symmetries ϕ1 and ϕ2 their commutator {ϕ1, ϕ2} can be defined. This commutator is a shadow in a new covering that is canonically determined by ϕ1 and ϕ2.
2.3 The ℓ- and ℓ*-coverings
Let E ⊂ J∞(π) be an equation. Its ℓ-covering τ : L(E) → E is obtained by adding to E the equation ℓ_E(q) = 0, where q is a new variable. Dually, the ℓ*-covering τ* : L*(E) → E is constructed by adding the equation ℓ*_E(p) = 0 with a new variable p. They are the exact counterparts of the tangent and cotangent bundles in the category of differential equations. For reasons that will become clear later, we regard both q and p as odd variables. The main point of our method is the fundamental relation between integrability invariants of E and shadows in τ and τ*. To formulate this relation, let us give an auxiliary definition of trivial solutions of an arbitrary operator equation; classes of solutions modulo trivial ones will be called nontrivial solutions. Then the following results hold.
Theorem 1 (shadows in the ℓ-covering) There is a one-to-one correspondence between nontrivial solutions of the equation
and τ-shadows of symmetries linear w.r.t. the variables q. In a similar way, there is a one-to-one correspondence between nontrivial solutions of the equation and τ-shadows of cosymmetries linear w.r.t. the variables q.
Theorem 2 (shadows in the ℓ * -covering) There is a one-to-one correspondence between nontrivial solutions of the equation and τ * -shadows of symmetries linear w.r.t. the variables p. In a similar way, there is a one-to-one correspondence between nontrivial solutions of the equation and τ * -shadows of cosymmetries linear w.r.t. the variables p.
Theorem 3 Let R 1 and R 2 be recursion operators for symmetries on
In both cases the curly brackets denote the super bracket of shadows that arises due to the oddness of the variables q and p. Additional discussion of Theorem 3 can be found in Remark 2.
Theorem 4
To any cosymmetry of E there canonically corresponds a conservation law of L (E ). Dually, to any symmetry of E there canonically corresponds a conservation law of L * (E ).
Computational scheme
Let the equation E be locally given by a system of relations, where j = 1, . . . , m and |σ| ≤ k.
Step 1 consists of writing out defining equations for symmetries and cosymmetries of E . Let {u j σ } j∈J σ∈S be internal coordinates on E , S and J being some sets of (multi)indices and u j σ corresponding to ∂ |σ| u j /∂ x σ . Then any C -field on E is a linear combination of the total derivatives The linearization of E is the matrix operator with the entries A symmetry ϕ = (ϕ 1 , . . . , ϕ m ) enjoys the equation and the corresponding field is the evolutionary vector field while the bracket of symmetries ϕ 1 , ϕ 2 is given by The operator adjoint to (12) is and a cosymmetry ψ = (ψ 1 , . . ., ψ r ) satisfies the equation Step 2. Here we look for closed 1-forms and construct Abelian coverings associated to them.
Such a form gives rise to a nonlocal variable w = w ω that satisfies the equations These equations are compatible on (10) due to (18). Recall that for n = 2 closed 1-forms coincide with conservation laws. The total derivatives lifted to the covering equationẼ arẽ Step 3. At this step we compute a number of particular symmetries and cosymmetries (using equations (13) and (17), resp.). They are used to construct canonical nonlocal variables on the ℓ * -covering (nonlocal vectors) and on the ℓ-covering (nonlocal forms), resp., at Step 4. We also use them as seed elements in series generated by recursion operators.
Step 4 consists of construction of the ℓand ℓ * -coverings and introduction of canonical nonlocal variables over them (see Step 3). The ℓ-covering is obtained by adding to (9) the system of equations cf. with Eq. (12), while the ℓ * -covering is given by that comes from (16). If ϕ is a symmetry of E then one can introduce a covering over L * (E ) described by the system ∂p where ∆ l σ,i are C -differential operators (see Theorem 4). In a similar way, to any cosymmetry ψ there corresponds a covering ∇ j σ,i being C -differential operators as well. We omit here a general description of these operators and refer the reader to the particular case of our interest exposed in Sections 4.4 and 4.5.
Step 5. We now use Theorems 1 and 2 to construct recursion operators and Hamiltonian and symplectic structures. Let ψ1, . . ., ψs be cosymmetries of E. Let us consider the covering L̃(E) over L(E) with the nonlocal variables q̃1, . . ., q̃s defined by (24) and lift the operators ℓ_E and ℓ*_E to this covering. Then the following result specifies Theorem 1: the operator takes shadows of symmetries to shadows of symmetries. In a similar way, to any solution Ψ = (Ψ1, . . ., Ψr) there corresponds the operator that takes shadows of symmetries to shadows of cosymmetries.
In a dual way, consider symmetries ϕ1, . . ., ϕs of the equation E and the covering L̃*(E) over L*(E) with the nonlocal variables p̃1, . . ., p̃s defined by (23). Then, lifting ℓ_E and ℓ*_E, we obtain a similar specification of Theorem 2: the operator takes shadows of cosymmetries to shadows of symmetries. In a similar way, to any solution Ψ = (Ψ1, . . ., Ψr) there corresponds the operator that takes shadows of cosymmetries to shadows of cosymmetries.
After finding the operators R, S, H andR we check conditions (6) and (7) and compute necessary Schouten and Nijenhuis brackets.
Step 6. The last step consists of establishing algebraic relations between the invariants constructed above.
The matrix version
We consider Eq. (1) with µ = 0 and, similarly to (2), introduce a new variable w = αu − u_xx, where α is a new real constant. Consequently, the initial equation transforms into the system (26). We choose the following variables for internal local coordinates on the infinite prolongation of Eq. (26); the total derivatives in these coordinates take the corresponding form. We introduce the gradings |x| = −1, |t| = −2, |u| = 1, |w| = 3, |α| = 2 and extend them in a natural way to all polynomial functions of the internal coordinates. Then all computations can be restricted to homogeneous components.
Nonlocal variables
In subsequent computations we shall need the following nonlocal variables arising from conservation laws and defined by the equations The variable s i is of grading i and computational experiment shows that for every grading i = 4n − 2 + ε, ε = 0, 1, there exist an s i such that |s i | = i. In addition, we found conservation laws of fractional gradings: etc.
Symmetries
A symmetry ϕ = (ϕ w , ϕ u ) of Eq. (26) must satisfy the linearized equation Direct computations lead to the following results.
If one adds to the nonlocal setting the variables s γ (see above) then an additional series of nonlocal symmetries arises:
etc.
(x,t)-dependent symmetries. The first three symmetries that depend on x and t are All these symmetries, except for the first one, are nonlocal (description of the nonlocal variable is given in Subsection 3.1) and, as above, the subscript denotes the grading.
Cosymmetries
The defining equation for cosymmetries ψ = (ψ w , ψ u ) is the adjoint to the linearization of (26): Similar to symmetries, we consider two types of cosymmetries.
(x,t)-independent cosymmetries. They are local and may be of integer and semi-integer gradings: etc. and
etc.
Similar to the case of symmetries, when one adds nonlocal variables s γ an additional series of nonlocal cosymmetries arises:
Nonlocal forms
Recall that nonlocal forms are nonlocal variables of a special type on the ℓ-covering. The ℓ-covering itself is obtained from Eq. (26) by adding two equations in which q^w and q^u are new odd variables. The total derivatives on the ℓ-covering are extended accordingly. The nonlocal form Q_i associated to a cosymmetry ψ_i = (ψ^w_i, ψ^u_i) (see Subsection 3.3) is defined by the corresponding equations.
Nonlocal vectors
Dually to nonlocal forms, nonlocal vectors arise as special nonlocal variables on the ℓ*-covering associated to symmetries of the initial equation. The ℓ*-covering is the extension of Eq. (26) by two new equations in which p = (p^w, p^u) is a new odd variable, and the total derivatives are extended accordingly. The nonlocal vector P_i associated to a symmetry ϕ = (ϕ^w, ϕ^u) (see Subsection 3.2) is defined by the corresponding equations.
Recursion operators for symmetries
The defining equations for these operators are those of Theorem 5, where the total derivatives are the ones described in Subsection 3.4. The following two solutions are essential, with the corresponding operators of the indicated form. All other solutions obtained in our computations corresponded to operators generated by the two above.
Symplectic structures
Symplectic structures, as follows from Theorem 5, are defined by the equations in which the total derivatives were defined in Subsection 3.4. Here are the simplest nontrivial solutions, with the corresponding symplectic operators.
Hamiltonian structures
The equations that should be satisfied by a Hamiltonian operator (see Theorem 6) are where the total derivatives are from Subsection 3.5. In particular, we found the following solutions: The corresponding Hamiltonian operators are
Recursion operators for cosymmetries
By Theorem 6, the equations for recursion operators for cosymmetries are written with the total derivatives given in Subsection 3.5. One of the solutions is presented below, together with the corresponding recursion operator.
Interrelation
We expose here basic facts on the structural relations between the invariants described above. The main one is the following theorem. Proof. The proof consists of direct computations using the results and techniques of Ref. [8].
A visual presentation of how symmetries are distributed over gradings is given in Table 2. How to prove locality of the first two series of symmetries will be discussed below (see also Table 3). The action of Hamiltonian and recursion operators for symmetries (up to a constant multiplier) is given in Diagram (27); the action of recursion operators for cosymmetries and symplectic structures is similar.
We shall now prove commutativity of the local hierarchies.
Lemma 1
The symmetryφ 3 is a positive hereditary symmetry, i.e., its action on local symmetries, ϕ → {φ 3 , ϕ}, coincides, up to a multiplier, with the one of the recursion operator R 3 . The only symmetries that vanish under this action are ϕ −3/2 and ϕ 1 . In a similar way, the symmetry ϕ −1 is a negative hereditary symmetry and the only symmetry that is taken to zero under its action is ϕ 1 .
A direct corollary of this result is the following.
Theorem 8. Local positive and negative symmetries form commutative hierarchies.
The scalar version
Let us now consider the Camassa-Holm equation in its initial form (1) with µ = 0 and, similarly to the matrix case, introduce a new real parameter α. For the internal coordinates we choose suitable functions, and the total derivatives in these coordinates are of the corresponding form, where u_3 = (αu_{0,1} − u_{2,1} + 3αuu_1 − 2u_1u_2)/u. The equation becomes homogeneous if we assign the following gradings: |x| = −1, |t| = −2, |u| = 1, |α| = 2.
Symmetries
A symmetry ϕ must satisfy the linearized equation. We computed two types of symmetries. Everywhere below, the subscript was chosen so as to correspond to the enumeration adopted in the matrix case.
All these symmetries are local.
Nonlocal vectors
Nonlocal vectors arise in the ℓ*-covering. The latter is the extension of the initial equation by the equation
−αp_t + p_{xxt} + u p_{xxx} + u_x p_{xx} + (u_{xx} − 3αu) p_x = 0,
with the total derivatives
Uncertainties in Measuring Soil Moisture Content with Actively Heated Fiber-Optic Distributed Temperature Sensing
Actively heated fiber-optic distributed temperature sensing (aFO-DTS) measures soil moisture content at sub-meter intervals across kilometres of fiber-optic cable. The technology has great potential for environmental monitoring, but calibration at field scales with variable soil conditions is challenging. To better understand and quantify the errors associated with aFO-DTS soil moisture measurements, we use a parametric numerical modeling approach to evaluate different error factors for uniform soil. A thermo-hydrogeologic, unsaturated numerical model is used to simulate a 0.01 m by 0.01 m two-dimensional domain, including soil and a fiber-optic cable. Results from the model are compared to soil moisture values calculated using the commonly used T_cum calibration method for aFO-DTS. Calculated saturations agree closely with simulated values for static hydrologic conditions, but discrepancies appear for more realistic settings with active recharge. We evaluate the performance of aFO-DTS soil moisture calculations for various scenarios, including varying recharge duration and heterogeneous soils. The aFO-DTS accuracy decreases as the variability in soil properties and the intensity of recharge events increase. Further, we show that the burial of the fiber-optic cable within soil may adversely affect calculated results. The results demonstrate the need for careful selection of calibration data for this emerging method of measuring soil moisture content.
Introduction
For environmental monitoring and sensing, soil moisture content is a critical component of the hydrologic system. In near-surface environments, the ground is a combination of soil, water, and air, and is referred to as the vadose zone. Soil moisture is the amount of water within a given soil and changes spatially and temporally [1]. The movement and storage of water in the vadose zone is important for numerous processes, including agricultural engineering, groundwater recharge, predicting the response of streams and rivers to large rainfall events or snowmelt, and geoengineering [2][3][4][5][6]. Soil moisture content is most often measured in situ using sensors such as electrical conductivity or capacitance probes [7]. With proper calibration, these methods can provide very accurate point measurements of soil saturation. However, these sensors are limited in that they provide soil moisture measurements at discrete points and may not capture the intrinsic spatial variability of the subsurface [1]. There have been recent advances in high-resolution remote sensing of soil saturation, but the measurements are limited to very shallow depths [8]. Actively heated fiber-optic distributed temperature sensing (aFO-DTS), the focus of this research, is an emerging technology that has great potential for the measurement of distributed soil moisture [9].
Fiber-optic distributed temperature sensing (FO-DTS) has been utilized in environmental and hydrologic sciences to measure temperature and heat fluxes in rivers, streams, and subsurface boreholes [10]. FO-DTS measures temperature along a fiber-optic cable via inelastic Brillouin and Raman backscattering. In Raman backscattering, the intensity of the backscattered light changes when incident light strikes the fiber-optic glass wall and is reflected [10]. The intensity of backscatter at the anti-Stokes frequency depends on the temperature of the cable where the reflection occurs; therefore, temperature at the point of reflection can be calculated from the ratio of the intensity at the anti-Stokes frequency to that at the Stokes frequency [11,12]. By measuring the two-way travel time of light along the fiber-optic cable, temperature can be measured at sub-metre spacing over kilometers of cable [13]. The precision of this measurement improves with the square root of the integration period (or the square root of the time in one step interval), assuming no errors due to temperature drift [10,12,14]. FO-DTS has been used extensively for temperature monitoring and fire identification [15,16]. FO-DTS systems are now field-portable, durable, and compact, and are used for many environmental and hydrologic applications, e.g., [10,17,18].
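For readers unfamiliar with how the Stokes/anti-Stokes ratio is turned into temperature, a minimal sketch of a commonly used single-ended calibration form is given below. The parameters gamma, C, and dalpha are instrument- and cable-specific calibration constants that are not reported in this paper; the numeric values shown are illustrative assumptions only.

```python
import numpy as np

def dts_temperature(P_stokes, P_antistokes, z, gamma, C, dalpha):
    """
    Temperature along the cable from the Stokes / anti-Stokes power ratio,
    using the common single-ended calibration form
        T(z) = gamma / (ln(P_S / P_aS) + C - dalpha * z).
    gamma, C and dalpha must be determined from reference sections or baths.
    """
    return gamma / (np.log(P_stokes / P_antistokes) + C - dalpha * z)

# Illustrative use with assumed calibration constants
z = np.arange(0.0, 100.0, 0.125)        # 0.125 m sampling along the cable
P_s = np.full_like(z, 1.00)             # placeholder Stokes powers
P_as = np.full_like(z, 0.80)            # placeholder anti-Stokes powers
T = dts_temperature(P_s, P_as, z, gamma=482.0, C=0.25, dalpha=2.0e-5)
print(T[:3])
```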
The aFO-DTS method to measure soil moisture content uses FO-DTS to measure the thermal response of a section of the fiber-optic cable to a controlled heat pulse [14]. In the vadose zone, soil, water, and air have distinct thermal conductivities and heat capacities. If a fiber-optic cable is buried in soil and a heat pulse is created by applying a known electrical current across the steel outer core of the cable, the resulting thermal response (measured with the FO-DTS system) will be a function of the proportions of soil particles, water, and air. Assuming the soil does not change, then the thermal response at a given location is controlled by the amount of water present. As described below, with calibration, it is possible to use aFO-DTS to measure soil moisture [9].
With current off-the-shelf, field-ruggedized technology, aFO-DTS has a minimum temperature measurement spacing of 0.125 m along a fiber-optic cable buried in soil, with measurements every 1 s. This measurement density and sampling interval are possible for cables up to 10 km in length. The measurement of water content with aFO-DTS has great potential, but calibration across field-scale variable soil conditions is difficult [6,9,14,19,20].
In an applied field-scale deployment of aFO-DTS, soil saturation is observed simultaneously at all locations along the fiber-optic cable for each heat pulse event. The change in thermal response during a heat pulse at each location is not only a function of the variability in soil saturation but also of the spatial heterogeneity of the bulk soil properties, the recharge distribution and duration, and the effect of the heat pulses themselves on the soil, such as the duration of heating. In subsurface hydrology, recharge events may cause rapid increases in soil moisture content as water from precipitation, snowmelt, or irrigation infiltrates the subsurface and flows downward. Thus, the calibration protocol relating the measured thermal response to soil water content must reflect the particular conditions at each location along the buried cable. This is challenging because of the limited information available on the spatial variability of soil thermal and physical properties in field-scale experiments [1].
Dong et al. [21] provide a strategy for measuring soil moisture with FO-DTS alone. They use an adaptive particle batch smoothing algorithm in conjunction with a numerical model to assimilate FO-DTS observations of diurnal soil temperature fluctuations and calculate soil moisture content. Their study assesses the variability of soil thermal properties at the scale needed for effective distributed calibration, though the complexity of their approach limits easy application in applied settings.
Soil moisture content values are reported in numerous ways, largely as a function of the technical field. Broadly, agronomists and soil scientists often use volumetric or gravimetric water content, while soil mechanists and hydrogeologists use saturation or degree of saturation. Given soil density and porosity, one can convert between metrics. In the analysis presented herein, we use saturation, which is defined as the fraction of soil pore space occupied by water (i.e., a saturation of 1 indicates the pore space is filled with water and there is no air) [22].
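Because these conversions come up repeatedly when comparing studies, a small sketch is given below; the bulk density and porosity values are illustrative assumptions, not the values used later in this study.

```python
def volumetric_from_gravimetric(w, bulk_density=1600.0, water_density=1000.0):
    """Volumetric water content (m3/m3) from gravimetric water content (kg/kg)."""
    return w * bulk_density / water_density

def saturation_from_volumetric(theta_v, porosity=0.35):
    """Degree of saturation (-): fraction of pore space occupied by water."""
    return theta_v / porosity

# Example: a gravimetric water content of 0.10 kg/kg in a sand
theta = volumetric_from_gravimetric(0.10)        # -> 0.16 m3/m3
print(theta, saturation_from_volumetric(theta))  # saturation of about 0.46
```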
For a field-based aFO-DTS protocol, factors, such as bulk density, mineralogy, organic matter content, and initial temperature, contribute to the spatial variability of heat transport in soils [23]. For example, with an increase in soil compaction, porosity decreases, and as a result, particle contact increases [24]. The increase in particle contact is important in mineral soils where grains have a higher thermal conductivity than water and air. Heat will preferentially conduct across the connected mineral grains instead of the more insulating liquid and gas mediums. Soil composition also contributes to the variability of thermal conductivity. To isolate the effect of permeability and recharge on heat transfer, the research described below considers sand of uniform mineralogy and density.
For aFO-DTS, the thermal response of soil to pulse heating can be expressed as a cumulative temperature increase, T_cum (s °C) [14]:

T_cum = ∫_0^{t_0} ΔT dt,

where t_0 (s) is the duration of the heat pulse integration period and ΔT (°C) is the temperature change with respect to ambient conditions. Because T_cum is used as a proxy for thermal conductivity, all factors that affect thermal conductivity are expected to affect T_cum.
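As a minimal illustration, T_cum can be computed from a measured temperature time series by numerically integrating the temperature rise over the heat pulse window; the sampling interval, array names, and the synthetic warming curve below are assumptions for illustration.

```python
import numpy as np

def t_cum(temps, dt=1.0, ambient=None):
    """
    Cumulative temperature increase (s*degC) over a heat pulse.
    temps   : temperatures recorded during the heat pulse (degC), one per time step
    dt      : sampling interval (s); the DTS used here records every 1 s
    ambient : ambient temperature before the pulse; if not given, the value
              1 s before the pulse start (first sample) is used
    """
    temps = np.asarray(temps, dtype=float)
    if ambient is None:
        ambient = temps[0]
    delta_t = temps - ambient
    return np.trapz(delta_t, dx=dt)   # trapezoidal approximation of the integral

# Example: a 120 s (2 min) pulse sampled every second
pulse = 20.0 + 5.0 * (1 - np.exp(-np.arange(120) / 30.0))  # synthetic warming curve
print(f"T_cum = {t_cum(pulse):.1f} s*degC")
```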
In field deployments of aFO-DTS, a calibration protocol relating T_cum to soil saturation should ideally account for the spatial variability of soil thermal properties, which is often non-linear. There are several ways to calibrate the relationship between thermal properties and soil saturation for a given soil:
• Model calibration curves based on field samples with different soil water contents. For example, Benítez-Buelga et al. [19] collected undisturbed field samples and measured the thermal properties of the samples under varying soil saturation conditions to create a calibration function. The calibration function was used in a heat transport model to generate another calibration function relating T_cum to soil saturation for the specific soil in the study.
• Field-generated calibration curves based on soil saturation probe data. For example, Cao et al. [25] generated calibration curves relating the thermal response of an aFO-DTS experiment to soil saturation measured by soil saturation probes installed next to defined sections of the heated fiber-optic cable.
• Laboratory-generated curves based on soil columns. For example, Wu et al. [26] constructed a soil column with an integrated fiber-optic cable. The water table in the column was controlled to impose different soil saturation conditions inside the column. The calibration curve relating T_cum to soil saturation was obtained by fitting a curve to the collocated T_cum-soil saturation measurements.
The calibration protocols described above can be challenging to apply when there is large variability in the background soil thermal properties. Variability in thermal conductivity can influence the relationship between T cum and soil saturation, thus affecting the accuracy of the protocol. In natural and heterogeneous environments, it is often impractical to apply these calibration methods given the wide range of conditions and material properties. Observations from different locations are required to cover the range of spatial variability of soil thermal properties and, even when the range of variability is known, there is little literature detailing the potential errors in assigning aFO-DTS measurements to a particular calibration function.
The research objective of our study is to use a thermo-hydrogeologic numerical model to evaluate potential errors in aFO-DTS measurements of soil moisture content. This study is not intended to present a comprehensive model of aFO-DTS but rather aims to identify and test common assumptions in the heat pulse protocol and to analyze the potential errors in soil moisture calculations in field-based experiments. Using an analysis-based approach, we simulate a base scenario model with uniform parameters. We then vary parameters to test the sensitivity of commonly used assumptions, scenarios, and protocols used in field-based aFO-DTS soil saturation studies. Common untested assumptions, such as the effect of ambient soil saturation, amount of recharge, length of active heating, and the distribution of heterogeneities, are tested.
Numerical Thermo-Hydrogeologic Model
The numerical model code used in this research is SUTRA, a finite element model developed by the U.S. Geological Survey that simulates saturated-unsaturated groundwater flow with energy transport [27]. SUTRA uses the Richards Equation to simulate unsaturated porewater flow coupled with conductive-advective energy transport. The model includes temperature-dependent fluid density and the effects of soil saturation on the subsurface hydraulic and thermal properties but does not simulate vapor or air flow. See Voss and Provost [27] for a detailed description of SUTRA's governing equations. The soil saturation is calculated from the modelled pressure at each time step using the van Genuchten function [28].
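For reference, a minimal sketch of the van Genuchten pressure-saturation relationship used to convert modelled pressure to saturation is given below; the parameter values (alpha, n) are illustrative assumptions, not the Table 1 entries.

```python
import numpy as np

def van_genuchten_saturation(pressure_pa, alpha=1.4e-4, n=2.7, s_res=0.045):
    """
    Total saturation from pore pressure via the van Genuchten model:
        Se = [1 + (alpha*|psi|)^n]^(-m),  m = 1 - 1/n,
        S  = s_res + Se * (1 - s_res).
    alpha [1/Pa], n [-] are illustrative soil parameters; s_res is the
    residual saturation quoted in the paper (0.045).
    """
    m = 1.0 - 1.0 / n
    psi = np.maximum(-np.asarray(pressure_pa, dtype=float), 0.0)  # suction, Pa
    se = (1.0 + (alpha * psi) ** n) ** (-m)
    return s_res + se * (1.0 - s_res)

# Example: the bottom boundary pressure used in the model
print(van_genuchten_saturation(-65000.0))   # low saturation, near the residual value
```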
Modifications Made to the Model
To adapt the model to simulate aFO-DTS, the SUTRA code was modified in two ways. First, SUTRA calculates the subsurface bulk thermal conductivity as a weighted arithmetic mean of the thermal conductivities of the constituents of the porous matrix (i.e., soil particles and water), but not the air phase. However, the arithmetic mean is not considered a correct estimate of soil bulk thermal conductivity [39], and ignoring the air phase may amplify these inaccuracies [40]. The SUTRA bulk thermal conductivity equation was therefore modified to include the air phase through a weighted harmonic mean [39]:

K = [ εS_L/K_L + ε(1 − S_L)/K_A + (1 − ε)/K_S ]⁻¹,

where ε is porosity, S_L is water saturation, and K_L, K_A, and K_S are the thermal conductivities of water, the air phase, and soil particles, respectively. The SUTRA code was also modified so that the energy source (active heating) is applied to all mesh nodes of the steel core of the fiber-optic cable, cumulatively adding energy to the domain over a set period.
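A short sketch of this weighted harmonic mean computation is given below; the thermal conductivity defaults are typical literature values and are assumptions, not the Table 1 entries.

```python
def bulk_thermal_conductivity(porosity, saturation,
                              k_water=0.6, k_air=0.025, k_solid=7.0):
    """
    Bulk thermal conductivity (W/m/K) as a volume-fraction-weighted harmonic
    mean of the water, air and solid phases, as in the modified SUTRA scheme.
    Default conductivities are typical literature values (assumed).
    """
    f_water = porosity * saturation
    f_air = porosity * (1.0 - saturation)
    f_solid = 1.0 - porosity
    return 1.0 / (f_water / k_water + f_air / k_air + f_solid / k_solid)

# Dry versus saturated sand with a porosity of 0.35
print(bulk_thermal_conductivity(0.35, 0.05))   # near-residual saturation
print(bulk_thermal_conductivity(0.35, 1.00))   # fully saturated
```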
Domain and Mesh
The parameters and domain for the model are based on field and laboratory experiments [6,26,41]. For the field experiment, aFO-DTS was used to measure soil moisture in a constructed sand unit. This field experiment included careful measurement of many of the parameters required for calculating soil moisture from aFO-DTS and provides a reasonable starting point for the numerical modeling. In the numerical model, physical properties of the sand and cable are also based on the laboratory experiment.
The model domain is a two-dimensional cross-section containing a simulated fiber-optic cable buried in homogeneous, isotropic sand. The base material properties used in the simulations are listed in Table 1. The model domain is 0.10 m × 0.10 m, and the mesh has 0.004 m × 0.004 m element spacing in the outer bands and 0.001 m × 0.001 m element spacing in the inner bands (Figure 1). Although fiber-optic cables are round, the rectangular representation in the model domain does not affect heat transfer during the simulations, as energy spreads radially at the sand-cable interface. The model domain represents a homogeneous, medium-grained sand surrounding the fiber-optic cable located at the center of the domain. The fiber-optic cable is represented by a 0.01 m × 0.01 m steel core surrounded by a 0.001 m thick plastic sheathing. Each medium (sand, plastic, and steel) has its own set of hydraulic and thermal properties (Table 1). The permeability of the steel core and plastic sheath has been set to be effectively impermeable (10⁻⁹⁰ m²) to prevent water from flowing through the cable. The optical fiber itself is omitted from the simulations, as the thermal effect of the thin glass fiber at the center of the steel core is assumed to be negligible.
Boundary Conditions
The vertical sides of the model domain are no-flow boundaries. The top hydraulic boundary condition allows water to flow into the model domain through a time-dependent specified recharge boundary condition (Figure 1). This boundary allows water to enter the domain at specified periods during the model simulations, with varying rates and durations of recharge. The hydraulic boundary condition across the bottom of the domain is a specified pressure of −65,000 Pa, which corresponds to the pressure at which residual saturation (0.045) is reached (Figure 1). This boundary condition allows water to exit the model to prevent pooling. The simplicity of this "drain" could induce unintended upward flow of water from the bottom of the model domain; however, given the model setup, the distance of the drain from the observation area (adjacent to the cable) makes any such effect negligible. The model timestep is one second, with an active heat pulse period of either 15 or 2 min. The energy source is applied to the cable steel core at 10 W.
All four model edge boundaries are isothermal, with no heat being conducted into or out of the model. Energy may enter the model through the top boundary via inflowing water during recharge. Along with the heat pulse in the cable's steel core, this is the only input of heat to the model domain. The temperature of the inflowing water is 20 • C, except when stated. Water being discharged at the bottom model boundary represents the only heat output.
Calibration Curves
To calculate soil moisture with the T cum method, a calibration curve relating cumulative change in temperature to soil moisture conditions is required [20]. We use the numerical model to develop a synthetic calibration curve. Static water conditions, where gravity is set to 0 m/s 2 and movement of water is negligible, are simulated with the default parameters listed in Table 1. The parameters, in theory, represent the ideal conditions for measuring a calibration curve due to the absence of recharge and groundwater flow. The static water cases are used to obtain T cum values corresponding to each specific soil saturation condition for 10% soil moisture increments, from 10% to 100% saturation. The temperature used to calculate T cum is observed at the center of the steel core where the fiber-optic cable is located. Soil saturation is recorded at a node in the sand that is 0.004 m to the left of the modeled cable ( Figure 1). This node can be conceptualized as a point source moisture probe.
T_cum is obtained by integrating ΔT over the time interval of the heat pulse at the observation node ([14], Equation (1)). The initial temperature condition is that of 1 s before the start of the heat pulse. Several studies suggest that averaging over several minutes prior to the start of the heat pulse would produce a more accurate value of the ambient temperature. However, Wu et al. [6] noted that this suggestion is impractical for repeated heat pulse cycles because temperature fluctuations following a heat pulse may persist for more than 1 h, depending on the soil and heat pulse properties.
The resulting relationship between T_cum and soil moisture following a heat pulse is nonlinear. The literature offers many suggestions for calculating soil saturation from a heat pulse, depending on the specific experimental design and soil properties. For our default scenario, we find that a cubic function fitted by the least squares method is the most effective way to calculate soil saturation from T_cum.
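A minimal sketch of this calibration step is shown below: a cubic polynomial is fitted by least squares to collocated T_cum-saturation pairs, and the fitted curve is then evaluated to estimate saturation from new T_cum measurements. The numeric values are placeholders, not the study's calibration data.

```python
import numpy as np

# Collocated calibration data (placeholders): saturation in 10% increments
saturation = np.linspace(0.1, 1.0, 10)
t_cum_cal = np.array([5200, 4586, 4100, 3700, 3380, 3120, 2900, 2700, 2520, 2350],
                     dtype=float)   # illustrative s*degC values, decreasing with wetness

# Fit saturation as a cubic function of T_cum (least squares)
coeffs = np.polyfit(t_cum_cal, saturation, deg=3)
calibration = np.poly1d(coeffs)

# Apply the calibration to a new measurement
t_cum_new = 3000.0
s_est = float(np.clip(calibration(t_cum_new), 0.0, 1.0))
print(f"Estimated saturation: {s_est:.2f}")
```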
To compare the accuracy of the static calibration curve, two additional calibration curves are considered for scenarios with recharge, and are generated from simulations following the same conditions, with three exceptions:
1. The initial pressure of these simulations is set to −65,000 Pa.
2. Gravity is set to 9.81 m/s² to allow vertical flow.
3. The top hydraulic boundary condition was changed to a constant specified recharge for the 15 min or 2 min heat pulses. As described below, the recharge rates were calculated to provide the model with enough water to reach the total saturation levels tested previously in the static simulations (10% increments).
Protocol Evaluation
The performance of the aFO-DTS protocol in calculating soil moisture content is evaluated against the observation node with respect to soil saturation and time.
In the results and figures, the Saturation Offset is the difference between the aFO-DTS calculated results and the model-simulated values recorded at the observation node. The Real Saturation is the saturation recorded at the observation node for a given simulation, unless otherwise indicated.
The Nash-Sutcliffe Efficiency (NSE; [42]) and the coefficient of determination (R²) are used as performance metrics. The NSE is a useful metric for assessing the quality of time series in hydrological models by analyzing the protocol's ability to predict along a 1:1 comparison line. The NSE is calculated as [42]:

NSE = 1 − Σ(θ_obs − θ_calc)² / Σ(θ_obs − θ̄_obs)²,

where θ_obs is the modelled soil saturation at the observation node, θ_calc is the soil saturation calculated by the protocol from T_cum, and θ̄_obs is the mean of the soil saturation time series from the observation node. A value of 1 indicates no variance across the 1:1 line of the time series, i.e., the protocol perfectly reproduces the modelled soil saturation at the sand interface. Conversely, a value of 0 indicates that the protocol predictions are no better than using the mean of the modelled time series.
Model Scenarios
Model scenarios with different conditions and calibration curves were used to evaluate how these parameters affect model sensitivity (Table 2). The default model parameterization (Table 1) is used to build the initial calibration curve, which represents the relationship between T_cum and soil saturation for static water conditions. Gravity is 0 m/s², pressure is held constant, and all sides of the model are set as no-flow boundaries to prevent flow. In these scenarios, the hydraulic pressure in the model domain is homogeneous, constant, and set to the value corresponding to the target soil saturation as calculated from the van Genuchten function (e.g., −65,000 Pa for a residual saturation of 0.045). The initial temperature is 20 °C everywhere. The default heat pulse duration is 15 min at 10 W/m (the model is 1 m thick). T_cum is calculated from the resulting heat pulse measured in the fiber-optic steel core, and a calibration curve is derived using the soil saturation at the observation node. Simulations were made from residual to full saturation in 10% increments. The changes and errors associated with the protocol calculation are assumed to depend only on soil saturation and the thermal properties of water in the sand pores. To test the accuracy of the aFO-DTS protocol, different model scenarios with varying input parameters are simulated using the same no-flow (static) conditions described above (Section 2.6.1). The initial temperature in the default scenario is 20 °C. To test the effect of bulk temperatures on protocol performance, initial uniform temperatures of 10 °C and 5 °C are simulated. There are no cases in the literature of using aFO-DTS at or below freezing temperatures, as the active nature of the protocol renders measurements in the presence of frozen ground impractical. The heat generated by the fiber-optic cable would both melt pore ice and change the bulk thermal properties of the soil [38].
An aFO-DTS methodological assumption is that the change in temperature during the heat pulse is not affected by the antecedent temperature. Wu et al. (2020) observed that temperatures do not fully return to ambient conditions between successive heat pulses. A 24-h test of 15-min heat pulses every hour is used to assess the error associated with this assumption, and this test is simulated for every 10% saturation increment.
5 mm to 30 mm Recharge Events-15 min Flowing Water Calibration
For field settings in which the aFO-DTS method is calibrated with field sensors, the potential effect of flowing water on the accuracy of the calibration has not previously been systematically evaluated. Using the previously measured static water calibration, simulations are used to evaluate how flowing water affects aFO-DTS measurements. Gravity is 9.81 m/s². A new calibration curve replaces the static calibration based on the general heat pulse curve characteristics observed. The saturation calculated by the protocol is now tested against the observation node in the sand adjacent to the cable. The heat pulses are 15 min every hour for 24 h in initially dry (residual soil saturation) conditions, and an NSE is reported for the entire time series. Recharge into the top of the model begins after the first hour, with a one-hour duration. The protocol is tested with cumulative 5 mm recharge increments (from 5 mm to 30 mm of recharge).
5 mm to 30 mm Recharge Events-2 min Flowing Water Calibration
To test if the duration of the heat pulse affects accuracy, a shorter, 2 min heat pulse period is compared with the previous 15-min integration period using the same recharge values and parameters of the previous simulation.
2.6.5. 5 mm to 30 mm Recharge Events-2 min, 20 mm/hr Flowing Water Calibration
We evaluate a calibration developed using values from the 20 mm recharge test and compare its accuracy to that of the 15 min and 2 min initial calibrations. The purpose of these simulations is to evaluate whether measurement accuracy increases when the calibration curve is measured for a specific recharge rate. The recharge rate of 20 mm/hr is chosen because it lies at the midpoint between the lowest and highest rates tested, 5 mm/hr and 30 mm/hr, respectively. All other parameters remain the same as in the 2 min and 15 min calibration simulations.
Varying Recharge Duration and Soil Heterogeneity
With the 2 min, 20 mm/hr flowing water calibration curve (Section 2.6.5), the accuracy associated with varying recharge duration is evaluated for events of 20 min, 40 min, 80 min, and 100 min. The purpose of these simulations is to measure the accuracy of the protocol using a specified recharge rate calibration when the recharge length is not 1 h. While the recharge rate remains the same, the total amount of water, and thus the velocity of the wetting front, is different from the one-hour tests.
Soil Heterogeneity
The effect of heterogeneity in the soil matrix is evaluated using the Section 2.6.5 calibration with different scenarios. First, the sand domain's permeability is adjusted to a Gaussian distribution with extremes two orders of magnitude below and above the default permeability value of 10⁻¹² m². To reduce the impact of local heterogeneity at the point of measurement, the soil saturation measurements are compared at three observation nodes, located above, to the left, and to the right of the cable. The left observation node is shown in Figure 1, and the upper and right observation nodes are also 0.002 m from the edge of the cable.
Macropores are large, vertical openings in the soil, effectively acting as pipes that quickly route water through the subsurface [43]. A vertical macropore is added to the domain with a permeability 2, 4, and 6 orders of magnitude higher than that of the default sand. Additionally, scenarios in which the permeability of the same area is decreased by three orders of magnitude are also tested, to represent a potentially compacted layer surrounding the cable. The porosity of the macropore is 0.90. The observation node stays within the sand layer at default parameters, outside of the macropore.
During burial of a fiber-optic cable, the physical soil texture is altered and the permeability around the cable differs from the surrounding conditions. To reduce soil disturbance effects, aFO-DTS data acquisition is often initiated weeks after cable burial to allow time for the soil structure to return to its original state. In some cases, vibratory presses are used to accelerate this process [9]. Nevertheless, soil porosity and permeability structure may be altered by the installation of the fiber-optic cable. A lower permeability around the cable is possible following compaction, or the inverse may occur without subsequent compaction. Further, repeated heating cycles can also change the contact between the cable and soil, and thus the permeability around the fiber-optic cable [24]. To test the effect of compaction around the cable, we simulate a 0.0002 m thick zone surrounding the cable with a permeability 2, 4, and 6 orders of magnitude lower and higher than the default value of 10⁻¹² m².
Natural soils have different layers or horizons with different physical properties. Three additional scenarios evaluate how a low-permeability layer would affect the aFO-DTS results. These scenarios are simulated with a 0.01 m thick horizontal low-permeability layer located 0.01 m above the cable, continuous across the entire model domain (i.e., from the left vertical boundary to the right vertical boundary). Three cases are evaluated with a small 0.001 m wide opening with the default permeability, located 0.02 m to either side of the cable and directly above the cable. The permeability of the low-permeability zone (10⁻¹⁴ m²) is two orders of magnitude lower than the default value of 10⁻¹² m².
Static Simulations
The static water cases are set at predetermined soil saturation levels, with water remaining stationary throughout the simulations (see Methodology and Table 2). Using a cubic function to calculate the T cum to soil saturation calibration curve yields an R 2 of 0.99 between the simulated and calculated soil moisture values.
The effect on the accuracy of the aFO-DTS protocol of water colder than the antecedent ambient temperature of the bulk medium is negligible (Figure 2). The drop in water temperature to 10 °C and 5 °C from an initial 20 °C produced a change in the soil saturation value of ±0.01%, and the R² remains 0.99.
The effect of 24 cycles of 15 min heat pulses at one-hour intervals results in an R² of 0.99 when comparing the soil saturation at the observation node to the calculated protocol value at the end of the 24th cycle. The saturation offset (i.e., the difference between the aFO-DTS calculated value and the observation node value) is within ±4% for saturations at or above 40% (Figure 3). Below 40% saturation, the aFO-DTS protocol has a saturation bias greater than +4%, and the offset is 13.4% at residual saturation levels. This suggests that repeated heating cycles at low saturation may be a potential source of error. The error in the saturation calculation initially increases with each heat pulse but reaches a plateau by the 4th cycle. A shorter integration period may reduce the offset at lower saturations.
Simulations with 15 Min Calibration and Recharge
These simulations have specified vertical recharge rates. Simulated recharge rates vary from 5 mm/hr to 30 mm/hr to characterize aFO-DTS errors across a broad spectrum of recharge intensities (Figure 4). The recharge period is one hour and starts at the beginning of hour two of the simulation. The saturation offset is calculated as the difference between these simulations and the values given by the 15-min static calibration curve (see Section 2.6.1). The resulting saturation offset is much larger than for the static calibration cases. The NSE is 0.57 for 30 mm of recharge and 0.07 for 5 mm. The largest offset occurs during the second hour of the test, when recharge is actively being applied to the model domain. The offset is 59% and 39% for 5 mm/hr and 20 mm/hr of recharge, respectively. By the fourth hour, the offset in the 20 mm/hr test is reduced to below 12% and the offset in the 5 mm/hr test is below 28%.
The large offset is a result of the calibration method. Tcum is measured during a 15-min period, during which time it is assumed that saturation levels remain constant. With the introduction of flowing pore water, saturation changes throughout the heat pulse measurement period. Heat transfer not only increases through the wetting front, but additional cooler recharge water following the wetting front removes additional heat, which is not experienced in the static simulations. The calibration curve is also more sensitive to the length of the heat pulse at lower water contents because of its shape, where smaller differences in Tcum at low saturations correspond to larger changes in soil saturation. For example, the difference in Tcum between 10% and 20% saturation is 614 s·°C, whereas between 90% and 100% saturation it is 1269 s·°C.
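To make the calibration relationship concrete, the following minimal sketch fits a cubic curve mapping Tcum to saturation, mirroring the cubic form used for the calibration in this study; the couplet values, array lengths, and function names are illustrative assumptions rather than outputs of the simulations.

```python
import numpy as np

# Hypothetical calibration couplets: (Tcum in s*degC, saturation fraction).
# Note the larger Tcum differences at high saturation, as described above.
tcum_cal = np.array([5400.0, 5100.0, 4750.0, 4200.0, 3400.0, 2300.0])
sat_cal = np.array([0.10, 0.25, 0.40, 0.60, 0.80, 1.00])

# Cubic fit: saturation expressed as a third-order polynomial in Tcum.
calibration = np.poly1d(np.polyfit(tcum_cal, sat_cal, deg=3))

def tcum_to_saturation(tcum):
    """Convert a measured Tcum to an estimated saturation, clipped to [0, 1]."""
    return float(np.clip(calibration(tcum), 0.0, 1.0))

# Example: one heat-pulse integral from an aFO-DTS trace.
print(tcum_to_saturation(4500.0))
```

In practice the couplets would be taken from the static or flow-based calibration runs described in this section.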
Simulations with 2 Min Calibrations
To improve the accuracy of the aFO-DTS calibration, a shorter integration period is evaluated. The results improve marginally over the 15-min tests (Figure 5). Using a 2-min heat pulse resulted in a maximum offset of 41% for a 5 mm/hr recharge period, a decrease of 18% compared to the corresponding 15-min calibration case. The NSE is 0.63 for 30 mm/hr of recharge and 0.57 for 5 mm/hr. To account for the effect of flowing water removing excess heat, a new calibration curve was constructed, arbitrarily using a 20 mm/hr recharge event, for the 2-min flow calibration. This flow-based calibration curve results in an NSE of 0.99.
Figure 5. Comparison of three calibration curves: 15 min heat pulses with recharge rates of 5 to 30 mm/hr, 2 min heat pulses with 5 to 30 mm/hr of recharge, and 2 min heat pulses with 20 mm/hr recharge. The first and second calibrations incorporate a range of recharge values (i.e., 5 to 30 mm/hr) in the training set, while the third uses only the 20 mm/hr recharge data. All three calibration methods are tested using a full range of recharge scenarios from 5 to 30 mm/hr. Real Saturation refers to measurements at the observation node.
There is better agreement at all recharge rates using the 2 min 20 mm/hr calibration curve than for the 2 min calibration, with the lowest agreement having an NSE of 0.74 at 5 mm/hr (Figure 6). We interpret these results to indicate that for a field test, specific calibration curves should be measured for the range of expected recharge. The accuracy of the calibration can decrease if the range of recharge rates is high. In our model, a 50% change in recharge rate can cause the accuracy of the calibration to be offset by 25%. Further, the calibration accuracy is lower at higher saturation, i.e., the aFO-DTS calibration underestimates saturation at lower recharge rates and overestimates at higher recharge rates.
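The NSE values quoted throughout this section follow the standard Nash-Sutcliffe efficiency; a minimal sketch of the metric as applied to a saturation time series is shown below (the example series are made up for illustration).

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    A value of 1.0 is a perfect match; 0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Illustrative saturation time series (observation node vs. aFO-DTS estimate).
obs = np.array([0.35, 0.62, 0.80, 0.55, 0.42, 0.38])
sim = np.array([0.33, 0.70, 0.74, 0.60, 0.45, 0.37])
print(round(nse(obs, sim), 2))
```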
Simulations with Variable Recharge and Heterogeneous Soil Conditions
The 2 min-20 mm/hr calibration curve reduces the saturation offset but may present accuracy challenges for different recharge durations. The length of recharge in the previous tests is one hour and occurs during the second hour of the 24 h simulation. We examine the impact of the duration of the recharge event and find that the accuracy decreases when the recharge duration is not one hour. By lowering the recharge time to 40 min and to 20 min, the NSE is 0.90 and 0.79, respectively (Figure 7). Similarly, when increasing the recharge duration to 80 and 100 min, the NSE becomes 0.91 and 0.67 (Figure 7). The calibration tends to overestimate saturation for shorter recharge durations and underestimate for longer durations. This phenomenon may be due to the velocity of the wetting front. Shorter or longer recharge events will lead to different front velocities than the default one-hour duration, resulting in changes to the rate at which flowing water removes heat from the medium surrounding the cable.
Simulations with Spatially Variable Permeability
Simulations with spatially variable permeability were used to test the effect of heterogeneity. The spatial distribution of permeability heterogeneity follows a Gaussian distribution spanning two orders of magnitude around the default permeability value. Three observation nodes are placed to the left, right, and top of the cable 0.02 m away from the outer edge of the cable. The aFO-DTS method overestimated the saturation compared to that measured by the observation node in all cases. However, the NSE is 0.99 compared to the average areal saturation of the sand from the three observation nodes. This suggests that calibration and validation with in-situ probes in heterogeneous soils may be problematic if the distribution of the soil is spatially biased, highly heterogeneous, or poorly sorted. A laboratory calibration protocol may be needed in such instances because high variability in testing parameters will need specialized calibration tailored to the soil.
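As a concrete illustration of this heterogeneity setup, the sketch below draws a spatially uncorrelated permeability field whose log10 values are normally distributed about a default permeability; the default value, standard deviation, and grid size are assumptions for illustration, not the parameters of the SUTRA model.

```python
import numpy as np

rng = np.random.default_rng(0)
k_default = 5e-11        # m^2; illustrative default permeability, not the paper's value
sigma_log10 = 0.5        # +/- 2 sigma spans roughly two orders of magnitude

# 50 x 50 field of log10-permeability values centred on the default value.
log10_k = np.log10(k_default) + sigma_log10 * rng.standard_normal((50, 50))
k_field = 10.0 ** log10_k

print(k_field.min(), k_field.max())
```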
We evaluated the effect of permeability changes in the region immediately surrounding the cable. The results for higher permeabilities yield an NSE of 0.64, 0.60, and 0.60, respectively ( Figure 8). The lower permeability range yielded an NSE of 0.46, 0, and 0. This is an important factor to consider as our model shows that a two orders of magnitude difference in permeability can decrease the accuracy of a protocol by half. Therefore, care should be taken to minimize disturbances to the soil during burial and reduce excessive heating in long-term field tests.
Figure 8. The effect on aFO-DTS accuracy due to permeability values changing in the region surrounding the fiber-optic cable relative to the saturation measured by the observation node with the default permeability conditions. These simulations test the effect of variability in soil permeability rather than the protocol's ability to account for changes in permeability relative to the observation node's location.
Three additional simulations evaluated the effect of a low permeability horizontal layer above the cable with a gap or "hole" in the layer above, to the left, and to the right of the cable. The NSE is 0.77, 0.75, and 0.46 in the center, left, and right tests, respectively (Figure 9). Note that the observation node is left of the cable in all three tests. The similar NSE values between the above and left cases are expected due to the proximity between the observation node and the cable. The third simulation diverts water on the right side of the model, where the cable is between the observation node and draining water, shielding the observation node from the water resulting in lower accuracy of aFO-DTS calculation during the time-series.
Conclusions
A numerical model of an aFO-DTS system was developed to evaluate and understand potential errors with this emerging sensor for measuring soil moisture content. Using the model, we evaluate how different calibration approaches, recharge rates, and soil characteristics may affect aFO-DTS results. In summary, our research provides new findings, including an improved understanding of the importance and errors associated with calibration methods for aFO-DTS, an assessment of the limitations of the method due to variable recharge rates, and an understanding of how soil heterogeneity may affect results, including soil disturbance during cable burial.
We find that the calibration method used for aFO-DTS is critical for producing robust results. We employed a simple cubic function to relate model saturation to Tcum with a high degree of accuracy. However, when model parameters change, we observe decreases in accuracy. We find that using static water content to develop the calibration curve, such as would be developed in a laboratory setting, may lead to erroneous results in situations with flowing porewater, such as during recharge events. A calibration curve developed for a site-specific soil, preferably measured in situ, provides more accurate results. In principle, calibration curves for similar soils should have a similar characteristic shape that is scalable with a few measured Tcum-soil moisture couplets. Changing the protocol parameters, such as the duration of active heating and the number of heating cycles, decreased the accuracy of the protocol.
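One possible realization of the "scalable characteristic shape" idea is an affine rescaling of a reference Tcum-saturation curve to a few in-situ couplets; the sketch below reflects our illustrative reading of that suggestion and is not the authors' prescribed procedure (the reference curve and couplet values are invented).

```python
import numpy as np

def tcum_reference(sat):
    """Hypothetical characteristic Tcum(saturation) shape for a similar soil."""
    return 5500.0 - 3800.0 * sat + 900.0 * sat ** 2

# A few in-situ couplets (saturation, measured Tcum); values are placeholders.
sat_field = np.array([0.2, 0.5, 0.9])
tcum_field = np.array([4600.0, 3500.0, 2400.0])

# Least-squares affine scaling: Tcum_site ~= a * Tcum_reference + b.
A = np.column_stack([tcum_reference(sat_field), np.ones_like(sat_field)])
(a, b), *_ = np.linalg.lstsq(A, tcum_field, rcond=None)

def tcum_site(sat):
    """Site-specific calibration curve obtained by rescaling the reference shape."""
    return a * tcum_reference(sat) + b

print(round(tcum_site(0.7), 1))
```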
Heterogeneity in soils is difficult to account for in field-based aFO-DTS studies. Our results show that variable recharge rates and localized macropores are challenging to account for and can be a potential source of error in aFO-DTS measurements. The soil saturation values were most sensitive to a low permeability layer surrounding the fiber-optic cable, which in some cases reduced the NSE to 0. This case highlights the importance of having adequate contact between the cable and soil and allowing for an appropriate timeframe for soil regeneration following direct cable burial. While disturbances are likely to cause higher permeability adjacent to the cable, a lower permeability zone adjacent to the fiber-optic cable can be caused by hysteresis in fine-textured soils. Managing the aFO-DTS protocol by lowering the amount and duration of heat generated by repeated active heat pulses may reduce this problem.
Active FO-DTS measurements offer a very powerful tool to measure distributed soil moisture content. Heterogeneity is a fundamental challenge in subsurface hydrology that is difficult to overcome with isolated point measurements. With the ability to deploy kilometres of cable, aFO-DTS can provide unprecedented measurement capability. However, as our analysis shows, care must be taken in evaluating results. The fundamental challenge is that the method requires a valid calibration that may not always be applicable for varying recharge and soil conditions.
Data Availability Statement:
The results presented in this study use the public-domain SUTRA numerical code [27].
Co-expression of MDM2 and CDK4 in transformed human mesenchymal stem cells causes high-grade sarcoma with a dedifferentiated liposarcoma-like morphology
Amplification and overexpression of MDM2 and CDK4 are well-known diagnostic criteria for well-differentiated liposarcoma (WDLPS)/dedifferentiated liposarcoma (DDLPS). Although it was reported that the depletion of MDM2 or CDK4 decreased proliferation in DDLPS cell lines, whether MDM2 and CDK4 induce WDLPS/DDLPS tumorigenesis remains unclear. We examined whether MDM2 and/or CDK4 cause WDLPS/DDLPS, using two types of transformed human bone marrow stem cells (BMSCs), 2H and 5H, with five oncogenic hits (overexpression of hTERT, TP53 degradation, RB inactivation, c-MYC stabilization, and overexpression of HRASv12). In vitro functional experiments revealed that the co-overexpression of MDM2 and CDK4 plays a key role in tumorigenesis by increasing cell growth and migration and inhibiting adipogenic differentiation potency when compared with the sole expression of MDM2 or CDK4. Using mouse xenograft models, we found that the co-overexpression of MDM2 and CDK4 in 5H cells with five additional oncogenic mutations can cause proliferative sarcoma with a DDLPS-like morphology in vivo. Our results suggest that the co-overexpression of MDM2 and CDK4, along with multiple genetic factors, increases the tendency for high-grade sarcoma with a DDLPS-like morphology in transformed human BMSCs by accelerating their growth and migration and blocking their adipogenic potential.
Introduction
Liposarcoma (LPS) is one of the most frequently occurring types of soft tissue sarcoma in adults [1]. According to its histological characteristics, LPS consists of three categories: well-differentiated or dedifferentiated, myxoid/round cell, and pleomorphic LPSs [1]. Well-differentiated (WD) or dedifferentiated (DD) LPS is the most common subtype and is associated with supernumerary ring and/or giant rod chromosomes formed by the amplification of chromosome 12q13-15, which contains several hundred genes, including MDM2 and CDK4 [2]. Amplification and overexpression of MDM2 and CDK4 are generally accepted as the current diagnostic criteria for WDLPS/DDLPS [3-5].
MDM2 inhibits the tumor suppressor p53 and is overexpressed in numerous cancers [6]. MDM2 functions as a ubiquitin ligase that targets p53 through the proteasomal degradation pathway; it also undergoes autodegradation, which restrains MDM2 activity and its inhibition of p53 during periods of cellular stress [7,8]. CDK4 forms a complex with cyclin D, which then phosphorylates pRB. This prevents E2F from interacting with phosphorylated pRB, causing the cell cycle to progress through the G1-S transition and increasing cell proliferation [9-11]. Knockdown of MDM2 or CDK4 decreased cell proliferation in DDLPS cells [4]. Despite their potency as driving factors, whether MDM2 and CDK4 induce WDLPS/DDLPS tumorigenesis remains unclear.
Cell lines and reagents
Transformed 2H and 5H human bone marrow stem cells (BMSCs) and the LIPO-863B and LP6 cell lines were kindly provided by Dr. Pablo Menendez, Dina Lev, and Jonathan A Fletcher, respectively [17,23,24]. The cells were cultured in the Dulbecco's Modified Eagle Medium (DMEM) (Thermo Fisher Scientific, Waltham, MA, USA) containing 10% fetal bovine serum (FBS) (Gibco, Grand Island, NY, USA) and 1% antibiotic-antimycotic solution (Gibco) at 37°C and in 5% CO 2 conditions. Mycoplasma contamination was not detected in any of the cell cultures.
qRT-PCR
cDNA was generated from the total RNA using SuperScript III Transcriptase, according to the manufacturer's instructions (Invitrogen, Carlsbad, CA, USA). Quantitative reverse transcription (qRT)-polymerase chain reaction (PCR) amplification of stemness-or adipogenesis-related genes was performed using probes and primers with the Universal Probe Library System (Roche, Basel, Switzerland). For MDM2, the following primer pair was used: forward, (5′-ACCTCACAGATTCCAGCTTCG-3′); reverse, (5′-TTTCATAGTATAAGTGTCTTTTT-3′). HPRT1 was used as a reference gene, and the ratio of the expression of each gene to that of HPRT1 was calculated for the relative quantification of the expression level of each gene. To determine the mRNA levels of hTERT, E6, E7, small t, HRAS v12 , and TP53, qRT-PCR was performed using SYBR Green PCR Master Mix (Applied Biosystems) and specific primer sets (Supplementary Table 1).
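The relative quantification described above reports each gene as a ratio to the HPRT1 reference; a minimal sketch of one common way to compute such a ratio from Ct values (the 2^-ΔCt form) is shown below. The specific Ct values are illustrative, and the exact quantification formula used by the authors is not stated beyond the ratio to HPRT1.

```python
def relative_expression(ct_target, ct_hprt1):
    """Expression of a target gene relative to HPRT1 via 2^-(Ct_target - Ct_HPRT1)."""
    return 2.0 ** -(ct_target - ct_hprt1)

# Example: a target Ct of 24.1 vs an HPRT1 Ct of 26.0 in the same sample
print(round(relative_expression(24.1, 26.0), 2))   # ~3.73-fold of HPRT1
```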
Immunoblotting, immunocytochemistry, immunohistochemistry, and fluorescence-activated cell sorting (FACS)
Equal amounts of protein were subjected to SDS-PAGE on an 8.5% gel before being transferred to a nitrocellulose membrane (Pall Corporation). The membrane was incubated with primary anti-MDM2, CDK4, and β-actin (diluted 1:1000 in 5% nonfat milk, Santa Cruz Biotechnology) and FLAG (diluted 1:1000 in 5% nonfat milk, Sigma-Aldrich) antibodies, and then washed (for 30 min) with T-BST. The membrane was then incubated with horseradish peroxidase-conjugated secondary goat anti-rabbit or anti-mouse antibodies (diluted 1:2000 in 5% nonfat milk, Abcam) for 1 h, followed by 30 min of washing with T-BST. Signals were detected using ECL solution (Thermo Fisher Scientific). Four-micrometer-thick sections from formalin-fixed paraffin-embedded cell or tissue blocks were cut with a microtome and routinely deparaffinized. The antigen retrieval procedure was performed in 0.01 M citrate buffer (pH 6.0) at 95°C, and counterstaining was performed using hematoxylin. The anti-MDM2 (Invitrogen, IF2, 1:200 dilution), CDK4 (Invitrogen, DCS-31, 1:50 dilution), and KU80 (Cell Signaling; C48E7, 1:200 dilution) antibodies were used for immunocytochemical or immunohistochemical staining on the automated BenchMark XT platform (Ventana Medical Systems). The cells were washed with FACS buffer (PBS, 0.5% BSA, 0.1% NaN3 sodium azide) and stained with anti-CD34 and CD105 (BD Biosciences) antibodies. Isotype-matched FITC/PE-conjugated controls were also included with each set. The positive cells were analyzed by BD FACSVerse flow cytometry (BD Biosciences).
Generation of stable cell lines overexpressing MDM2 and/or CDK4
The full-length cDNAs of MDM2 or CDK4 were generated from a cDNA library of human BMSCs. The PCR products were cloned into the N-terminal p3XFLAG-CMV-10 vector (Sigma-Aldrich). We confirmed the full sequence of wildtype MDM2 and CDK4 by the Sanger sequencing method. Full-length 3XFLAG-MDM2 or 3XFLAG-CDK4 was cloned into the gateway entry vector pCR8/GW/Topo (Invitrogen), and then subcloned into pLenti6.3/V5-DEST (Invitrogen). Full-length sequences of 3XFLAG-MDM2 or 3XFLAG-CDK4 were validated by Sanger sequencing. pLenti6.3/3XFLAG-MDM2 or 3XFLAG-CDK4 expression vectors were transfected into 293FT cells using ViraPower Packaging Mix (Invitrogen) to produce lentiviruses expressing MDM2 or CDK4. After 48 h, lentiviral supernatants were harvested and transduced into 2H and 5H cells in the presence of 8 µg/mL of polybrene. The transduced cells were grown in DMEM complete medium for 48 h after infection, and then, the medium was replaced with medium containing blasticidin (5 µg/mL) after 24 h. The cells were then seeded into 96-well plates at a density of one cell/well in selective medium for 2 weeks. Live cell clones were checked using microscopy. These colonies were subcultured into 24-or 6-well plates. Stable expression of MDM2 or CDK4 was confirmed by qRT-PCR and immunoblotting.
Cell proliferation and migration assays and soft agar assay
The cell proliferation assay was performed using an EZ-CYTOX kit (Daeil Lab Service), according to the manufacturer's instructions. Cells were plated in 96-well plates (3 × 10 2 cells/well). The 96-well plates were incubated with EZ-CYTOX reagent for 3 h at 37°C after 1 and 2 days. Twenty-four-well transwell chambers (Corning Costar) with 8-μm polycarbonate membrane filters were used to determine the migration ability. For this assay, 5 × 10 4 cells were seeded into the upper chamber in the DMEM without FBS. The lower chamber contained 700 μL of the DMEM containing 10% FBS. The transwell chamber was incubated at 37°C in 5% CO 2 conditions. After 24 h of incubation, the non-migrating cells on the upper filter surface were removed with a cotton swab and the migrated cells were stained with 0.5% crystal violet. The cells were then seeded into 24-well plates with the appropriate concentrations of agarose (0.5% for base and 0.3% for top) to form colonies in 3 weeks. The colonies were stained with crystal violet (0.5% w/v) and counted using a microscope.
Adipogenic differentiation
BMSCs and the 2H and 5H cells were seeded onto a six-well plate in DMEM; the medium was replaced with adipogenic differentiation medium (StemPro Adipogenic Differentiation Kit, Invitrogen) every 3-4 days. After 21 days, the cells were stained with an Oil Red O staining kit (Lifeline) according to the manufacturer's instructions.
Mouse xenograft modeling
This study was reviewed and approved by the Institutional Animal Care and Use Committee of the Samsung Biomedical Research Institute (SBRI, Seoul, Korea). SBRI is an Association for the Assessment and Accreditation of Laboratory Animal Care International accredited facility and abides by the Institute of Laboratory Animal Resources guide (No. 20160108001). Female nude mice were injected subcutaneously with 2H, 5H, LIPO-863B, or LP6 (5 × 10 6 ) cells. After the indicated number of days, tumor diameter was measured using a digital caliper two or three times per week, and tumor sizes were estimated using the following formula: (3.14/6) (length × width 2 ).
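The tumor volume estimate quoted above follows the ellipsoid approximation (π/6) × length × width²; a minimal sketch, with caliper measurements assumed to be in millimeters:

```python
import math

def tumor_volume_mm3(length_mm, width_mm):
    """Estimated tumor volume: (pi/6) * length * width^2, measurements in mm."""
    return (math.pi / 6.0) * length_mm * width_mm ** 2

# Example: a 12 mm x 8 mm subcutaneous tumor
print(round(tumor_volume_mm3(12.0, 8.0), 1))   # ~402.1 mm^3
```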
Transformed human BMSCs retain their stemness characteristics
To examine whether the co-overexpression of MDM2 and CDK4 drives WDLPS/DDLPS tumorigenesis, we used two types of transformed BMSCs (2H and 5H cells) containing two to five different oncogenic mutations ( Fig. 1a) [19]. These oncogenic hits include the following: (i) ectopic overexpression of human telomerase reverse transcriptase (hTERT), (ii) TP53 degradation by the expression of the E6 antigen of human papillomavirus-16 (HPV-16), (iii) RB family inactivation by the expression of the E7 antigen of HPV-16, (iv) c-MYC stabilization by the expression of the small T antigen of Simian virus 40 (SV40), and (v) activation of mitogenic signal by the expression of HRAS v12 . The E6 antigen of HPV-16 mediates TP53 degradation via the proteasomal degradation pathway, as observed in the case of MDM2. However, E6 and MDM2 are regulated through well-established distinct mechanisms [25,26]. Therefore, none of the five oncogenic aspects were directly relevant to WDLPS/DDLPS.
We examined the characteristics of 2H and 5H cells and compared them with those of non-transformed BMSCs. The 2H and 5H cells expressed cell surface markers consistent with BMSCs, such as CD105 (human mesenchymal stromal cell marker; positive) and CD34 (hematopoietic progenitor cell marker; negative) (Fig. 1b). However, the morphology of 2H and 5H cells differed from that of BMSCs, which are characterized by a spindle-shaped morphology with a large cell body and long, thin tails. Both 2H and 5H cells were shorter and thicker than BMSCs, while 5H cells were much shorter and more radial than 2H cells. To evaluate expression levels of stemness genes in 2H and 5H cells, we compared the expression in these cells with that in two representative liposarcoma cell lines, LIPO-863B (WDLPS) and LP6 (DDLPS). The 2H and 5H cells showed sustained expression of NANOG and OCT-4 mRNA in the presence of an adipogenesis-inducing medium; however, the expression of these genes in 5H cells was lower than that in 2H cells (Fig. 1c). In addition, the 2H and 5H cells showed upregulation of NANOG and OCT-4 only when cultured in DMEM (Supplementary Fig. 1). The 2H and 5H cells retained their ability to differentiate into adipocytes in response to adipogenesis medium, despite the low potency rate relative to the BMSCs (Fig. 1d). These findings suggest that the 2H and 5H cells retained the characteristics of BMSCs.
Co-overexpression of MDM2 and CDK4 synergistically drives tumorigenic phenotypes of transformed human BMSCs
To establish cells co-overexpressing MDM2 and CDK4, 2H and 5H cells were infected with either LacZ- (β-galactosidase, control) or MDM2- and/or CDK4-expressing lentiviral particles. Expression of the mRNA transcript and protein of MDM2 and/or CDK4 was confirmed by immunoblotting (Fig. 2a) and qRT-PCR (Supplementary Fig. 2). To evaluate the expression levels of MDM2 and/or CDK4 in transduced 2H and 5H cells, we compared the expression in these cells with that in LIPO-863B and LP6 cell lines. The MDM2 protein expression increased by 1.06-4.14 fold in 2H and 5H cells, respectively, and the fold increase was 2.32-3.24 for LIPO-863B cells. In addition, MDM2 mRNA expression values exhibited fold changes of 0.29 or 0.62 in 2H and 5H cells, respectively, when compared with the expression in LIPO-863B cells (Supplementary Fig. 2). Therefore, both MDM2 protein and mRNA expression values in 2H and 5H cells were observed at biological levels. CDK4 expression in the transduced 2H and 5H cells was more than twofold higher than that in both LP6 and LIPO-863B cells (Fig. 2a). Morphologically, both 2H and 5H cells co-overexpressing MDM2 and CDK4 were much longer and thinner than those solely expressing MDM2, CDK4, or LacZ (Fig. 2b; Supplementary Fig. 3). Because all these cell lines were generated from single-cell clones, we confirmed the expression of the five oncogenic hits using qRT-PCR (Supplementary Fig. 3). The hTERT and E6 expression levels were notably high in both the 2H and 5H cells; the expression of E7 and small t mRNA was notably high in the 5H cells (Supplementary Fig. 3). In addition, the expression of HRAS v12 in the 5H and 2H cells showed no significant difference (Supplementary Fig. 4).
HPV-16 E6 promotes TP53 ubiquitination and degradation [27,28]. To evaluate the TP53 expression levels in the established cells expressing MDM2 and/or CDK4 that had been transduced with E6, we compared the TP53 expression levels in these cells with those in BMSCs and in LIPO-863B and LP6 cells (Supplementary Fig. 5). Similar TP53 mRNA and TP53 protein expression levels were observed between 2H and 5H cells expressing LacZ or only CDK4 and BMSCs (Supplementary Fig. 5). It is noteworthy that both 2H and 5H cells expressing MDM2 and/or CDK4 showed decreased TP53 expression levels (Supplementary Fig. 5B). These data suggest that MDM2 overexpression can induce TP53 degradation in the presence of E6 in these cells.
Next, we examined whether the co-overexpression of MDM2 and CDK4 accelerates the tumorigenic potential in transformed cells. In both 2H and 5H cells, co-overexpression of MDM2 and CDK4 significantly increased cell proliferation (Fig. 3a, c), anchorage-independent cell growth (Fig. 3b, d), and cell migration (Fig. 3e) when compared with the sole expression of MDM2 or CDK4. Interestingly, 5H cells expressing only MDM2 showed significantly increased anchorage-independent cell growth (Fig. 3d) and activated cell mobility (Fig. 3e) but not cell proliferation (Fig. 3c), relative to the cells expressing CDK4 alone. These results indicate that the co-overexpression of MDM2 and CDK4 plays a key role in tumorigenesis in transformed BMSCs.
Co-overexpression of MDM2 and CDK4 blocks the potential of adipogenic differentiation
To examine whether overexpression of MDM2 and CDK4 alters the adipogenesis potential, we first performed Oil Red O staining after culturing the cells in the adipogenic induction medium. 2H-MDM2 and CDK4 cells displayed small amounts of lipid droplets relative to 2H-LacZ cells and were longer and thinner than the cells expressing only MDM2 or CDK4 (Fig. 4a). 5H-MDM2 and 5H-MDM2 and CDK4 cells showed a reduced positivity of Oil Red O staining and contained stellar cell bodies, relative to both 5H-LacZ and 5H-CDK4 cells (Fig. 4c). Next, we analyzed the expression levels of genes serially induced during adipogenesis by real-time PCR: C/EBPβ (in the early step), C/EBPα, and PPARγ (from the middle to late steps), C/SREBP1 (full step), and ADIPSIN and LPL (late step) [29]. As shown in Fig. 4b, cells expressing only MDM2 or co-overexpressing MDM2 and CDK4 showed relatively downregulated expression levels of these genes in comparison with those in both 2H-LacZ and 2H-CDK4 cells; the expression levels of all genes except C/EBPβ in these cells were similar to those in the LP6 cells. Compared with 5H-LacZ cells, MDM2 and/or CDK4 expression led to the decreased expression of all genes, except C/EBPβ and PPARγ. Notably, co-overexpression of MDM2 and CDK4 decreased the expression of all adipogenesis-related genes, showing levels similar to those in LP6 cells (Fig. 4d).
Collectively, these data suggest that co-expression of MDM2 and CDK4 blocks adipogenic differentiation from the early to late stages.
Co-overexpression of MDM2 and CDK4 in transformed human BMSCs results in the development of proliferative sarcoma with a dedifferentiated liposarcoma-like morphology in vivo
To verify the in vivo tumorigenic potential of the transformed BMSCs, nude mice were subcutaneously inoculated with 2H and 5H cells co-overexpressing MDM2 and CDK4. The 2H cells did not develop into tumors, regardless of the co-expression of MDM2 and CDK4 (Fig. 5a). Consistent with the results of a study by Rodriguez et al., 5H-LacZ cells formed tumors with high penetrance (4/5, Fig. 5a) [19]. However, the 5H-MDM2 and CDK4 cells developed tumors larger than those observed in the LacZ control cells (Fig. 5b, c). In addition, LP6 cells showed a much more aggressive tumor formation in vivo than the 5H-MDM2 and CDK4 cells, despite their low growth potency in vitro (Fig. 5b, c; Supplementary Fig. 6).
Next, we performed the histological analysis of the 5H cell-derived tumors, including those from the LIPO-863B and LP6 xenograft models. The 5H-MDM2 and CDK4 cell-derived tumors were immunostained and found to be strongly positive for MDM2 and CDK4, while the tumors derived from 5H-LacZ cells were not (Fig. 6a). Although relatively high background was observed, their intensity levels were similar to those of LP6 cell-derived tumors (Fig. 6a). The 5H-MDM2 and CDK4 cell-derived tumors exhibited more proliferative features with high cellularity and higher expression of Ki-67 than did the 5H-LacZ-derived tumors (Fig. 6b; Supplementary Fig. 7). Moreover, these tumors morphologically resembled LP6 cell-derived tumors, displaying large nuclear cells of variable sizes dispersed within a fibrous matrix, but not LIPO-863B cell-derived tumors, which were composed of mature adipocytic cells of diverse sizes and associated with a variable number of atypical stromal cells (Fig. 6b; Supplementary Fig. 8). In addition, tumors from 5H-MDM2 and CDK4 cells showed a small proportion of lipoblast cells, but these were not immunostained with KU80, which has been reported as a marker of human cells; therefore, these lipoblast cells may not be derived from 5H-MDM2 and CDK4 cells (Supplementary Fig. 9) [30]. These findings indicate that co-overexpression of MDM2 and CDK4 in 5H cells with five additional oncogenic mutations can result in the development of proliferative sarcoma with a DDLPS-like morphology in vivo.
Discussion
WDLPS/DDLPS is characterized by the amplification and overexpression of MDM2 and CDK4. However, several other oncogenes, such as HMGA2, c-JUN, and ZIC1, have been reported to contribute to the tumorigenesis and progression of WDLPS/DDLPS [3, 4, 24, 31-34]. Although amplification and overexpression of MDM2 and CDK4 are hallmark events of WDLPS/DDLPS, whether MDM2 and CDK4 drive WDLPS/DDLPS tumorigenesis remains unclear. Therefore, it is necessary to evaluate whether the amplification and overexpression of MDM2 and CDK4 are critical events for WDLPS/DDLPS development. We addressed this question by establishing a corresponding xenograft model. Human BMSCs have not been shown to undergo spontaneous transformation in vitro, with the exception of a few cases in which BMSC-injected patients later developed osteosarcoma [35-38]. In addition, sarcomagenesis models expressing the fusion proteins EWS-FLI1 or SYT-SSX1 in human BMSCs failed to generate tumor phenotypes [17,18]. However, the genetic introduction of tumor-suppressor genes such as TP53 and RB, and other oncogenes, such as the SV40 T antigen and HRAS, promoted BMSC transformation [39]. Rodriguez et al. first succeeded in inducing myxoid liposarcoma using BMSCs expressing the FUS-CHOP fusion protein and transformation with five oncogenic hits: TP53 deficiency, RB deficiency, hTERT overexpression, C-MYC stabilization, and HRAS v12 overexpression [19]. Thus, cooperating oncogenic hits are needed to transform BMSCs. Based on these reports, we tried to induce WDLPS/DDLPS in vivo by co-overexpressing MDM2 and CDK4 in transformed BMSCs.
Our study has several limitations, including that actual patient samples were not used to evaluate the expression levels of MDM2 and/or CDK4 in transduced 2H and 5H cells, and none of the other genes in the 12q13-15 amplicon, such as HMGA2 and CHOP, were examined. Although the histologically analyzed 5H-MDM2 and CDK4-derived tumors could not be clearly classified as DDLPS, these tumors morphologically resembled LP6 cell-derived tumors, displaying large nuclear cells of variable sizes dispersed within a fibrous matrix. Thus, our model showed that co-expression of MDM2 and CDK4 in transformed human BMSCs increases the tendency of high-grade sarcoma with a DDLPS-like morphology. This will contribute to understanding the intermediate step for DDLPS development. MDM2 and CDK4 overexpression through gene amplification is an early event in liposarcoma tumorigenesis. However, it was reported that increased MDM2 amplification was closely associated with histological grade in liposarcomas [43]. We previously reported that the high CDK4 amplification group exhibited significantly poorer prognosis relative to the low CDK4 amplification group in human WDLPS and DDLPS [22]. Moreover, DDLPS components generally showed higher expression levels of MDM2 and CDK4 than did paired WDLPS components from the same patients, although no significant correlation was revealed in the amplification status of CDK4 or MDM2 [44]. Thus, co-overexpression of MDM2 and CDK4 might play a key role in tumorigenicity during the transformation of BMSCs after cooperation with multiple genetic alterations. DDLPS had been thought to develop from WDLPS after a long duration, but this view has been reconsidered in light of reports of exclusively low-grade dedifferentiated components with a specific genomic profile relative to WDLPS and the fact that most cases of DDLPS occur de novo (90%) [45-47]. Therefore, DDLPS has now been identified in the absence of WDLPS. Moreover, our in vivo experiments showed that DDLPS tumor potency may be induced without the WDLPS component. To comprehensively understand the mechanism of WDLPS/DDLPS development, the characterization of both germline and somatic genetic alterations is needed using a massive next-generation sequencing approach in different large cohorts of WDLPS or DDLPS.
Although cell proliferation and differentiation are regarded as mutually exclusive events, cross-talk has been reported between both processes during adipogenesis [48]. Previous reports have suggested that MDM2 promotes adipocyte differentiation through CREB-dependent transactivation or CREB-regulated transcriptional coactivator-mediated activation of STAT6 in mouse embryonic fibroblasts and mouse preadipocyte cells, and that CDK4 participates in adipogenesis through PPARγ activation [49-51]. However, Peng et al. showed that WDLPS/DDLPS cell lines exhibited low or negative levels of Oil Red O positivity and PPARγ relative to pre-adipocytes and adipocytes [23]. We also found that 2H-MDM2 and CDK4 and 5H-MDM2 and CDK4 cells showed a reduced positivity of Oil Red O staining, and co-overexpression of MDM2 and CDK4 decreased the expression of all adipogenesis-related genes. Therefore, MDM2 and/or CDK4 may function as initiating oncogenes that block adipogenic differentiation during WDLPS/DDLPS development.
In summary, co-overexpression of MDM2 and CDK4 causes high-grade sarcoma with a DDLPS-like morphology in transformed human BMSCs by accelerating cell growth and migration and blocking adipogenic potential, in cooperation with multiple genetic factors.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
The Antiarrhythmic Mechanisms of Flecainide in Catecholaminergic Polymorphic Ventricular Tachycardia
Catecholaminergic polymorphic ventricular tachycardia (CPVT) is a severe yet rare inherited arrhythmia disorder. The cornerstone of CPVT medical therapy is the use of β-blockers; 30% of patients with CPVT do not respond well to optimal β-blocker treatment. Studies have shown that flecainide effectively prevents life-threatening arrhythmias in CPVT. Flecainide is a class IC antiarrhythmic drug blocking cardiac sodium channels. RyR2 inhibition is proposed as the principal mechanism of antiarrhythmic action of flecainide in CPVT, while it is highly debated. In this article, we review the current progress of this issue.
INTRODUCTION
Catecholaminergic polymorphic ventricular tachycardia (CPVT) is a rare inherited arrhythmia syndrome characterized by bidirectional or polymorphic ventricular tachycardia (VT) provoked by emotional stress and/or physical activity. Clinical phenotypes include catecholamine-associated syncope and a characteristic pattern of bidirectional VT in the absence of structural heart disease (Leenhardt et al., 1995; Liu et al., 2008; Priori et al., 2013). The primary treatment strategy for CPVT is the use of β-blockers due to the catecholamine-dependent onset of VT, but insufficient protection from cardiac events has been reported despite optimal β-blocker therapy (Padfield et al., 2016; Yang et al., 2016). Flecainide, a classic antiarrhythmic agent, has been gaining the interest of clinicians in the treatment of CPVT. Accumulating clinical evidence shows that flecainide, alone and combined with β-blocker therapy, effectively prevents VT in patients with CPVT and has been recommended in the international guidelines (Liu et al., 2012; Steinfurt et al., 2015; Baltogiannis et al., 2019). In the initial study, the antiarrhythmic mechanism of flecainide in CPVT was the suppression of abnormal calcium release from the sarcoplasmic reticulum (SR) by targeting the cardiac ryanodine receptor (RyR2; Watanabe et al., 2009; Hilliard et al., 2010; Hwang et al., 2011). However, not all studies support this hypothesis (Liu et al., 2011; Sikkel et al., 2013; Bannister et al., 2015, 2016). In the last decade, the therapeutic mechanisms of flecainide in CPVT have become a major topic of debate in this field. In this review, we summarize and discuss the current progress in this field.
ARRHYTHMOGENIC MECHANISMS OF CPVT
CPVT has been mainly related to mutations in genes encoding the cardiac ryanodine receptor (RyR2) and cardiac calsequestrin (CASQ2), which can be identified in 60-70% of CPVT patients (Wleklinski et al., 2020). RyR2 and CASQ2 are responsible for calcium homeostasis in cardiomyocytes. The delicate balance of Ca 2+ fluxes between the intracellular compartment and the extracellular space in cardiac myocytes is crucial for normal excitation-contraction (EC) coupling (Wier and Balke, 1999; Bers, 2002). During the plateau phase of the action potential, a small amount of Ca 2+ enters the cytosol of cardiac myocytes via voltage-dependent L-type Ca 2+ channels, triggering a much larger Ca 2+ release into the cytosol via the RyR2 channel, which is called Ca 2+ -induced Ca 2+ release (CICR). The cytosolic Ca 2+ concentration rises sharply from 150 nM to 1 μM and activates the contractile apparatus. The elevated cytosolic Ca 2+ then promptly returns to 150 nM during the diastolic phase to ensure regular relaxation of the myocytes. The majority of Ca 2+ is taken back up into the SR by the Ca 2+ ATPase isoform 2a (SERCA2a). The remaining Ca 2+ is extruded into the extracellular fluid via the forward-mode Na + /Ca 2+ exchanger (NCX).
The mutations of RyR2 and CASQ2 disrupt normal Ca 2+ handling in the SR, enhancing the open probability of RyR2 and leading to spontaneous Ca 2+ release events during the diastolic period. Under adrenergic stress, Ca 2+ overload in the SR can facilitate abnormal Ca 2+ leakage during relaxation. Elevated intracellular Ca 2+ levels during the diastolic period would activate the forward mode of NCX, which extrudes Ca 2+ in exchange for Na + with a stoichiometry of 1:3, generating a net inward current. The transient inward current (Iti) produces delayed afterdepolarizations (DADs) and causes triggered activity once it reaches the threshold of the Na + channel. Taken together, the molecular pathophysiology of arrhythmia occurrence in CPVT involves two critical steps: (1) spontaneous Ca 2+ release from SR during the diastolic period, which could be exaggerated by adrenergic stimulation and (2) triggered activity activated by Iti, which is induced by spontaneous Ca 2+ release.
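A brief numerical illustration of why forward-mode NCX depolarizes the cell: with a 3 Na+ : 1 Ca2+ stoichiometry, one net elementary charge enters per Ca2+ ion extruded, so the transient inward current scales directly with the diastolic Ca2+ extrusion rate. The sketch below uses an arbitrary extrusion rate purely for illustration.

```python
E_CHARGE = 1.602e-19   # coulombs per elementary charge

def ncx_inward_current_pA(ca_ions_extruded_per_s):
    """Net inward current (pA) from forward-mode NCX: 3 Na+ in per 1 Ca2+ out,
    i.e. one net positive elementary charge enters per Ca2+ ion extruded."""
    return ca_ions_extruded_per_s * 1.0 * E_CHARGE * 1e12

# Example: ~6e8 Ca2+ ions extruded per second -> roughly 0.1 nA of transient inward current
print(round(ncx_inward_current_pA(6.0e8), 1))   # ~96.1 pA
```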
CLINICAL EFFICACY AND SAFETY OF FLECAINIDE IN CPVT
The insufficient protection of β-blockers in CPVT has been reported (Padfield et al., 2016;Yang et al., 2016). Almost 30% of patients with CPVT still experience cardiac arrhythmias despite optimal β-blocker therapy. Therefore, it is important to explore alternative treatment options for CPVT. Knollman and his collaborators first reported that flecainide monotherapy or flecainide combined with β-blockers exhibited striking efficacy in preventing ventricular arrhythmias in two CPVT patients who did not respond to the combination therapy with β-blockers and verapamil (Watanabe et al., 2009). Subsequently, in a retrospective cohort study, the efficacy of flecainide was assessed in 33 patients with CPVT who were unprotected by conventional therapy (van der Werf et al., 2011). Ventricular arrhythmias were effectively controlled by flecainide in 22 patients (76%) and were completely suppressed in 14 patients (63%). In a randomized clinical trial, 14 patients with CPVT using maximally tolerated β-blockers demonstrated that ventricular arrhythmias during exercise were significantly reduced by flecainide, with complete suppression observed in 11 of 13 patients, and serious adverse events did not differ between the flecainide and placebo arms (Kannankeril et al., 2017). Table 1 lists the clinical efficacy and safety of flecainide treatment for CPVT in the literature (van der Werf et al., 2011;Khoury et al., 2013;Miyake et al., 2013;Watanabe et al., 2013;Roses-Noguer et al., 2014;Roston et al., 2015;Padfield et al., 2016;Kannankeril et al., 2017;Wangüemert Pérez et al., 2018). Overall, flecainide effectively prevented ventricular arrhythmias in patients with CPVT without apparent adverse events. Consequently, flecainide has been recommended for CPVT patients with ventricular arrhythmias who already have optimized β-blocker treatment.
FLECAINIDE PREVENTS ARRHYTHMIAS BY TARGETING RyR2
The abnormal Ca 2+ leak events from RyR2 are the essence of molecular arrhythmogenic mechanism in CPVT. Theoretically, direct RyR2 blockers are promising mechanism-based therapies for CPVT. Tetracaine, a sodium channel blocker, is a RyR2 blocker that effectively inhibits Ca 2+ leak from SR. Thus, Knollmann and his collaborators screened the RyR2 inhibiting effects of clinically available sodium channel blockers in a lipid bilayer study and found that flecainide reduced the duration of RyR2 channel openings, but not its closed channel duration. They then tested the effects of flecainide on CASQ2 knockout mice. Intraperitoneal administration of flecainide completely suppressed exercise-induced VT in vivo, and incubation with flecainide significantly ameliorated the spontaneous Ca 2+ release from SR induced by isoproterenol in isolated myocytes. In contrast, without RyR2 blocking action in the lipid bilayer, lidocaine did not show therapeutic effects in vivo and in vitro. Therefore, they propose that the underlying antiarrhythmic mechanism of flecainide in CPVT attributes to its RyR2 blockade but not its intrinsic sodium channel inhibiting action.
Subsequently, the Knollmann group performed a series of experiments to reinforce this concept. Galimberti and Knollmann (2011) reported that flecainide suppressed spontaneous Ca 2+ waves with an IC 50 of 12.8 μM in permeabilized ventricular myocytes. The blocking action of flecainide is use-dependent, suggesting that RyR2 activity determines the potency and efficacy of flecainide. Given that flecainide blocks both the sodium channel and the RyR2 channel, it is challenging to dissect the antiarrhythmic mechanisms of flecainide in CPVT. Kryshtal et al. (2021) synthesized a flecainide analogue, named N-methyl flecainide, which retains the sodium channel blocking action but lacks the RyR2 inhibiting effect. They reported that flecainide, but not N-methyl flecainide, significantly reduced arrhythmias in CPVT transgenic mice and decreased spontaneous calcium-release events in intact and membrane-permeabilized myocytes.
Therefore, they concluded that RyR2 channel inhibition, but not sodium channel blockade, is likely the principal mechanism of the antiarrhythmic action of flecainide in CPVT.
FLECAINIDE PREVENTS ARRHYTHMIAS BY TARGETING THE SODIUM CHANNEL
Flecainide is a hydrophilic sodium channel blocker with a pKa of 9.3. At pH 7.4, only 1% of flecainide is neutral and is available for diffusion across the membrane of myocytes (Liu et al., 2012). The intrinsic feature of flecainide makes it difficult to quickly achieve sufficient concentration to block RyR2, which is located in the intracellular space. Thus, the RyR2 blocking action of flecainide cannot fully explain the rapid amelioration of spontaneous Ca 2+ release after the acute administration of flecainide in isolated cardiac myocytes. Liu et al. (2011) tested the effects of flecainide in a CPVT RyR2-R4496C +/− mouse model. Flecainide significantly reduced ventricular arrhythmias induced by adrenaline and caffeine in vivo.
In isolated intact RyR2 R4496C+/− myocytes, flecainide did not affect Ca 2+ transient amplitude, decay, or SR Ca 2+ content. In permeabilized RyR2 R4496C+/− myocytes, flecainide did not alter the frequency of spontaneous Ca 2+ sparks. In contrast, when the dosage of flecainide reached 6 μM, the upstroke of action potential was blunted significantly at the pace of 5 Hz (Figure 1). Flecainide effectively prevented isoproterenol-induced triggered activity but had little effect on spontaneous Ca 2+ transients (SCaTs) elicited by isoproterenol (Figure 2). The threshold for action potential induction increased significantly after acute administration of flecainide. Based on the above data, Liu et al. suggested that the antiarrhythmic mechanism of flecainide was mediated by its Na + channel blockade. Sikkel et al. (2013) performed a study to explore the effects of flecainide on Ca 2+ handling in isolated rat ventricular myocytes. They found that sodium channel blockers (flecainide, tetrodotoxin, propafenone, and lidocaine) could reduce spontaneous Ca 2+ release events under their experimental conditions. After inactivation of the sodium channel using the voltage-clamp approach, flecainide could not reduce Ca 2+ waves. Therefore, they proposed that Na + channel blockade by flecainide could reduce Na + influx into cardiac myocytes, resulting in the enhancement of Ca 2+ efflux through NCX and decrease of Ca 2+ in the vicinity of the RyR2 channels, ultimately reducing the frequency of spontaneous Ca 2+ release events. In the HEK293 cell line expressing hRyR2, which is devoided of Na + channels in the cellular membrane, Bannister et al. reported that flecainide did not affect spontaneous Ca 2+ release events (Bannister et al., 2015(Bannister et al., , 2016. Thus, these studies suggest that flecainide's antiarrhthymic action in CPVT relies on its Na + channel blockade but not RyR2 inhibition. Neuronal sodium channels are expressed in the cellular membranes of cardiac myocytes. Radwański et al. (2015a) demonstrated that 100 nM TTX, which blocks neuronal sodium channels but not NaV1.5, significantly reduced and desynchronized spontaneous Ca 2+ release events in isolated myocytes. Next, they demonstrated that the NaV1.6 blocker riluzole ameliorated spontaneous Ca 2+ release events in vitro and reduced arrhythmias in vivo in a CPVT mouse model. Cardiac Na + and Ca 2+ cycling interplay in the nanodomains beneath the membrane. They speculated that targeting neuronal sodium channels may be a promising therapeutic strategy for Ca 2+ dysregulation-associated heart diseases such as CPVT and heart failure, which would not compromise electrical excitability, which is proarrhythmic (Radwański et al., 2015b).
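The ~1% neutral-fraction figure quoted at the start of this section follows directly from the Henderson-Hasselbalch relation for a weak base with pKa 9.3 at pH 7.4; a minimal sketch of that calculation:

```python
def neutral_fraction(pka, ph):
    """Fraction of a weak base in its neutral (unprotonated) form:
    1 / (1 + 10^(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

print(round(neutral_fraction(9.3, 7.4) * 100, 1))   # ~1.2% of flecainide is uncharged at pH 7.4
```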
DISSECTING THE ANTIARRHYTHMIC MECHANISMS OF FLECAINIDE IN CPVT
There is intense debate on the antiarrhythmic mechanisms of flecainide in CPVT. The critical issue is whether RyR2 blockade is the major mechanism responsible for the efficacy of flecainide observed in clinical practice. Flecainide has multiple targets in cardiac myocytes, including the sodium channel, potassium channel, and RyR2 channel, among others (Table 2). It is challenging to dissect the therapeutic mechanisms of flecainide in intact cardiac myocytes because it simultaneously influences Na + , Ca 2+ , and K + homeostasis at the cellular level. Single RyR2 channel experiments in artificial lipid bilayers appear to resolve this issue directly. The initial study by Knollmann et al. tested the inhibitory potency of flecainide on the current flow in the cytoplasm-to-lumen direction in sheep RyR2 channels and found that it reduced the duration of channel openings but did not affect closed channel duration (Watanabe et al., 2009). The reduction in open-channel probability was concentration-dependent, with an IC 50 of 55 ± 8.1 μM. Later, this group presented a detailed analysis of the kinetics of RyR2 inhibition by flecainide. Flecainide inhibited RyR2 by two distinct modes: a fast block consisting of brief substate and closed events with a mean duration of ∼1 ms, and a slow block consisting of closed events with a mean duration of ∼1 s (Mehra et al., 2014). These two modes are independent mechanisms for RyR2 inhibition.
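For orientation, the concentration dependence of the RyR2 open-probability block reported by Watanabe et al. (IC 50 ≈ 55 μM) can be sketched with a simple inhibitory Hill model; the Hill coefficient of 1 used below is an assumption for illustration, not a value taken from that study.

```python
def fraction_of_control_po(conc_uM, ic50_uM=55.0, hill=1.0):
    """Open probability relative to control for a simple inhibitory Hill model."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

# Illustrative concentrations spanning therapeutic to supratherapeutic levels
for c in (1.0, 12.8, 55.0, 150.0):
    print(f"{c:6.1f} uM -> Po/control = {fraction_of_control_po(c):.2f}")
```

On this assumed curve, concentrations in the clinical target range (0.5-2.4 μM) leave RyR2 open probability essentially unchanged, which is the quantitative crux of the debate discussed below.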
Under physiological conditions, the current flow through cardiac RyR2 is directed from the lumen to the cytoplasm. Bannister et al. (2015) tested the effects of flecainide on the luminal-to-cytosolic flux of cations through human RyR2 in a lipid bilayer study and reported that flecainide, even at supraphysiological concentrations, did not inhibit the open probability of RyR2. Since the ion fluxes across the SR membrane are bidirectional, the authors also explored whether flecainide modulates the cytoplasm-to-SR luminal countercurrent. They found that 50 μM flecainide had a negligible effect on the mechanisms responsible for the SR charge-compensating countercurrent (Bannister et al., 2015, 2016). More recently, a study by Salvage et al. showed that low cytoplasmic concentrations (0.5-10 μM) of flecainide activated isolated mouse RyR2 channels, whereas high cytoplasmic concentrations (50-100 μM) were inhibitory (Salvage et al., 2021).
RyR2 is a macromolecular complex, and numerous accessory proteins modulate its function, such as FKBP12, FKBP12.6, calmodulin, and S100A1 (Ritterhoff et al., 2014, 2015). Flecainide may affect RyR2 function directly or indirectly by binding to accessory proteins. The procedure used to purify RyR2 for single-channel recording may disrupt the interaction between accessory proteins and RyR2, which might explain the conflicting results from different laboratories. Permeabilized myocytes, devoid of the influence of the cellular membrane, allow direct exploration of the effects of flecainide on Ca2+ handling in the SR. Hilliard et al. (2010) demonstrated that 25 μM flecainide significantly reduced the spontaneous Ca2+ wave frequency in permeabilized ventricular myocytes. Savio-Galimberti and Knollmann (2015) showed that flecainide suppressed spontaneous Ca2+ waves with an IC50 of 15.6 ± 3.4 μM in permeabilized CASQ2−/− myocytes. On the contrary, in permeabilized RyR2 R4496C myocytes, Liu et al. (2011) reported that 6 μM flecainide did not affect the frequency of spontaneous Ca2+ sparks, and Bannister et al. (2015) failed to confirm the reduction of spontaneous Ca2+ wave frequency after administration of 25 μM flecainide in permeabilized rat cardiac myocytes. It is difficult to reconcile these conflicting results. Permeabilization inevitably disrupts the intracellular structure and causes loss of cytosolic proteins, to an extent that depends on the degree of membrane permeabilization induced by the concentration of saponin or β-escin and the duration of exposure to the agent. This may differ among laboratories, leading to controversial results (Smith and MacQuaide, 2015).
Flecainide has a narrow therapeutic window between the effective dose and the dose that produces adverse toxic effects. The target range for flecainide concentration is 0.5-2.4 μM in clinical practice (Melgari et al., 2015; Rabêlo Evangelista et al., 2021; Yang et al., 2021). Even allowing for the conflicting single-channel results, the therapeutic concentration of flecainide cannot block RyR2 effectively. Based on the IC50 of flecainide for inhibiting RyR2, a high flecainide concentration (25 μM) was used in intact or permeabilized myocytes to elucidate the antiarrhythmic mechanisms. Watanabe et al. (2009) reported a flecainide concentration in cardiac tissue of 33 ± 0.8 μM 1 h after injection in mice, suggesting that flecainide accumulates in cardiac tissue; on this basis, it may seem reasonable to use very high concentrations of flecainide in the experiments. However, in the study by Liu et al. (2011), 6 μM flecainide completely abolished the upstroke of action potentials in mouse ventricular myocytes, consistent with the adverse toxic effects of high-dose flecainide administration. In this scenario, it is unlikely that blockade of RyR2 by high concentrations of flecainide (25 μM) is the primary mechanism underlying its dramatic efficacy in CPVT (Liu et al., 2011). Moreover, despite the high concentration of flecainide in cardiac tissue, it cannot be arbitrarily inferred that flecainide in the cytoplasm of cardiac myocytes reaches a concentration sufficient to block RyR2, and it is almost impossible for high tissue concentrations of flecainide to affect RyR2 exclusively without also blocking cardiac sodium channels.
Flecainide is also a blocker of multiple potassium channels. Class III antiarrhythmic drugs are effective against reentrant arrhythmias but not triggered arrhythmias, and they are not effective in CPVT in the clinical setting. Potassium channel blockers prolong the action potential duration, leading to Ca2+ overload, and are therefore detrimental in CPVT (Němec et al., 2010). In this scenario, potassium channel blockade is unlikely to be responsible for the efficacy of flecainide in CPVT. Most studies in CPVT come from small animal models, and the mouse has a small heart and a fast heart rate; we need to be aware of these limitations when generalizing the results of small animal research to humans (Joukar, 2021).
SODIUM CHANNEL BLOCKERS TREAT CPVT PHENOCOPY
Bidirectional VT is a typical arrhythmic phenotype observed in patients with CPVT (van der Werf et al., 2012). It has been proposed that bidirectional VT in CPVT, digitalis toxicity, and Andersen-Tawil Syndrome (ATS) share a similar underlying electrophysiological mechanism, which is associated with alternating ectopic foci originating from the distal His-Purkinje system in the left and/or right ventricle, induced by Ca 2+ overload in Purkinje cells (Smith et al., 2006;Tristani-Firouzi and Etheridge, 2010).
Digitalis intoxication causes Na+-K+ pump inhibition, resulting in intracellular Ca2+ overload, which causes triggered arrhythmias such as bidirectional VT. The sodium channel blockers lidocaine and phenytoin have been recommended as effective treatment for dysrhythmias associated with digitalis intoxication (French et al., 1984; Antman and Smith, 1985). In isolated myocytes, sodium channel blockers can ameliorate intracellular Ca2+ overload induced by digitalis and reduce spontaneous Ca2+ release events. Since lidocaine and phenytoin have no RyR2-blocking action, sodium channel blockade alone accounts for their antiarrhythmic effects in digitalis intoxication.
ATS, which is mainly caused by KCNJ2 mutations, phenocopies CPVT and may manifest the typical adrenergically mediated bidirectional VT (Zhang et al., 2005). The underlying arrhythmogenic mechanism is triggered arrhythmia induced by Ca2+ dysregulation in cardiac myocytes. A series of cases has shown that flecainide effectively prevents arrhythmias in patients with ATS (Pellizzón et al., 2008; Kuroda et al., 2017). Given that ATS presents with bidirectional VT, which is usually observed in digitalis toxicity, and that phenytoin is used to treat arrhythmia in digitalis toxicity, Rai et al. tested phenytoin in three ATS patients who did not respond to conventional therapy (β-blockers, flecainide, and verapamil). They reported that phenytoin completely suppressed ventricular arrhythmias in two patients and significantly reduced the ventricular arrhythmia burden in the third (Rai et al., 2019). Thus, sodium channel inhibition is likely the principal mechanism of flecainide action in ATS.
Another interesting issue is the diversity of responses to sodium channel blockers in CPVT. Use and frequency dependence are common properties of class I antiarrhythmic agents. Flecainide, but not lidocaine, preferentially blocks the sodium channel in the open state (Kojima et al., 1989; Ramos and O'Leary, 2004). The electrophysiological mechanism of CPVT is that a DAD reaches the threshold of the Na+ channel and results in triggered activity, which is also frequency-dependent. Therefore, flecainide is more effective than other sodium channel blockers during fast heart rates, such as the conditions under which patients with CPVT develop cardiac arrhythmias. In CPVT, the efficacy of other sodium channel blockers with strong use-dependent block, such as propafenone and pilsicainide, needs further investigation.
FLECAINIDE TREATMENT IN CALCIUM-RELEASE DEFICIENCY SYNDROME
Recently, loss-of-function (LOF) RyR2 mutations have been identified as a new clinical entity, termed calcium-release deficiency syndrome (CRDS), which is characterized by ventricular fibrillation and sudden death but does not manifest ventricular tachyarrhythmias during stress testing (Roston et al., 2021). CRDS is a mirror image of CPVT because the RyR2 defect is the opposite (loss rather than gain of function). It is logical to infer that flecainide might exacerbate the CRDS phenotype if it inhibited RyR2. To date, however, flecainide has proven to be a promising therapeutic agent for CRDS (Tester et al., 2020; Ormerod et al., 2021). A programmed electrical stimulation protocol with a pattern of long-burst, long-pause, and short-coupled (LBLPS) stimuli can induce ventricular arrhythmias in transgenic mice with RyR2 LOF mutations. Sun et al. (2021) demonstrated that treatment with flecainide abolished LBLPS-induced ventricular arrhythmias in these mice. In induced pluripotent stem cell cardiomyocytes carrying a homozygous RYR2 duplication, which presented LOF, Tester et al. (2020) reported that flecainide significantly reduced arrhythmic activity caused by isoproterenol. Ormerod et al. (2021) tested flecainide in nine CRDS patients and found that its administration substantially reduced arrhythmia inducibility in one subject and abolished arrhythmia in all others. Sun et al. (2021) proposed that the therapeutic mechanisms of flecainide in CRDS are attributable to its blockade of multiple membrane channels.
SUMMARY
Flecainide has had a significant impact on the clinical management of patients with CPVT. Efforts have been made to explore the mechanisms underlying flecainide therapy for CPVT, and there is ongoing debate regarding the effects of flecainide on RyR2. Understanding the mechanisms of flecainide in CPVT will improve our knowledge of Ca2+ dysregulation in cardiac myocytes and help develop more specific therapeutic strategies for CPVT.
AUTHOR CONTRIBUTIONS
YR and NL defined the theme of the review. YL wrote the manuscript. XP, RL, XW, and XL took part in preparing the manuscript. YR, NL, RT, CM, and RB prepared and reviewed the manuscript before publication. All authors confirm that they have read and approved the manuscript and that they meet the criteria for authorship.
FUNDING
This work was supported by the National Science Foundation of China (grant nos. 81770318, 82170318, and 81870244) and Beijing Municipal Natural Science Foundation (grant no. 7192051).
Comparison of Classification Algorithms Towards Subject-Specific and Subject-Independent BCI
Motor imagery (MI) brain-computer interface (BCI) designs are considered difficult due to limitations in subject-specific (SS) data collection and calibration, as well as demanding system adaptation requirements. Recently, subject-independent (SI) designs have received attention because of their possible applicability to multiple users without prior calibration and rigorous system adaptation. SI designs are challenging and have shown low accuracy in the literature. Two major factors in system performance are the classification algorithm and the quality of the available data. This paper presents a comparative study of classification performance for both SS and SI paradigms. Our results show that classification algorithms for SS models display large variance in performance; therefore, distinct classification algorithms per subject may be required. SI models display lower variance in performance but should only be used if a relatively large sample size is available. For SI models, LDA and CART had the highest accuracy for small and moderate sample sizes, respectively, whereas we hypothesize that SVM would be superior to the other classifiers if a large training sample size were available. Additionally, the design approach should be chosen with the users in mind: while the SS design seems more promising for a specific subject, an SI approach can be more convenient for mentally or physically challenged users.
I. INTRODUCTION
Electroencephalography (EEG) pattern recognition is an essential component of Brain Computer Interface (BCI) systems. In BCI systems, the patterns in recorded EEG signals are identified, classified, and finally translated into a control signal to be sent to an external device. In particular, motor imagery (MI) EEG-based BCI systems are believed to reflect the brain activity in regions that are involved in movement imagination, without real limb execution [1]-[3]. This application is beneficial to the neuro-rehabilitation of a large community of paralyzed patients whose corticospinal tract is blocked.
The EEG temporal and spatial characteristics are correlated with an individual's physical and mental state and may vary between subjects. Hence, subject-specific (SS) BCI models are designed and trained per subject, which makes SS BCI designs time consuming and inconvenient. To overcome these limitations, subject-independent (SI) BCI designs were introduced and have received attention recently. The purpose of SI models is to train a general model that can be used by new subjects with little or no experimental calibration and model parameter adaptation.
Although MI EEG-based BCI is a major research area, it is not yet fully understood, possibly due to the lack of reliable data as well as complicated data collection and model training. Studies on SS BCI designs show that EEG classification on MI data has considerably lower accuracy than some other BCI tasks, such as P300, ERP, or VEP [4], [5]. For example, a recent study [6] reported a mean classification accuracy of 71.1% for MI data, but 96.7% and 95.1% for the ERP and SSVEP paradigms. Mean accuracies of only 60.4%, 70.0%, and 67.4% for MI EEG classification tasks were reported in [7]-[9], respectively. This can be partially attributed to the inability of target users to operate the MI-related BCI system, also known as "BCI illiteracy" [7], [8], [10], [11]. The recent study in [6] reported an average BCI-illiteracy rate of 53.7% for the MI BCI paradigm, but 11.1% and 10.2% for ERP and SSVEP, respectively. BCI illiteracy can increase the number of outliers considerably, resulting in unreliable data.
On the other hand, SI BCI studies have reported variable classification accuracies. In [12], [13], maximum mean accuracies of 67% and 72% were reported, while comparisons of the SS and SI BCI approaches in [14], [15] reported accuracies of 73% and 64% for the former, whereas the latter achieved 77% and 68% accuracy, respectively. The SI accuracy in [15] was increased to 84% by considering various modifications, such as trial removal and partial inclusion of new user data in the training procedure.
Low classification accuracy can result in an inappropriate control signal and consequently make the MI EEG-based BCI system fail in either the SS or SI paradigm. BCI illiteracy, lack of required experimentation, high between-subject variability, and small sample sizes can significantly contribute to low classification accuracy. Small sample sizes require the use of simple/linear statistical machine learning (ML) methods with few degrees of freedom [16]. However, even with simple ML methods, small sample sizes can pose issues due to the curse-of-dimensionality phenomenon [17], [18] and limitations in the train-test split of the data, among many other issues [19], [20]. In addition, well-known methods like cross-validation may become inadmissible due to their high variance in MI EEG data-poor environments [21]-[23].
In this paper, we performed an empirical study of the classification accuracy of MI EEG-based SS and SI BCI designs, using Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), Classification and regression tree (CART), and k-Nearest Neighbors (KNN), while investigating the effect of varying training sample size on classification accuracy. The rest of this paper is organized as follows. The basic classification methods and performance evaluation are described in Section II. Section III contains the experimental setups and results, which are discussed in Section IV. Finally, Section V contains conclusions.
A. Sample Data
We used the publicly available MI EEG data provided by the GigaScience repository [6]. Twenty subjects (ages: 24-32 years) were seated in front of an LCD display, in a comfortable position, wearing a 62-channel EEG headset. Brain activity was recorded at a sampling rate of 1,000 Hz. The individuals were instructed to complete, in a single session, a total of 40 trials: 20 left-hand and 20 right-hand imagery tasks.
Each trial started by displaying a "+" symbol in the center of the screen which signaled the subjects to relax their muscles and prepare for the MI task. After 3 seconds, a right or left arrow was displayed and subjects performed the imagery task of grasping with the appropriate hand for 4 seconds. Subsequently, the screen turned black and subjects rested for 6 seconds. Trials were coded as a "left" or "right" class according to the displayed arrow. Figure 2 displays the study protocol for one trial.
B. Pre-Processing
We retained the EEG signals from only the 20 channels that were placed on or close to the motor cortex [6]. Subsequently, we extracted the MI-specific segment of the signals, excluding the transition time between tasks. A Butterworth band-pass filter was then applied to the signals to remove high-frequency noise and low-frequency artifacts and retain the brainwaves of interest [24], which include theta (deep relaxation and meditation state), alpha (relaxed, calm, no-thinking state), and beta (awake and normal alert consciousness state) bands.
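As a concrete illustration, the following Python sketch shows how such a pre-processing step could be implemented with SciPy. The cut-off frequencies (3-35 Hz, stated in Section III), the 1,000 Hz sampling rate, and the 20-channel selection are taken from the text; the array names and the filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0            # sampling rate (Hz), as stated in the data description
LOW, HIGH = 3.0, 35.0  # band-pass cut-offs (Hz), as stated in Section III

def bandpass(eeg, order=4):
    """Apply a zero-phase Butterworth band-pass filter.

    eeg : array of shape (n_channels, n_samples), e.g. the 20 motor-cortex channels.
    """
    b, a = butter(order, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

# Illustrative usage on random data standing in for one MI segment.
trial = np.random.randn(20, int(2.5 * FS))  # 20 channels, 2.5 s segment
filtered = bandpass(trial)
```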
C. Feature Extraction and Selection
Studies have demonstrated that features based on the power spectral density (PSD) of MI EEG signals lead to consistent and robust pattern identification [25]-[27]. Let x(t) be the EEG signal of a recording channel. The PSD of x(t) is defined in terms of the autocorrelation function R_XX(τ) through the Wiener-Khinchin relation S_XX(f) = ∫ R_XX(τ) e^{−j2πfτ} dτ. The PSD is used to compute the signal power over given frequency ranges.
Since the autocorrelation function R_XX(τ) of the signal x(t) is not known, we estimate the PSD per recording channel using the periodogram P(f) = (∆t/N_s) |Σ_{n=0}^{N_s−1} x[n] e^{−j2πfn∆t}|², where x[n] = x(n∆t) is the discrete-time version of x(t) using the sampling interval ∆t and N_s is the number of samples.
To reduce the dimensionality, we selected the maximum of the periodogram over small intervals of 10 samples. Next, we concatenated the resulting values across channels to create a feature vector. Finally, we performed a t-test to select the most discriminating features.
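A minimal Python sketch of this feature pipeline is given below. It assumes the filtered trials are stored in an array `X_trials` of shape (n_trials, n_channels, n_samples) with labels `y`; the periodogram is computed with SciPy, the maxima over blocks of 10 periodogram samples are concatenated across channels, and features are ranked by a two-sample t-test. Variable names and the use of `scipy.stats.ttest_ind` are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import periodogram
from scipy.stats import ttest_ind

def extract_features(X_trials, fs=1000.0, block=10):
    """Max-of-periodogram features, concatenated over channels."""
    feats = []
    for trial in X_trials:                       # trial: (n_channels, n_samples)
        per_channel = []
        for ch in trial:
            _, pxx = periodogram(ch, fs=fs)
            n_blocks = len(pxx) // block
            pxx = pxx[: n_blocks * block].reshape(n_blocks, block)
            per_channel.append(pxx.max(axis=1))  # max over each 10-sample block
        feats.append(np.concatenate(per_channel))
    return np.asarray(feats)

def select_features(F, y, p_threshold=0.05):
    """Keep features whose two-sample t-test p-value falls below the threshold."""
    _, pvals = ttest_ind(F[y == 0], F[y == 1], axis=0)
    return np.where(pvals < p_threshold)[0]
```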
D. Classification
Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) have been among the most popular parametric classification algorithms for EEG-based BCI system design [6], [28]- [30]. In particular, [31] highlighted that SVM has often outperformed other classifiers in the previous literature. In this section, we introduce some notation and briefly describe the LDA and SVM classification algorithms, as well as nonparametric classification algorithms, CART and k-NN.
A classifier assigns a label y = ψ(x) to each feature vector x ∈ R^d, where y = 0 and y = 1 indicate the "right" and "left" hand imagery tasks, respectively. The classifier is designed using training data S_n = {(X_1, Y_1), ..., (X_n, Y_n)}, where n is the sample size. Furthermore, n_0 and n_1 denote the sample sizes for class y = 0 and y = 1, respectively, with n_0 + n_1 = n.

1) Linear Discriminant Analysis (LDA): Assume that p(x | y = 0) = N(μ_0, Σ_0) and p(x | y = 1) = N(μ_1, Σ_1) are the class-conditional densities for class 0 and 1, respectively. The unknown class means and covariance matrices are usually approximated by their maximum-likelihood estimators μ̂_i = (1/n_i) Σ_{j: Y_j = i} X_j and Σ̂_i = (1/n_i) Σ_{j: Y_j = i} (X_j − μ̂_i)(X_j − μ̂_i)^T, for i = 0, 1. LDA makes the additional assumption that Σ_0 = Σ_1 = Σ, where the common covariance matrix Σ can be estimated by the pooled estimator Σ̂ = (n_0 Σ̂_0 + n_1 Σ̂_1)/n. The LDA classifier corresponds to plugging the aforementioned parameter estimators into the expression for the optimal classifier under the Gaussian distributional assumption, which yields ψ(x) = 1 if (μ̂_1 − μ̂_0)^T Σ̂^{-1} (x − (μ̂_0 + μ̂_1)/2) + ln(n_1/n_0) > 0, and ψ(x) = 0 otherwise.

2) Support Vector Machine (SVM): The linear SVM is a linear discriminant with maximum margin, i.e., maximum separation between the decision boundary and the training data. It can be shown that the margin is given by 1/||w||, which must be maximized. The margin constraints can be written, without loss of generality, as w^T X_j + b ≥ 1 or w^T X_j + b ≤ −1, depending on whether Y_j = 1 or Y_j = −1, respectively, for j = 1, ..., n (class "0" here is coded by −1). Hence, one obtains the convex optimization problem min_{w,b} (1/2)||w||² subject to Y_j (w^T X_j + b) ≥ 1, j = 1, ..., n, the solution of which is given by w* = Σ_{j ∈ S} λ*_j Y_j X_j, where λ*_j are the optimal values of the Lagrange multipliers for the preceding optimization problem, for j = 1, ..., n, and S is the set of indices of the support vectors, i.e., training points (X_j, Y_j) for which λ*_j > 0. The LSVM classifier is then given by ψ(x) = sign(w*^T x + b*), with the output mapped back to the {0, 1} labels. In practice, slack variables are introduced into the optimization problem to allow points to violate the margin, which permits the solution of nonlinearly separable problems and avoids overfitting.
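To make the LDA plug-in rule described above concrete, here is a small NumPy sketch of the pooled-covariance discriminant. It is a didactic illustration (no regularization, assumes both classes are present and the pooled covariance estimate is invertible), not the implementation used in the paper.

```python
import numpy as np

class PluginLDA:
    """Pooled-covariance LDA via maximum-likelihood plug-in estimates."""

    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        self.mu0, self.mu1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled covariance estimate, weighted by class sample sizes.
        S0 = np.cov(X0, rowvar=False, bias=True)
        S1 = np.cov(X1, rowvar=False, bias=True)
        n0, n1 = len(X0), len(X1)
        self.Sigma_inv = np.linalg.inv((n0 * S0 + n1 * S1) / (n0 + n1))
        self.log_prior = np.log(n1 / n0)
        return self

    def predict(self, X):
        w = self.Sigma_inv @ (self.mu1 - self.mu0)
        scores = (X - (self.mu0 + self.mu1) / 2) @ w + self.log_prior
        return (scores > 0).astype(int)   # 1 = "left", 0 = "right"
```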
A non-linear SVM replaces all dot-products x^T x' in the linear SVM formulation by a kernel function k(x, x') = Φ(x)^T Φ(x'). This corresponds to transforming the data to a higher-dimensional space using Φ and applying a linear SVM to the transformed data. Here we use the radial basis function (RBF) kernel k(x, x') = exp(−||x − x'||² / (2σ²)), where σ is the kernel bandwidth parameter and is estimated from the observed data.
3) Classification and regression tree (CART): CART is a non-parametric decision tree algorithm. At each node of the tree, CART sets a threshold on a selected feature to split the data into two groups. The threshold and feature are selected at each node such that the impurity of the children nodes is minimized. At a terminal node, or leaf node, a label is assigned according to majority vote among the training points in the leaf node. To avoid overfitting, node splitting is usually terminated early.
4) K nearest neighbors (kNN): kNN is another nonparametric classification algorithm. At any point in the feature space, kNN assigns the majority label among the k nearest training points. Smoothness of the kNN classifier decision boundary increases as k increases. Too small k (say k = 1) introduces overfitting, but too large k leads to underfitting. To avoid ties, odd k is recommended in binary classification problems.
III. EXPERIMENTAL RESULTS
We report in this section the results of a comprehensive empirical comparison of the previous classification algorithms on the MI EEG data described earlier.
Similarly to [6], to account for the transition time, we removed the first 1 s and last 0.5 s of the MI segments of the EEG signals and performed the rest of the analysis on the middle 2.5 seconds. The signals were filtered using a Butterworth band-pass filter with lower and upper cut-off frequencies of 3 and 35 Hz, respectively. The p-value threshold for the t-test was set to 0.05 for the SS models, which resulted in varying numbers of selected features per subject. The threshold was set to 0.005 for the SI experiments, which resulted in a total of 29 features.
We performed several classification experiments using the LDA, RBF-SVM, CART, and kNN classification algorithms. We set the minimum leaf size to 3 for CART and k = 3 for kNN. Classifier generalization ability was evaluated by randomly splitting the available data into 50% training data S_N and 50% testing data S_M, and using a test-set error estimator, i.e., the error rate of the trained classifier on the testing set. In our SI design, the total sample size was 800, so the testing sample size was 400, which is enough to guarantee excellent test-set error estimation accuracy [32].
For the training step, n ≤ 400 sample points were randomly drawn from S_N. For each value of n, we trained the classifiers on the selected data, calculated the test-set error on S_M, and finally computed the percent prediction accuracy as 100(1 − test-set error)%. Classification performance was examined with n ∈ {10, 15, 20} for the SS models and n ∈ {100, 150, 200, 250, 300, 350, 400} for the SI models.
Randomly drawing training samples introduces a randomness factor into the classifiers' training and prediction performance. To take this into account, we repeated each experimental case 100 times. The SS models' performance is presented in Table I and Figures 3 and 5. Table II shows the mean (std) percent accuracy of the SI models, whereas Figure 6 compares the mean accuracy of the four classifiers for varying training sample sizes, ranging from n = 100 to n = 400. Figure 7 displays box plots of classifier accuracy for the SI and SS approaches and compares the mean accuracy and standard deviation for varying training sample sizes.
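The evaluation protocol described in the last two paragraphs could be sketched as follows with scikit-learn. The classifier settings (RBF-SVM, CART with minimum leaf size 3, kNN with k = 3), the 50/50 split, the SI grid of training sizes, and the 100 repetitions are taken from the text; everything else (function and variable names, the way subsampling is done) is an illustrative assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

CLASSIFIERS = {
    "LDA": lambda: LinearDiscriminantAnalysis(),
    "RBF-SVM": lambda: SVC(kernel="rbf", gamma="scale"),
    "CART": lambda: DecisionTreeClassifier(min_samples_leaf=3),
    "kNN": lambda: KNeighborsClassifier(n_neighbors=3),
}

def evaluate(F, y, train_sizes=(100, 150, 200, 250, 300, 350, 400), reps=100, seed=0):
    """Mean test-set accuracy (%) per classifier and training sample size."""
    rng = np.random.default_rng(seed)
    results = {name: {n: [] for n in train_sizes} for name in CLASSIFIERS}
    for r in range(reps):
        # 50/50 split into a training pool S_N and a test set S_M.
        F_tr, F_te, y_tr, y_te = train_test_split(F, y, test_size=0.5, random_state=r)
        for n in train_sizes:
            idx = rng.choice(len(F_tr), size=n, replace=False)
            for name, make in CLASSIFIERS.items():
                clf = make().fit(F_tr[idx], y_tr[idx])
                acc = 100.0 * np.mean(clf.predict(F_te) == y_te)
                results[name][n].append(acc)
    return {name: {n: float(np.mean(a)) for n, a in d.items()}
            for name, d in results.items()}
```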
IV. DISCUSSION
Our experiments lead to several interesting conclusions. For the SI design, as displayed in Figure 6, in the small sample-size region (100 ≤ n ≤ 200), LDA was the best-performing classifier. CART beat all other classifiers when 200 ≤ n ≤ 400 samples were provided. SVM was the most robust (lowest std) in both the SS and SI paradigms. The steep improvement in SVM accuracy leads to the hypothesis that it may achieve the highest accuracy if the training sample size is large enough.
For the SS design, as illustrated by Table I, increasing the sample size improves the classification performance in terms of mean prediction accuracy, although it may be impracticable to collect many EEG recordings per subject. While SVM was the best-performing classifier for only a few subjects (S3, S5, and S11 when n = 20), the other three classifiers compete for the highest SS accuracy. This behaviour suggests that a hybrid model may be beneficial for SS BCI systems, in which a specific classification algorithm is selected according to the maximum test-set accuracy for that individual.
The side-by-side box plots in Figure 7 show that, for all classification algorithms, in contrast to the SI models, the variance of the trained SS classifiers is not reduced by an increase in training sample size; indeed, the variance increases noticeably for CART and SVM. This variability is expected, at least due to the high between-subject differences in EEG signals, and suggests that a trained SS BCI may not be appropriate for a totally new user. It is worth noting that the study in [15] also demonstrated improved accuracy by including a few samples from new users in training an SI BCI.
Overall, the results indicate that SS designs, and possibly a hybrid SS model, may be more appropriate for personalized MI BCI if collecting enough samples from a specific subject is feasible. If a recording session is inconvenient for a subject, particularly for mentally or physically challenged people, an SI BCI may be used. Such an SI BCI should be trained on a sufficiently large training data set collected from a large group of subjects covering between-subject variability. Comparing the box plots, the mean classification accuracy of the SI BCI, even with a relatively large sample size (n = 400), is close to that of the SS BCI with n = 20. However, the low variance of SI BCI accuracy may make it more applicable for a multi-user system.
V. CONCLUSIONS
This paper presents our evaluation of the subject-specific (SS) and subject-independent (SI) paradigms using two popular classification algorithms in MI BCI design, LDA and SVM, in addition to the non-parametric algorithms kNN and CART. Our results show that for SI BCI, LDA beats the other classification algorithms for small sample sizes, CART outperforms the rest for relatively large sample sizes, and we hypothesize that SVM would achieve the highest accuracy with a sufficiently large sample size. It is important to stress that generalization performance should be evaluated on a relatively large and independent test set, as was done here.
For SS models, although LDA and CART performed well for the majority of subjects, the best classifier depends on the subject, which suggests the applicability of a hybrid model. As was demonstrated in [33] for gender-specific BCIs, considering the subject's gender may improve the performance of SI designs; we leave the investigation of gender-specific SI BCI performance to future work. Another interesting research problem is to identify the EEG recording channels with the least discriminating information, for example by using a sequential backward search. In addition, deep learning methods, particularly convolutional neural networks (CNNs), are able to process brain maps as input images, but may need a larger sample size for training a network with good generalization ability. A future experiment may study the performance of CNNs for SI BCI considering varying sample sizes.
Channel Estimation and Power Scaling Law of Large Reflecting Surface with Non-Ideal Hardware
Large reflecting surface (LRS) has emerged as a new solution to improve the energy and spectrum efficiency of wireless communication system. Most existing studies were conducted with an assumption of ideal hardware, and the impact of hardware impairments receives little attention. However, the non-negligible hardware impairments should be taken into consideration when we evaluate the system performance. In this paper, we consider an LRS assisted communication system with hardware impairments, and focus on the channel estimation study and the power scaling law analysis. First, with linear minimum mean square error estimation, we theoretically characterize the relationship between channel estimation performance and impairment level, number of reflecting elements, and pilot power. After that, we analyze the power scaling law and reveal that if the base station (BS) has perfect channel state information, the transmit power of user can be made inversely proportional to the number of BS antennas and the square of the number of reflecting elements with no reduction in performance; If the BS has imperfectly estimated channel state information, to achieve the same performance, the transmit power of user can be made inversely proportional to the square-root of the number of BS antennas and the square of the number of reflecting elements.
I. INTRODUCTION
Due to the explosive growth of mobile data traffic in recent years, we need to enhance the performance of future wireless communication systems. Many related works have shown that multiple-input multiple-output (MIMO) technology can offer improved energy and spectrum efficiency, owing to both array gains and diversity effects, e.g., Ngo, Larsson and Marzetta prove that the power transmitted by the user can be cut inversely proportional to the square-root of the number of base station (BS) antennas with no reduction in performance [1]. However, the requirements of high hardware cost and high complexity are still the main hindrances to its implementation. Recently, large reflecting surface (LRS), a.k.a., intelligent reflecting surface (IRS) has emerged as a new solution to improve the energy and spectrum efficiency of wireless communication system, and can be used as a low-cost alternative to massive MIMO system [2]- [6]. Prior works demonstrate that the LRS can effectively control the wavefront, e.g., the phase, amplitude, frequency, and even polarization, of the impinging signals without the need of complex decoding, encoding, and radio frequency processing operations. Basar, et al., elaborate on the fundamental differences of this stateof-the-art solution with other technologies, and explain why the use of LRS necessitates to rethink the communicationtheoretic models currently employed in wireless networks [2]. Ozdogan, Björnson and Larsson demonstrate that the LRS can act as diffuse scatterers to jointly beamform the signal in a desired direction in [3]. They also compare the LRS with the decode-and-forward (DF) relay, and show that the LRS can achieve higher energy efficiency by using many reflecting elements [5]. Wu and Zhang analytically show that the LRS with discrete phase shifts achieve the same power gain with that of the LRS with continuous phase shifts [4]. They also verify that the LRS is able to drastically enhance the link quality and/or coverage over the conventional setup without the LRS in [6].
It is noted that all the mentioned works study the LRS systems with an assumption of perfect or ideal hardware operations without any impairments. However, both physical transceiver and LRS suffer from hardware impairments which are non-negligible in practice. Björnson, et al., prove that hardware impairments greatly limit the performance of channel estimation and bound the channel capacity of massive MIMO system [7], [8]. To reveal the impact of hardware impairments on the LRS system, in this paper, we focus on the study of channel estimation and the power scaling law analysis by taking the hardware impairment into account.
With the use of the linear minimum mean square error (LMMSE) estimator, our analysis shows that the estimation error decreases with the power of the pilot signal, but increases with the number of reflecting elements and the level of hardware impairments. Although the hardware impairments of the LRS have no effect on the estimation accuracy statistically, the hardware impairments of the transceiver limit the estimation performance when the signal-to-noise ratio (SNR) goes to infinity. In addition, the estimation error of the LRS channel is larger than that of the direct channel. These results imply that more accurate estimation methods and more efficient communication protocols are needed in future work. After that, we analyze the power scaling law of the user in the cases of perfect and imperfect channel state information. Our results show that if the BS has perfect channel state information, the transmit power of the user can be made inversely proportional to the number of BS antennas and the square of the number of reflecting elements with no reduction in performance, and if the BS has imperfect channel state information from channel estimation, the transmit power of the user can be made inversely proportional to the square root of the number of BS antennas and the square of the number of reflecting elements to achieve the same performance. This is encouraging because we can use more low-cost reflecting elements instead of expensive antennas to achieve higher power scaling.
II. COMMUNICATION SYSTEM MODEL
We consider an LRS-assisted wireless communication system, as illustrated in Fig. 1. The system consists of an M-antenna BS, an LRS comprising N reflecting elements, and a single-antenna user. In this section, we give the communication system model based on the physically correct system models in prior works [3]-[6]. The operation of the LRS is represented by the diagonal matrix Φ = diag(e^{jθ_1}, ..., e^{jθ_N}), where θ_i ∈ [0, 2π] is the phase shift of the i-th reflecting element. The channel realizations are generated randomly and are independent between blocks, which basically covers all physical channel distributions. Denote the channels of the BS-user link, the BS-LRS link, and the LRS-user link as h_d ∈ C^{M×1}, G ∈ C^{M×N}, and h_r ∈ C^{N×1}, respectively. They are modeled as ergodic processes with fixed independent realizations, h_d ∼ CN(0, C_d), where CN(·, ·) denotes a circularly symmetric complex Gaussian distribution and C_d, C_LRS are positive semi-definite covariance matrices. The communication protocol we adopt for the LRS-assisted system is based on the protocol proposed in [9], as illustrated in Fig. 2. The channel coherence period τ is divided into three phases: an uplink training phase of duration τ_pilot, an uplink data transmission phase of duration τ_up_data, and a downlink transmission phase of duration τ_down_data. During the uplink training phase, the deterministic pilot signal x is transmitted by the user to estimate the channels, where the average power of x is E{|x|²} = p_UE. Since the LRS has no radio resources to transmit pilot signals, the BS has to estimate the cascaded channel of G and h_r, which is defined as H_LRS = G diag(h_r) = [h_1, ..., h_N]. Each column vector h_i ∼ CN(0, C_i) of H_LRS represents the channel between the BS and the user through the LRS when only the i-th reflecting element is ON. The uplink training phase is divided into (N + 1) subphases. During the 1st subphase, all reflecting elements are OFF and the BS estimates the direct channel h_d; during the (i + 1)th subphase, only the i-th reflecting element is ON and the BS estimates the channel h_i. By exploiting channel reciprocity, the BS then transmits data to the user during the downlink transmission phase. The aggregate hardware impairments of the transceiver can be modeled as independent additive distortion noises [10], [11]. The distortion noise at the user, η_UE ∈ C, follows CN(0, v_UE), and the distortion noise at the BS, η_BS ∈ C^{M×1}, follows CN(0, Υ_BS), where v_UE and Υ_BS are the variance and covariance matrix of the distortion noises. The distortion noise at an antenna is proportional to the signal power at this antenna [10], [11]; thus we have:
• During the 1st subphase of the uplink training phase, the distortion noise covariance matrix Υ_BS is proportional to the per-antenna power received through the direct channel, i.e., Υ_BS = κ_BS p_UE (1 + κ_UE) diag(|h_{d,1}|², ..., |h_{d,M}|²), where κ_UE and κ_BS are the proportionality coefficients that characterize the levels of hardware impairments at the user and the BS, respectively, and are related to the error vector magnitude (EVM). The EVM is a common measure of hardware quality for transceivers; e.g., when the BS transmits the signal x, the EVM at the BS is defined as EVM_BS = sqrt(E{||η_BS||²} / E{||x||²}).
• During the (i + 1)th subphase of the uplink training phase, Υ_BS is proportional to the per-antenna power received through the superposition of the direct channel and the channel through the i-th reflecting element.
• During the uplink data transmission phase, Υ_BS is proportional to the per-antenna power received through the superposition of the direct channel and the full LRS channel.
During the uplink training phase as well as the uplink data transmission phase, the distortion noise variance at the user is modeled as v_UE = κ_UE p_UE.
The hardware impairments of the LRS can be modeled as phase noise, since the LRS is a passive device and high-precision configuration of the reflection phases is infeasible. The phase noise of the i-th element of the LRS is denoted as ∆θ_i, which is randomly distributed on [−π, π) according to a certain circular distribution. Following the reasonable assumption in [12], the distribution of the phase noise ∆θ_i has mean direction zero, i.e., arg E{e^{j∆θ_i}} = 0, and its probability density function is symmetric around zero. The actual LRS matrix with phase noise is Φ̃ = diag(e^{j(θ_1+∆θ_1)}, e^{j(θ_2+∆θ_2)}, ..., e^{j(θ_N+∆θ_N)}).
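For intuition, a small NumPy sketch of this model is shown below: it draws one i.i.d. Rayleigh realization of the three links (an illustrative special case of the general covariance model), builds the ideal and phase-noisy LRS matrices, and forms the cascaded and end-to-end channels. The uniform phase-noise range and the values of M and N are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 100                                   # BS antennas, reflecting elements

# i.i.d. Rayleigh realizations of the three links (an illustrative special case).
h_d = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
G   = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

theta  = rng.uniform(0, 2 * np.pi, N)            # configured phase shifts
dtheta = rng.uniform(-np.pi / 12, np.pi / 12, N) # LRS phase noise (assumed uniform here)
Phi_ideal  = np.diag(np.exp(1j * theta))
Phi_actual = np.diag(np.exp(1j * (theta + dtheta)))

H_LRS = G @ np.diag(h_r)                         # cascaded channel, one column per element
effective = h_d + G @ Phi_actual @ h_r           # end-to-end channel seen by the BS
```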
Based on the communication system model given above, the received pilot signals y_d, y_1, ..., y_N ∈ C^{M×1} at the BS in the different subphases of the uplink training phase are y_d = h_d (x + η_UE) + η_BS + n and y_i = (h_d + e^{j(θ_i+∆θ_i)} h_i)(x + η_UE) + η_BS + n, for i = 1, ..., N, where x ∈ C is the deterministic pilot signal and n ∈ C^{M×1} is additive white Gaussian noise with elements independently drawn from CN(0, σ²_BS). The received signal y ∈ C^{M×1} at the BS during the uplink data transmission phase is y = (h_d + G Φ̃ h_r)(x + η_UE) + η_BS + n, where x ∈ C is the transmitted data signal and the transmit power p_UE = E{|x|²} is the same as the pilot power.
III. CHANNEL ESTIMATION PERFORMANCE
In this section, we analyze the channel estimation performance of the LRS system with the LMMSE estimator. The estimated channels include the direct channel h_d and the column vectors h_i of the cascaded channel H_LRS. When we estimate the direct channel in the 1st subphase, all reflecting elements of the LRS are OFF, and the system simplifies to a multiple-input single-output (MISO) communication system. The corresponding estimation performance was given in Theorem 1 of [8] and is restated in Lemma 1 as follows.
Lemma 1: The estimated direct channel ĥ_d using the LMMSE estimator can be represented as ĥ_d = x* C_d Y_d^{-1} y_d, where Y_d is the covariance matrix of the received pilot signal y_d. The LMMSE is the trace of the error covariance matrix, tr(M_d), with M_d = C_d − p_UE C_d Y_d^{-1} C_d. When we estimate the LRS channel h_i, one important difference from the direct channel h_d is that there exist hardware impairments on the LRS, and these impairments should be taken into consideration. Another important difference is that the signal received at the BS in the (i + 1)th subphase consists of two parts: the signal transmitted through the direct channel and the signal transmitted through the LRS channel. The signal ỹ_i transmitted through the LRS channel can be obtained by subtracting the signal y_d in Eq. (2) from the signal y_i in Eq. (3), i.e., ỹ_i = y_i − y_d (Eq. (7)). It should be noted that the additive Gaussian noise cannot be eliminated, and the noise term in ỹ_i is the superposition of those in y_d and y_i, which still obeys a Gaussian distribution. Similarly, the power of the residual distortion noise caused by hardware impairments is superposed: the distortion noise covariance at the BS in Eq. (7) is the sum of the distortion noise covariances of the two subphases. In addition, we omit the superposed term h_d η_UE in Eqs. (2) and (3), since its value is very small in practice.
Theorem 1: The estimated LRS channel ĥ_i obtained from the separated signal ỹ_i using the LMMSE estimator is ĥ_i = A* ỹ_i, where the detector matrix A* depends on C_i and on the covariance matrix Y_i of the separated signal ỹ_i. The LMMSE is the trace of the error covariance matrix, tr(M_i), obtained by substituting A* into the error covariance expression. Proof: The estimated LRS channel ĥ_i using the LMMSE estimator has the form ĥ_i = A ỹ_i, where A is the detector matrix that minimizes the mean square error (MSE). According to the definition of the MSE, the MSE equals the trace of the error covariance matrix, tr(M_i), with M_i = E{(h_i − A ỹ_i)(h_i − A ỹ_i)^H} (Eq. (10)). By substituting ỹ_i from Eq. (7) into Eq. (10), we obtain the MSE as a function of A (Eq. (11)). Then, the detector matrix A* that minimizes the MSE is obtained by setting the derivative of Eq. (11) with respect to A to zero. Finally, we obtain the estimated LRS channel ĥ_i in Eq. (8), and by substituting A* into Eq. (10) we obtain the error covariance matrix M_i in Eq. (9). Remark 1: The phase errors of the reflecting elements are random and unknown to the BS in practice, so we can only use the statistical characteristics of ∆θ_i to estimate the LRS channel. The result shows that the LRS hardware impairments do not affect the estimation accuracy statistically. Thus, a massive MIMO system can be replaced by an LRS-assisted system with a large number of low-quality reflecting elements and a moderate number of high-quality antennas, which causes a tolerable decrease in estimation accuracy but can reduce hardware cost substantially. In addition, the estimation accuracy decreases on account of the superposition of noise/distortion power caused by the subtraction operation on the signals, and more accurate estimation methods are needed to compensate for this loss.
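For concreteness, a generic LMMSE estimator of the kind used in Lemma 1 and Theorem 1 can be sketched in a few lines of NumPy. The sketch assumes a model y = h·x + w with known prior covariance C of h and known covariance C_noise of the aggregate distortion-plus-thermal-noise term w (the small h·η_UE cross-term is neglected, as in the paper); it is a simplified illustration, not the paper's exact closed-form expressions.

```python
import numpy as np

def lmmse_estimate(y, x, C, C_noise):
    """LMMSE estimate of h from y = h * x + w, with h ~ CN(0, C), w ~ CN(0, C_noise).

    y       : received pilot vector, shape (M,)
    x       : deterministic pilot symbol (complex scalar)
    C       : prior channel covariance, shape (M, M)
    C_noise : covariance of distortion plus thermal noise, shape (M, M)
    """
    Y = np.abs(x) ** 2 * C + C_noise                    # covariance of y
    W = np.conj(x) * C @ np.linalg.inv(Y)               # LMMSE filter
    h_hat = W @ y
    M_err = C - np.abs(x) ** 2 * C @ np.linalg.inv(Y) @ C  # error covariance
    return h_hat, M_err
```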
Corollary 1: The average estimation error per antenna is independent of the number of BS antennas, but depends on the number of reflecting elements of the LRS (the number of estimations increases with the number of reflecting elements). Contrary to the ideal-hardware case, in which the error variance converges to zero as p_UE → ∞, the transceiver hardware impairments limit the estimation performance.
Proof: Consider the special case of C_d = λI and C_i = λI. The covariance matrix of the direct channel estimation error is M_d = (λ − p_UE λ²/(p_UE λ κ_d + σ²_BS)) I, where κ_d = 1 + κ_UE + κ_BS(1 + κ_UE). The covariance matrix of the estimation error of the channel through the i-th element of the LRS is M_i = (λ − p_UE λ²/(p_UE λ κ_i + 2σ²_BS)) I, where κ_i = 1 + κ_UE + 3κ_BS(1 + κ_UE). In the high pilot power regime, M_d → λ(1 − 1/κ_d) I and M_i → λ(1 − 1/κ_i) I as p_UE → ∞. Thus, perfect estimation accuracy cannot be achieved in practice, not even asymptotically. We compare the estimation performance of the direct channel and the LRS channel with different impairment levels to illustrate the difference between them, as well as the estimation accuracy limit caused by hardware impairments. We assume that the number of BS antennas is M = 20, and the hardware impairment coefficients are chosen from the set {0, 0.05², 0.10², 0.15²}. The channel covariance matrix is generated by the exponential correlation model from [13]. We notice that the estimation error increases with the impairment level, and hardware impairments create non-zero error floors. In addition, the estimation error of the LRS channel is larger than that of the direct channel. To numerically illustrate the effect of different numbers of reflecting elements on the channel estimation performance, we let the number range from 0 to 200. We consider three models to generate the channel covariance matrix: 1) the exponential correlation model with correlation coefficient r = 0.7 [13]; 2) the one-ring model with 20 degrees angular spread; and 3) the one-ring model with 10 degrees angular spread [14]. Fig. 4 shows that the channel estimation error increases with the number of reflecting elements and decreases with increasing SNR. We notice that the estimation error is less than 0 dB with a large number of reflecting elements when the SNR is over 50 dB.
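As a quick numerical illustration of the error floor discussed above, the following snippet evaluates a per-antenna error of the form λ − p_UE λ²/(p_UE λ κ_d + σ²_BS) for C_d = λI over a range of pilot powers and approximates the asymptotic floor λ(1 − 1/κ_d). Parameter values are illustrative, and the snippet should be read as a sketch of the floor behavior rather than a reproduction of the paper's figures.

```python
import numpy as np

def mse_direct(p_ue, lam=1.0, kappa_ue=0.05**2, kappa_bs=0.05**2, sigma2=1.0):
    """Per-antenna LMMSE error of the direct channel for C_d = lam * I."""
    kappa_d = 1 + kappa_ue + kappa_bs * (1 + kappa_ue)
    return lam - p_ue * lam**2 / (p_ue * lam * kappa_d + sigma2)

snr_db = np.arange(0, 61, 10)                 # pilot SNR sweep (sigma2 = 1, so SNR = p_UE)
errors = [mse_direct(10 ** (s / 10)) for s in snr_db]
floor = mse_direct(1e12)                      # large pilot power approximates p_UE -> inf
print(np.round(errors, 4), "floor ~", round(float(floor), 4))
```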
IV. POWER SCALING LAW OF USER
Many related works [1], [15], [16] show that the emitted power can be reduced with no reduction in performance by utilizing the array gain of multi-antenna systems: one can reduce the transmit power as 1/M^α, 0 < α < 1/2, and still achieve non-zero spectral efficiency as M → ∞. In this section, we quantify the power scaling law for the LRS-assisted wireless communication system. Considering the maximum-ratio combining (MRC) detector, which achieves fairly good performance [1], [17], we treat the cases of perfect channel state information and of estimated channel state information with error. The received signal at the BS with non-ideal hardware is y = (h_d + G Φ̃ h_r)(x + η_UE) + η_BS + n, where h_d, G and h_r are mutually independent matrices whose elements are i.i.d. zero-mean random variables. According to the law of large numbers, (1/M) h_d^H h_d → σ²_d as M → ∞ (Eq. (16)), where σ²_d = E{|h_{d,i}|²} and h_{d,i} is an element of the channel vector h_d. According to the rule of matrix multiplication, the m-th element of G h_r is Σ_j G_{m,j} h_{r,j}, where G_{i,j} is an element of the channel matrix G and h_{r,j} is an element of the channel vector h_r. As G h_r/N is a random vector similar to h_d, we reuse Eq. (16) to obtain the corresponding limit (1/(M N²)) (G h_r)^H (G h_r) → k σ²_d (Eq. (18)), where k is a constant determined by the variances of the elements of G and h_r. 1) BS with perfect channel state information: We first consider the case where the BS can obtain perfect channel state information. The MRC detector vector is A = h_d + G Φ h_r. As illustrated in Section III, the phase error of the LRS is random and unknown to the BS, so the MRC detector is built from h_d + G Φ h_r rather than h_d + G Φ̃ h_r. The transmitted signal is detected by multiplying the received signal y by A^H, i.e., r = A^H y. Applying the detector vector A gives r = h^H h̃ (x + η_UE) + h^H (η_BS + n), where, for simplicity, h denotes h_d + G Φ h_r and h̃ denotes h_d + G Φ̃ h_r. In addition, the phase noise of the LRS does not change the signal power, and the expectation of ∆θ_i is zero. We then obtain the achievable uplink rate in Eq. (20). Proposition 1: Assume that the BS has perfect channel state information and the transmit power of the user is scaled with M and N according to p_UE = E_UE/(M + k M N²) for a fixed E_UE; then the achievable rate converges to the value given in Eq. (22) as M, N → ∞.
Proof: Substituting p_UE = E_UE/(M + k M N²) into Eq. (20), and using the law of large numbers reviewed in Eqs. (16) and (18), we obtain the convergence value of the achievable rate as M, N → ∞ given in Eq. (22).
2) BS with imperfect channel state information: In practice, the BS has to estimate the channel, and there exists estimation error, as discussed in Section III. For simplicity, we denote the estimation error as E = h_est − h. Referring to Eq. (33) in [1], the elements of E are random variables with zero mean and variance β/(p_UE β + 1), where β = 1 + k N² σ²_d. The received signal can be rewritten as in Eq. (23). Similar to Eq. (38) in [1], the achievable rate of the uplink channel is given in Eq. (21), where each element of h^H_est is a random variable with zero mean and variance p_UE β²/(p_UE β + 1).
Proposition 2: Assume that the BS has imperfect channel state information and the transmit power of the user is scaled with M and N according to p_UE = E_UE/(√M (1 + k N²)); then the achievable rate converges to the value given in Eq. (24) as M, N → ∞. Proof: The proof follows similar procedures to Proposition 1. Substituting p_UE = E_UE/(√M (1 + k N²)) into Eq. (21), and using the law of large numbers reviewed in Eqs. (16) and (18) along with the variances of the elements of the estimation error vector E and the channel estimate vector h_est, we obtain the convergence value of the achievable rate as M, N → ∞ in Eq. (24).
Remark 2: Proposition 1 shows that with perfect channel state information and large M and N, the performance of an LRS-assisted system with an M-antenna BS, an N-element LRS, and user transmit power E_UE/(M(1 + kN²)) is equal to the performance of a single-input single-output (SISO) system with transmit power E_UE. Proposition 2 shows that with imperfect channel state information and large M and N, the performance of an LRS-assisted system with an M-antenna BS, an N-element LRS, and user transmit power E_UE/(√M(1 + kN²)) is equal to the performance of a SISO system with transmit power E²_UE σ²_d. Proposition 2 also implies that the transmit power can be cut proportionally to E_UE/(M^α (1 + kN²)^{2α}), where α ≤ 1/2; if α > 1/2, the achievable rate of the uplink channel converges towards zero as M → ∞ and N → ∞.
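The two scaling laws are easy to tabulate. The following snippet evaluates the user transmit power prescribed by Propositions 1 and 2 for a few (M, N) pairs, with the model-dependent constant k set to 1 purely for illustration; it shows how quickly the required power drops once reflecting elements are added.

```python
import numpy as np

def p_perfect(E_ue, M, N, k=1.0):
    """Proposition 1 (perfect CSI): p_UE = E_UE / (M * (1 + k * N**2))."""
    return E_ue / (M * (1.0 + k * N**2))

def p_imperfect(E_ue, M, N, k=1.0):
    """Proposition 2 (estimated CSI): p_UE = E_UE / (sqrt(M) * (1 + k * N**2))."""
    return E_ue / (np.sqrt(M) * (1.0 + k * N**2))

E_ue = 1.0  # reference power of the baseline single-antenna link
for M, N in [(20, 0), (20, 50), (20, 100), (100, 100)]:
    print(f"M={M:3d}, N={N:3d}: perfect CSI {p_perfect(E_ue, M, N):.2e}, "
          f"imperfect CSI {p_imperfect(E_ue, M, N):.2e}")
```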
To numerically illustrate the power scaling law in the LRS-assisted wireless communication system, we compare the spectral efficiency of the LRS-assisted system with that of MISO and SISO systems. Fig. 5 shows the uplink spectral efficiency versus the SNR for M = 20, N = 100, and κ_BS = κ_UE = 0.05² with perfect and imperfect channel state information, where the SNR is defined as p_UE/σ²_BS. The LRS-assisted system reaches the limit of spectral efficiency caused by hardware impairments much faster than the MISO and SISO systems, i.e., it has a high spectral efficiency at low SNR. Fig. 6 shows the spectral efficiency versus the number of BS antennas for κ_BS = κ_UE = 0.05² and p_UE/σ²_BS = 10 dB with different numbers of reflecting elements. The spectral efficiency increases with the numbers of BS antennas and reflecting elements, and converges to the finite value given above. These results confirm that we can scale down the transmit power of the user according to the power scaling laws given in Propositions 1 and 2.
V. CONCLUSION
In this paper, we studied an LRS-assisted communication system with hardware impairments. In particular, we studied the channel estimation performance as well as the power scaling law in the cases of both perfect and imperfect channel state information. The results are encouraging because we can use more low-cost reflecting elements instead of expensive antennas to achieve higher power scaling. There are other important issues that are not addressed; e.g., the estimation error increases with the number of reflecting elements, and the estimation error of the LRS channel is larger than that of the direct channel. These problems create a demand for more accurate estimation methods and more efficient communication protocols in future work.
Concomitant transformation of monoclonal gammopathy of undetermined significance to multiple myeloma and of essential thrombocythemia to acute biphenotypic leukemia 37 years after initial diagnosis
TO THE EDITOR: The occurrence of monoclonal gammopathy of undetermined significance (MGUS) and essential thrombocythemia (ET) in the same patient is quite rare. With an anecdotal purpose, we herein report the long-term clinical history of a patient who presented with simultaneous evolution to multiple myeloma (MM) and to acute biphenotypic leukemia from MGUS and ET, respectively, the latter conditions having been diagnosed simultaneously 37 years prior to this case.
CASE
The occurrence of monoclonal gammopathy of undetermined significance (MGUS) and essential thrombocythemia (ET) in the same patient is quite rare [1][2][3], usually manifesting as an incidental finding. In addition, the coexistence of multiple myeloma (MM) with ET has also been rarely reported [4][5][6][7][8]. Moreover, the evolution of MGUS to MM simultaneously with blastic transformation of ET in the form of acute biphenotypic leukemia, as observed in this report, represents an exceptionally rare occurrence. With an anecdotal purpose, we herein report the long-term clinical history of a patient who presented with concomitant evolution of MGUS to MM and of ET to acute biphenotypic leukemia. In 2010, a 77-year-old man presented to our center with increasing thrombocytosis and monoclonal paraproteinemia (IgG lambda). In 1975, at another center, he had been diagnosed with MGUS associated with ET. The patient was managed according to the prevalent clinical guidelines and received low-dose acetylsalicylic acid (LD-ASA). Upon presentation to our clinic (35 years after the original diagnosis and treatment), he reported that for several years he had not been followed up by periodic laboratory evaluations and hematologic examinations. Therefore, a comprehensive work-up, including a bone marrow (BM) aspirate and trephine biopsy, was performed. Megakaryocytic hyperplasia and clustering consistent with ET, along with an infiltration of IgG kappa clonally mature plasma cells (PC) consistent with MGUS, was noted. Janus kinase 2 (JAK2) V617F, P190, and P210 mutation analyses revealed no abnormalities. In addition, no defining features potentially associated with POEMS syndrome [9], which may be suspected on the basis of the coexistence of a JAK2-negative thrombocytosis with a monoclonal component, were found on comprehensive work-up; in particular, no organomegalies, skin changes, peripheral nerve abnormalities, or endocrinopathy were present. The radiological evaluation of his skeleton ruled out both lytic and sclerotic bone changes. Human immunodeficiency virus, hepatitis C virus, and hepatitis B virus infections were ruled out by serological evaluations. Therefore, the patient was diagnosed with IgG lambda MGUS concomitant with JAK2-negative ET. Given the remarkable thrombocytosis (platelet count, >1,000×10^9/L), hydroxyurea was added to LD-ASA. Thereafter, the patient was regularly followed up until 2 years later, when his hemogram showed pancytopenia concomitant with an increase in monoclonal protein concentration to higher than 4 g/dL. At that time, examination of a BM aspirate revealed a 30% proportion of clonal IgG kappa PC along with 20% blasts; the latter cells showed coexpression of lymphoid and myeloid markers, being positive for CD34, CD13, CD33, HLA-DR, CD19, and CD22. BM trephine biopsy (Fig. 1) confirmed BM infiltration by PC and blasts. Conventional cytogenetics and fluorescence in situ hybridization revealed a normal karyotype; negative JAK2 V617F, P190, and P210 mutation analyses were confirmed. Unfortunately, no other molecular studies were performed. Physical examination revealed no remarkable findings; in particular, neither upper abdominal organomegaly nor superficial adenomegaly was palpable. Laboratory and radiologic evaluations revealed moderate Bence Jones proteinuria (lambda type) and mild pancytopenia, but no other abnormalities were found. In particular, serum calcium and comprehensive metabolic, renal, hepatic, and coagulation panel results were normal.
In addition, skeletal survey showed neither lytic nor sclerotic lesions throughout the axial and appendicular skeleton. The diagnosis of MM coexisting with secondary acute biphenotypic leukemia was made. The patient was evaluated as a possible candidate for treatment with hypomethylating agents, but his condition suddenly deteriorated and he died of pneumonia.
This case lacks practical therapeutic implications and reliable indications for the management of this uncommon occurrence, and our report has only anecdotal value. However, the overlapping occurrence of acute biphenotypic leukemia transformed from ET and MM is extremely rare. We speculate that the synchronous evolution of ET and MGUS along with coexpression of lymphoid antigens by blastic cells could suggest a common origin of these 2 malignancies, potentially evolving from a common precursor by progressive transformation to more aggressive disorders [5]. However, this hypothesis remains to be investigated.
Pasquale Niscola 1, Gianfranco Catalano 1, Stefano Fratoni 2, Laura Scaramucci 1, Paolo de Fabritiis 1, Tommaso Caravita 1

Waldenstrom's macroglobulinemia presenting with lytic bone lesions: a rare presentation

TO THE EDITOR: Lymphoplasmacytic lymphoma (LPL) is a neoplasm of small B lymphocytes, plasmacytoid lymphoid cells, and plasma cells that usually involves the bone marrow and, sometimes, the lymph nodes and spleen, and that does not fulfill the criteria for any other small B-cell lymphoid neoplasm that may also show plasmacytic differentiation [1]. Waldenstrom's macroglobulinemia (WM) comprises a significant proportion of LPL cases and is characterized by bone marrow involvement and an IgM monoclonal gammopathy of any concentration [2].
When WM was first described, the general belief was that it did not extend to the skeletal system. However, following several reports of lytic bone lesions in WM [3][4][5][6], this belief has been challenged, and it is now considered that bone involvement in WM may not be unusual. The feature commonly observed in these previously reported cases was a predominantly plasmacytic morphology in the bone marrow of the WM patients with lytic bone lesions. Contrary to these reports, we hereby report a rare case of WM with lytic bone lesions in which the bone marrow showed predominantly lymphocytic infiltration with very few plasmacytic cells.
CASE
A 65-year-old male patient, who had a 10-year history of hypertension and type II diabetes mellitus, presented with complaints of pain and a tingling sensation in both lower limbs over the previous year and in both upper limbs over the previous 6 months. He also had a history of weight
A Novel Peptide that Disrupts the Lck-IP3R Protein-Protein Interaction Induces Widespread Cell Death in Leukemia and Lymphoma
There is increasing evidence that the T-cell protein Lck is involved in the pathogenesis of chronic lymphocytic leukemia (CLL) as well as other leukemias and lymphomas. We previously discovered that Lck binds to domain 5 of inositol 1,4,5-trisphosphate receptors (IP3R) to regulate Ca2+ homeostasis. Using bioinformatics, we targeted a region within domain 5 of IP3R-1 predicted to facilitate protein-protein interactions (PPIs). We generated a synthetic 21 amino acid peptide, KKRMDLVLELKNNASKLLLAI, which constitutes a domain 5 sub-domain (D5SD) of IP3R-1 that specifically binds Lck via its SH2 domain. With the addition of an HIV-TAT sequence to enable cell permeability of D5SD peptide, we observed widespread, Ca2+-dependent cell killing of hematological cancer cells when the Lck-IP3R PPI was disrupted by TAT-D5SD. All cell lines and primary cells were sensitive to D5SD peptide, but malignant T-cells were less sensitive compared with B-cell or myeloid malignancies. Mining of RNA-seq data showed that LCK was expressed in primary diffuse large B-cell lymphoma (DLBCL) as well as acute myeloid leukemia (AML). In fact, LCK shows a similar pattern of expression to many well-characterized AML oncogenes and is part of a protein interactome that includes FLT3-ITD, Notch-1, and Kit. Consistent with these findings, our data suggest that the Lck-IP3R PPI may protect malignant hematopoietic cells from death. Importantly, TAT-D5SD showed no cytotoxicity in three different non-hematopoietic cell lines; thus its ability to induce cell death appears specific to hematopoietic cells. Together, these data show that a peptide designed to disrupt the Lck-IP3R PPI has a wide range of pre-clinical activity in leukemia and lymphoma.
Introduction
The lymphocyte-specific protein tyrosine kinase (Lck) is a member of the Src family of protein tyrosine kinases first identified in the 1980s [1,2]. Since then, the function of Lck has been extensively investigated and many mechanistic insights into the regulation of its activity have been revealed. However, because Lck is primarily expressed in T cells, few reports have investigated the role of Lck in malignant B cells or other cells of hematopoietic origin [3]. Recent studies are beginning to elucidate how the alteration in expression and/or activity of Lck may result in pathological conditions like chronic lymphocytic leukemia (CLL) and other blood cancers [3]. We among other groups have shown that Lck is aberrantly expressed in CLL cells compared with normal B cells [4][5][6][7][8][9][10][11]. CLL cells with high expression of Lck show elevated BCR signaling capacity, cell survival and protection from glucocorticoid-induced apoptosis [4,9,10]. Moreover, selective inhibition of Lck is sufficient to block BCR signaling in CLL [9]. While it is not known how Lck regulates downstream BCR signaling, it may be required for the activation of PLCγ [11], which is responsible for the generation of critical second messengers such as IP 3 , DAG, and subsequent Ca 2+ signaling mediated by IP 3 R [12].
Using a combination of biochemical and bioinformatic approaches, our laboratory discovered that Lck binds to IP 3 R at a 21 amino acid region (2078-2098) KKRMDLVLELKNNASKLLLAI [13]. In T cells, the Lck-IP 3 R protein-protein interaction (PPI) regulates the pattern of Ca 2+ flux by a mechanism that is independent of IP 3 . A synthetic version of KKRMDLVLELKNNASKLLLAI, referred to as domain 5 subdomain (D5SD), inhibited binding of Lck to IP 3 R-1 and shifted the pattern of Ca 2+ signaling after strong T cell receptor activation. In this study, we hypothesized that D5SD peptide would affect cell proliferation and/or death in hematological malignancies by sequestering Lck or potentially other proteins that can bind to domain 5 of IP 3 Rs. Because Lck has emerged as a druggable target for many cancers [3], these data have important clinical implications. CLL, mantle cell lymphomas, T-cell neoplasms, and many lymphomas of germinal center origin have been found to express Lck [4][5][6][7][8][9][10][11][14]. Recent studies have also linked Lck to the pathogenesis of AML, suggesting that it may function as an oncogene in myeloid malignancies [15,16].
Peptide synthesis
Peptides were synthesized by GenScript and purified by liquid chromatography/mass spectrometry to > 95% purity. The D5SD sequence is KKRMDLVLELKNNASKLLLAI. The control peptide sequence is NLNHSDQFAENLSHICGGHG. The TAT cell-penetrating peptide sequence (RKKRRQRRRGG) was added to the N-terminus of each peptide. The sequence for the SH2-binding phospho-peptide was EGQY*EEIP; the dephosphorylated peptide EGQYEEIP was used as a control.
Cell culture
All procedures followed the guidelines and regulations in accordance with IRB protocol ICC2902/11-02-28 of Case Western Reserve University Cancer Center/University Hospitals Cleveland Medical Center. The use of patient samples was approved by the IRB of Case Western Reserve University School of Medicine. WEHI7.2 and HEK-293 cells were cultured in DMEM containing 10% FCS, 100 μM non-essential amino acids, and 2 mM L-glutamine. Jurkat, CEMC7, Raji, RS11846, MEC1, HL60, NB4, OCI-AML3, NL20, and NIH 3T3 cells were cultured in RPMI-1640 medium with 10% FBS, 100 μM non-essential amino acids, and 2 mM L-glutamine. OCI-LY-10 cells were cultured in IMDM with 20% FBS, 2 mM glutamine and 50 μM 2-mercaptoethanol. All cell lines were incubated in a humidified incubator at 37°C in 5% CO 2 except for WEHI7.2 cells, which were incubated in 7% CO 2 . All cell lines except for WEHI7.2 and MEC1 were purchased from the American Type Culture Collection. The WEHI7.2 cell line was provided by University of California San Francisco and MEC1 from the DSMZ (Germany). Cell lines were routinely tested for mycoplasma.
RNA interference
pLKO.1 lentiviral vectors with shRNAs targeting Lck or Fyn were transduced along with pMD2G (env) and pR8.74 (gag and pol) vectors into 293T cells to generate viral particles. Viral particles were subsequently incubated with WEHI7.2 cells in the presence of puromycin to positively select for transduced cells. Stably transduced, knock-down cells were assessed for Lck and Fyn expression by western blotting.
Biotin-streptavidin pull-down assays
Biotin-labelled peptides were immobilized on streptavidin-coated beads and incubated with a biotin-containing buffer to block streptavidin molecules not bound to peptides. Beads were washed three times with Tris Buffered Saline (TBS; 25 mM Tris-HCl, 0.15 M NaCl, pH 7). Cell lysates, or purified GST-tagged SH2-Lck, were incubated with immobilized peptides for 18 h at 4°C. Beads were washed extensively and incubated for 5 min with 50 μL elution buffer prior to centrifugation. Protein analyte including eluate, beads, washes, and flow-through were analyzed by western blotting. The purified GST-tagged SH2-Lck domain was visualized by Coomassie Brilliant Blue staining on an SDS gel.
Co-immunoprecipitation
Cells were washed in cold PBS and pellets were resuspended in Tris (50 mM), NaCl (100 mM), EDTA (2 mM), CHAPS (1%), NaF (50 mM), Na3VO4 (1 mM), phenylmethylsulfonyl fluoride (1 mM) and protease/phosphatase inhibitor cocktails. When applicable, total protein was incubated with D5SD or control peptides for 1 h at 4°C prior to immunoprecipitation with anti-IP 3 R-1 antibody. Immunocomplexes were subsequently incubated with protein G-agarose beads, washed extensively with cold PBS and CHAPS lysis buffer followed by denaturation and boiling in SDS sample buffer.
Cell viability and apoptosis
Cell viability was quantified by Trypan Blue dye exclusion. Apoptotic nuclei were visualized and quantified by Hoechst 33342 dye using an Axiovert S100 Fluorescence Microscope (Zeiss) equipped with a 40× oil objective (Zeiss) with excitation/emission at 350/535 nm. OCI-AML3 cells were subjected to Guava ViaCount (Millipore Sigma) to assess viable, dead, and apoptotic cell fractions and analyzed by flow cytometry. WEHI7.2 cells were subjected to dual staining with annexin-V and propidium iodide as previously described [4].
Measurement of metabolically active cells
Metabolically active cells were assessed using CTG reagent (Cell Titer Glo). CTG was added to each well in a 96-well plate after a 24 h treatment and 10 min incubation at room temperature. Briefly, CTG reagent uses ATP as an indicator of metabolically active cells. The enzyme luciferase acts on luciferin in the presence of Mg 2+ and ATP to produce oxyluciferin and to release energy in the form of luminescence.
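Readouts from this kind of luminescence assay are usually expressed relative to a vehicle-treated control. The snippet below is a generic normalization sketch, not vendor software; the luminescence values and condition labels are placeholders I introduce for illustration.

```python
import numpy as np

# Placeholder raw luminescence readings (triplicate wells per condition).
readings = {
    "vehicle":        [95000, 98000, 97000],
    "TAT-D5SD 10 uM": [70000, 68000, 72000],
    "TAT-D5SD 20 uM": [41000, 39000, 43000],
}

baseline = np.mean(readings["vehicle"])
for condition, values in readings.items():
    pct_active = 100.0 * np.mean(values) / baseline  # % metabolically active cells
    print(f"{condition:>15}: {pct_active:.1f}% of control")
```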
Measurement of cell proliferation
IncuCyte ZOOM (Essen Biosciences) was used as a measure of cell proliferation. NucLight Red-expressing cells were cultured in 96-well plates and phase-contrast images were taken every 2 h to determine cell confluency.
Measurement of intracellular Ca 2+
Cells were pre-treated with the intracellular Ca 2+ chelator BAPTA-AM (10 μM) for 30 min, and single-cell Ca 2+ traces were recorded in real time with a Zeiss Axiovert S100 microscope (Carl Zeiss AG). Excitation wavelengths were programmed to alternate between 340 and 380 nm at 1 s intervals to monitor changes in intracellular Ca 2+ concentration in fura-2-loaded cells, as previously described [38].
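The quantity tracked in such single-cell traces is the fura-2 340/380 nm excitation ratio. As a minimal, hypothetical sketch (not the acquisition software used in this study), the background-corrected ratio could be computed from paired intensity arrays as follows; the variable names and background values are illustrative assumptions.

```python
import numpy as np

def fura2_ratio(f340, f380, bg340=0.0, bg380=0.0):
    """Background-corrected fura-2 excitation ratio (340/380 nm).

    f340, f380 : 1-D arrays of fluorescence intensities sampled at 1 s
                 intervals for one cell (illustrative inputs).
    bg340, bg380 : background intensities to subtract (assumed values).
    """
    f340 = np.asarray(f340, dtype=float) - bg340
    f380 = np.asarray(f380, dtype=float) - bg380
    # Avoid division by zero if the 380 nm signal drops to background.
    f380 = np.where(f380 <= 0, np.nan, f380)
    return f340 / f380

# Example: a rising 340/380 ratio indicates increased cytosolic Ca2+.
ratio = fura2_ratio([120, 125, 180], [200, 195, 150], bg340=10, bg380=12)
print(ratio)
```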
Prediction of secondary structure
The GOR IV algorithm was used to predict the secondary structure of Domain 5 within the IP 3 R protein sequence [17]. A region of domain 5 was chosen based on the high probability of alpha-helices vs other secondary structure elements. Output of the GOR IV algorithm was obtained using ALIGNSEC, a computational module that is part of ANTHEPROT package and available at http://antheprot-pbil.ibcp.fr [39].
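The selection step described here amounts to scanning per-residue helix scores for a window that is predominantly helical. The following sketch illustrates that idea only; it is not the GOR IV implementation, and the score array and window length are assumed inputs.

```python
import numpy as np

def best_helical_window(helix_scores, window=21):
    """Return (start_index, mean_score) of the window with the highest
    mean per-residue alpha-helix probability.

    helix_scores : per-residue helix probabilities, e.g. the 'H' state
                   scores produced by a GOR-style predictor (assumed to
                   be available as a plain array here).
    window       : window length; 21 matches the D5SD peptide length.
    """
    scores = np.asarray(helix_scores, dtype=float)
    if scores.size < window:
        raise ValueError("sequence shorter than window")
    # Rolling mean over every window of the requested length.
    means = np.convolve(scores, np.ones(window) / window, mode="valid")
    start = int(np.argmax(means))
    return start, float(means[start])
```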
Molecular modeling
Molecular modeling was performed using GalaxyPepDock software on an interactive web server [40]. Briefly, the modeling algorithm is based on a flexible structure energy-based approach. Using previously identified structures within the protein data bank (PDB), input peptide sequences are aligned for similarities in molecular structure to calculate both a similarity and interaction score. The structural interaction analysis generates a number of templates which are subjected to energy optimization. This process increases the accuracy of the modeling by sampling backbone and side-chain flexibilities for both the protein and peptide. After each model is refined by energy optimization, the output containing up to 10 predictive models is generated. GalaxyPepDock is freely available at http://galaxy.seoklab.org.
PPI network analysis
PPI networks were generated using the Protein Interaction Network Analysis (PINA version 3.0) [41]. Briefly, this program integrates data from multiple protein databases (IntAct version 4.2.15, BioGRID version 3.5.185, MINT May 21, 2020, DIP version 20170205, and HPRD release 9) and generates a non-redundant protein interactome that aligns with RNAseq expression data from the TCGA AML dataset containing 173 patients and 16,731 genes. Each node represents a potential PPI with the query protein. Nodes are color-coded based on tumor-specific expression and survival data. Edges connecting each node represent the correlation between the query protein and the interacting protein. The relative width of each edge designates the strength of the correlation (Pearson R 2 correlation coefficient). PINA is freely available at https://omics.bjcancer.org/pina/.
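The edge widths described above reduce to pairwise Pearson correlations between the query gene and each interactor across the RNA-seq cohort. A minimal sketch of that calculation is shown below; the expression dictionary and gene names are placeholders, not the PINA pipeline itself.

```python
import numpy as np

def edge_weights(expr, query, interactors):
    """Pearson correlation between a query gene and each interactor.

    expr        : dict mapping gene name -> 1-D array of expression values
                  across patients (placeholder data structure).
    query       : gene symbol of the query protein, e.g. "LCK".
    interactors : list of gene symbols connected to the query node.
    """
    q = np.asarray(expr[query], dtype=float)
    weights = {}
    for gene in interactors:
        g = np.asarray(expr[gene], dtype=float)
        weights[gene] = np.corrcoef(q, g)[0, 1]  # Pearson correlation coefficient
    return weights

# Example with toy data standing in for TCGA AML expression values.
toy = {"LCK": [1.2, 3.4, 2.2, 5.0], "FLT3": [0.9, 3.0, 2.5, 4.8]}
print(edge_weights(toy, "LCK", ["FLT3"]))
```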
Genomic and proteomic data mining
Heat maps and scatterplots of expression data were obtained using GEPIA2, a computational web server for large-scale expression profiling and interactive analysis [42], and the EMBL-EBI expression atlas [43]. GEPIA2 analyzes RNAseq data from 9,736 tumors and 8,587 normal samples from the TCGA database. In this study, we utilized AML data from 173 patients with matched normal controls and DLBCL data from 47 patients and matched normal controls. GEPIA2 is freely available at http://gepia2.cancer-pku.cn/. High resolution Fourier mass spectrometry data was obtained from the human proteome map [44] and visualized with the EMBL-EBI expression atlas [43].
Experimental reproducibility and statistical analysis
Data are presented as the mean ± SD or SEM as appropriate; a minimum of three measurements were obtained from at least three independent experiments. A Student's t test was used to determine statistical significance between two treatment groups. A two-tailed p-value of less than 0.05 was considered significant.
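For reference, the two-group comparison described here corresponds to a standard two-tailed, two-sample Student's t test on triplicate measurements. A minimal sketch using SciPy is given below; the viability values are made-up placeholders, not data from this study.

```python
from scipy import stats

# Placeholder triplicate measurements (e.g., % cell death) for two groups.
tat_ctrl = [8.1, 10.4, 9.2]
tat_d5sd = [48.5, 52.0, 55.3]

# Two-sample, two-tailed Student's t test (equal variances assumed).
t_stat, p_value = stats.ttest_ind(tat_ctrl, tat_d5sd)
significant = p_value < 0.05  # significance threshold used in this study
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {significant}")
```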
Design of D5SD Peptide
As noted, Lck binds IP 3 R at Domain 5 which corresponds to a 21 amino acid sequence from IP 3 R-1 [13]. A synthetic peptide, domain 5 sub-domain (D5SD), was generated to competitively bind Lck and displace it from IP 3 R (Fig. 1A). Moreover, we designed and synthesized a scrambled peptide as a control, also from the IP 3 R-1 sequence, that had no significant homology to any mammalian protein (Fig. 1A). When generating the D5SD peptide, the GOR IV algorithm was utilized to predict the secondary structure of domain 5 within IP 3 R-1 [17] and was validated by an additional algorithm developed by Rost and Sander [18]. As shown in Fig. 1B (top), there were two target regions (blue) within domain 5 of IP 3 R-1 that were strongly predicted to form alpha helices. We chose the 21 amino acid sequence starting at position 155, because the two lysine residues at the N terminus facilitate protein-protein interactions (PPIs) [18,19]. As shown in Fig. 1B (bottom), this region is predicted to form alpha helices vs beta sheets or random coils. While our experiments focused on using the human IP 3 R-1 fragment, IP 3 R-2 and IP 3 R-3 fragments are shown for comparison because they are highly homologous sequences (Fig. 1A).
D5SD Peptide Binds to the SH2 domain of Lck
To visualize how D5SD peptide might bind Lck, we used a computational approach that models PPIs with a high degree of accuracy. Using a 3D molecular structure of the Lck SH2 domain from the protein data bank (PDB), we were able to predict where D5SD peptide would bind (Fig. 1C). The model shown utilizes similarity and interaction scores followed by an energy-based refinement process to determine the most optimal molecular flexibility for the PPI. To validate this model, we assessed whether biotin-labelled D5SD peptide would bind a purified GST-tagged SH2 domain from Lck. Pull-down experiments revealed a small fraction of unbound SH2 in the flow-through at ~ 36kDa. However, the majority of SH2 was bound to immobilized D5SD (note the shift in migration of the SH2 band), while a lower fraction of D5SD-bound SH2 was observed in the eluate (Fig. 1D). These data suggest that D5SD peptide binds Lck within the SH2 domain.
D5SD Peptide Disrupts the Lck-IP 3 R PPI
To confirm that full-length Lck binds to D5SD peptide, we incubated biotin-labelled peptides with lysates from WEHI7.2 and Jurkat T-cell lines. Indeed, biotin-labelled D5SD peptide binds Lck in lysates from both cell lines, whereas the control peptide does not (Fig. 2A). Because our previous work on the Lck-IP 3 R PPI had been conducted in WEHI7.2 murine cells, we wanted to confirm that the endogenous Lck-IP 3 R PPI was also present in human cells; this interaction also naturally occurs in Jurkat (human) T cells (Fig. 2B). Importantly, Fig. 2C shows that D5SD peptide markedly disrupts the Lck-IP 3 R PPI. Notably, in B cells, where the Src kinase Lyn is often more abundant than Lck, we did not observe an interaction between Lyn and IP 3 R (Fig S1). This suggests that D5SD peptide may preferentially bind Lck over other Src family kinases.
Lck Helps Malignant Hematopoietic Cells Maintain Viability
Next, we hypothesized that D5SD peptide would induce cell death in lymphoid malignancies due to its ability to regulate Ca 2+ signals in T-cells [13]. In order to maximize cellular uptake of peptides in subsequent experiments, HIV-TAT sequences were added to both D5SD and control peptides (Fig S2A). Importantly, D5SD peptide without the TAT sequence had no effect on cell viability compared with an untreated control (Fig S2B). Thus, the effect of TAT-D5SD peptide on cell viability was measured in various hematological malignancies, including lymphoid and myeloid lineages. We initially tested TAT-D5SD in the two malignant T-cell lines that were used in the previously described biochemical assays. In WEHI7.2 and Jurkat T cells, we observed a modest yet significant induction of cell death in both cell lines with TAT-D5SD (20 μM) after 24 hours (Fig. 3A). To determine if Lck was important for cell survival, we created a WEHI7.2 cell line where Lck was constitutively knocked down by lentiviral-mediated shRNA. Here we show a significantly higher level of apoptosis in cells that do not express Lck, but do express Fyn (Fig. 3B). Both Lck and Fyn are highly expressed in T cells, whereas Lyn is not [20]; importantly, shRNA-mediated silencing of Fyn had no significant effect on the percentage of apoptotic cells (Fig. 3B). These data suggest a unique role for Lck in regulating cell death.
We also tested whether selectively targeting the SH2 domain of Lck affects cell viability in B-CLL cells, which co-express Lck and Lyn. The cell-permeable phospho-peptide EGQY*EEIP has been shown to preferentially bind the SH2 domain of Lck over Lyn, owing to the specificity of the YEEI sequence, which prevents activation of Lck's catalytic domain [21]. We found that the phosphorylated form of the peptide (Y*EEI), selectively targeting the SH2 domain of Lck, more than doubled the level of cell death in CLL cells (Fig. 3C). Moreover, a similar level of cell death was observed after pharmacological inhibition of Lck with the pan-Src inhibitor dasatinib (Fig. 3D). While dasatinib inhibits several Src family members, it has a higher selectivity for Lck and Src compared with Lyn and Fyn [22]. Taken together, these data suggest that Lck, in part, helps maintain the survival of malignant hematopoietic cells. Based on these data, we tested an additional T-cell malignancy and three B-cell malignancies to determine the effects of TAT-D5SD peptide on cell viability. Specific details of each cell line are shown in Table 1. All three malignant T cell lines were less sensitive to TAT-D5SD compared with malignant B-cell models (Fig. 3E). MEC1, a CLL cell line, was among the most sensitive to TAT-D5SD and showed 40% cell death after 24 hours of treatment. All cell lines expressed IP 3 R-1 to varying degrees (Fig. 3E, inset). While most B cells co-express Lyn and Lck, western blot analysis confirmed that Lck was expressed in Raji, RS11846, and MEC1 cells (Fig. 3E, inset). This is consistent with other studies which have shown Lck to be expressed in a number of B cell malignancies, including CLL and several types of B-cell lymphoma [4][5][6][7][8][9][10][11][14]. In fact, Lck protein levels are quantifiable by ultra-sensitive mass spectrometry in normal B cells (low expression) and T cells (high expression), but the protein is generally not expressed in non-hematopoietic cells (Fig. 3F). Importantly, we did not observe any significant effects on cell viability in non-hematopoietic cell lines such as NIH 3T3, NL20, and HEK-293 (Fig S3).
TAT-D5SD Peptide Induces Cell Death in Primary CLL Cells by a Ca 2+ -Dependent Mechanism
CLL is a leukemia in which constitutive signaling through the BCR pathway is important to malignant B-cell survival [23,24]. While the expression of Lck varies among primary CLL samples [4,9,10], both Lck and Lyn were readily detectable by western blot analysis (Fig. 4A). Consistent with the MS analysis in Fig. 3F, Lck was detectable by western blotting in normal B cells when blots were exposed for longer periods of time. We also examined a database of 68 primary CLL samples and 103 B-cell lymphoma samples subjected to RNA-seq. Fig S4 shows that 100% of CLL and B-cell lymphoma samples co-express Lck and Lyn. Additionally, we observed that Lck levels vary depending on how samples are prepared. For example, CLL6 showed lower levels of Lck protein when cell pellets were lysed in RIPA buffer (Fig. 4A), yet Lck was readily detected from a second blood draw when cell pellets were lysed in concentrated SDS sample buffer (Fig. 4B). In the same sample, we found that both Y394 and Y505 sites were phosphorylated, suggesting that Src kinases are constitutively active in some CLL patients (Fig. 4B). This is also evidenced by the high level of tyrosine phosphorylation present in these cells.
In order to assess the effect of TAT-D5SD on CLL cells, we obtained peripheral blood from multiple CLL patients. Cells were immediately treated with either TAT-D5SD or TAT-control peptides for 24 h. Potent induction of cell death by TAT-D5SD peptide was detected in every sample tested (Fig. 4C). The average level of cell death in CLL patient samples was 52%, suggesting the IC 50 of TAT-D5SD is ~ 20 μM (Fig. 4D). To evaluate the kinetics of cell death, TAT-D5SD and TAT-control peptides were evaluated at 3, 6, 9, 12, and 24 h. Cell death with TAT-D5SD peptide occurred as early as 3 h and increased gradually at 20 μM, whereas the TAT-control peptide had a minimal effect on cell viability (Fig. 4E). It was confirmed that the mechanism of cell death in CLL cells was apoptosis, as evidenced by Hoechst 33342 dye staining of condensed nuclei 4-6 hours after treatment with TAT-D5SD peptide (Fig. 4F). This was further confirmed by PARP cleavage in a primary CLL sample 4 h after treatment with TAT-D5SD (Fig. 4G). Because D5SD peptide binds to Lck and displaces it from IP 3 R-1, we hypothesized that this rapid induction of apoptosis was Ca 2+ -dependent. To test this, we treated the CLL-derived cell line MEC1 for 30 min with the intracellular Ca 2+ chelator BAPTA-AM prior to incubation with TAT-D5SD peptide. As shown in Fig. 4H, the addition of BAPTA-AM prevented the induction of cell death by TAT-D5SD. Interestingly, when intracellular Ca 2+ was measured by single cell digital imaging, TAT-D5SD induced Ca 2+ mobilization into the cytosol within 10 minutes following the addition of peptide (Fig. 4I). As expected, Ca 2+ responses induced by TAT-D5SD peptide were inhibited by the addition of BAPTA-AM (Figs. 4I and 4J). Together, these results suggest that the mechanism of cell death induced by TAT-D5SD in CLL is Ca 2+ -dependent.
TAT-D5SD Peptide Induces Cell Death and Inhibits Proliferation in B Lymphoma Cells
Based on publicly available RNA-seq data from the Cancer Genome Atlas (TCGA), DLBCL is another B-cell malignancy that expresses LCK at significantly higher levels compared with matched normal cells (Fig. 5A). Interestingly, LCK showed a similar pattern of upregulation in DLBCL tumor samples to BTK and SYK (Fig S5), both of which drive the BCR signaling pathway and are therapeutic targets in B-cell malignancies [25]. To evaluate cell death induction in a model of DLBCL, we evaluated OCI-LY-10 cells, which have previously been shown to express Lck (Table 1). Strikingly, a marked increase in cell death was observed after just a 2 h incubation with TAT-D5SD peptide (Fig. 5B). Apoptotic nuclear morphology was also apparent within this short time frame (4 h to 6 h) (Figs. 5C and 5D). We then subjected NucLight Red-expressing OCI-LY-10 cells to IncuCyte ZOOM live cell imaging fluorescence microscopy. This technique analyzes cell proliferation over time in a controlled-environment tissue culture chamber without a need to disrupt cellular clumps [26]. While sustained incubation of cells with TAT-ctrl peptide led to an increase in cell proliferation, cells treated with TAT-D5SD showed no proliferative capacity (Fig. 5E). These data suggest that TAT-D5SD peptide induces cell death in multiple types of B-cell lymphoma (see Table 1), including cell lines derived from more aggressive malignancies such as DLBCL.
Lck Is Aberrantly Expressed in AML and Associates with Well-Characterized Oncogenes
Lck has been implicated as a potential driver of oncogenic transformation and cell proliferation in AML and was identified as a therapeutic target by the Gene Expression Omnibus database [15,16]. However, very little is known about the potential role of Lck in AML. While its expression in AML is low compared with lymphoid malignancies, RNA-seq data from TCGA shows that LCK is expressed in nearly all 173 AML samples analyzed by TCGA (Fig. 6A). Using computational approaches, we examined Lck for potential PPIs based on a number of AML-specific genes. A PPI network within the TCGA AML dataset is shown in Fig S6A. Interestingly, Lck is shown to interact with AML-specific oncogenes such as FLT3, Notch-1 and Kit. Indeed, the SH2 domain of Lck was previously shown to interact with FLT3-ITD in B cells [27], which suggests a role for Lck in FLT3-ITD positive AML. Intriguingly, the expression of the LCK gene in AML was similar to known drivers of AML leukemogenesis and proliferation including FLT3, NOTCH1, KIT, RUNX1, RUNX2, DNMT3A, MN1, and CEBPA (Fig S6B). There were no differences in the expression of LYN and SRC between AML samples and matched normal controls.
TAT-D5SD Peptide Induces Cell Death and Inhibits Proliferation in AML Cells
Given the potential importance of Lck in AML, we tested whether TAT-D5SD peptide would have activity in AML cells. Indeed, the AML cell lines reported in Table 1 displayed a high sensitivity toward TAT-D5SD peptide (Fig. 6B). TAT-D5SD peptide rapidly induced cell death in the AML cell line OCI-AML3 in 2 h (Fig. 6C). TAT-D5SD peptide dramatically inhibited cell proliferation when analyzed by live cell imaging (Fig. 6D). Additionally, AML cells heavily rely on cellular metabolism to proliferate. We observed that the levels of metabolically active OCI-AML3 and HL-60 cells were significantly diminished with increasing concentrations of TAT-D5SD peptide (Figs. 6E and 6F), suggesting that the effects on AML cell metabolism are dose dependent. Last, we show that OCI-AML3 cells undergo apoptosis after treatment with TAT-D5SD peptide. Flow cytometric analysis showed that half of the AML cell population was apoptotic at 24 hours and consisted of a dead (late apoptotic) fraction and a viable (early apoptotic) fraction (Fig. 6G).
Discussion
We set out to design an IP 3 R-1-derived peptide that would function as a competitive inhibitor to displace Lck. Using bioinformatic approaches, we predicted the region of domain 5 (within IP 3 R-1) that was facilitating the IP 3 R-1-Lck PPI. Specifically, we used the GOR IV method to target a region of domain 5 that was predicted to form alpha helices (Fig. 1B). Since Lck binds at Domain 5, the GOR method can be used to predict which regions of the fragment are most likely to form alpha-helices, beta-sheets, or random coils [17]. This approach is based on the principle that PPIs are more likely to occur in alpha-helical regions [28]. Several drugs, including those that target p53/MDM2 and Bcl-2/Bax, have been developed based on the identification of alpha-helical regions [28]. Based on these predictions, the 21 amino acid region spanning amino acids 2078-2098 (KKRMDLVLELKNNASKLLLAI) was synthesized and is referred to here as domain 5 subdomain (D5SD) (Fig. 1A). Importantly, the two lysine residues at the N terminus of D5SD also facilitate PPIs, which was our rationale for choosing this specific alpha-helical region. As a control, we designed and synthesized a scrambled peptide that had no significant homology to any mammalian protein.
Molecular modeling and pull-down experiments revealed that D5SD peptide can bind the SH2 domain of Lck in vitro (Figs. 1C and 1D). While SH2 domains typically interact with other proteins via phospho-tyrosine, a tyrosine residue is not necessarily required for direct SH2 binding as is the case for Lck's interaction with CD45 at the TCR [29,30]. We also showed that D5SD peptide binds to full-length Lck protein, whereas no physical interaction was observed between Lyn and IP 3 R-1 ( Figs. 2A and S1). These data suggest that D5SD peptide may have preference for binding Lck over other Src family members. One goal of this research was to provide a proof-of-concept that D5SD peptide could function as a novel therapeutic for a wide range of hematological malignancies. Importantly, the small basic protein TAT (86-101 residues) drastically enhances the efficiency of viral transcription and can be used to maximize delivery of both D5SD and control peptides into human cells [31]. Given the rapid induction of cell death by TAT-D5SD, we hypothesize that the Lck-IP 3 R PPI may be important for the survival of leukemia and lymphoma cells. This is supported by the finding that cells deficient in Lck protein undergo a higher rate of apoptosis (Fig. 3B). In addition, selective targeting of the Lck SH2 domain by short phospho-tyrosine peptide is sufficient to induce cell death in CLL cells, as is pharmacological inhibition of Lck with dasatinib ( Fig. 3C and 3D). We speculate that leukemic cells require a persistent Ca 2+ flux via IP 3 R channels to positively regulate cell survival. When the Lck-IP 3 R-1 PPI is disrupted in T cells, the pattern of Ca 2+ signaling is shifted after strong TCR activation [13]. In CLL cells, cytoplasmic Ca 2+ elevation occurs rapidly after exposure to TAT-D5SD peptide (Fig. 4I, left). The pattern of Ca 2 elevation in lymphocytes can ultimately determine cell fate [32].
Repetitive pulses that are shorter in amplitude and lower in Ca 2+ concentration promote cell proliferation and survival, whereas larger Ca 2+ waves that release more Ca 2+ into the cytosol promote cell death. Figure 4H supports the hypothesis that the Ca 2+ elevation is required for cell death. Given that Lck hi cells are associated with enhanced BCR signaling [9,10], it is possible that the Lck-IP 3 R PPI tightly regulates Ca 2+ homeostasis, but in favor of cell proliferation and survival.
It is of particular interest that Lck is expressed in malignant cells of both lymphoid and myeloid origin. We, among others, have shown aberrant Lck expression in primary CLL cells [4][5][6][7][8][9][10][11]. Using the publicly available TCGA database, we show that LCK is expressed and elevated in DLBCL relative to normal controls (Figs. 5A and S5). Of note, the pattern of LCK expression is similar to the oncogenes SYK and BTK, which are known drivers of B-cell leukemia and lymphoma [25]. While few studies have investigated Lck in myeloid cells, Lck has been shown to interact with the SH2 domain of FLT3-ITD and increases colony formation in pro-B cells [27]. Considering the importance of FLT3-ITD in AML, we conducted a PPI network analysis using the TCGA AML dataset containing 173 human primary AML samples. This analysis shows Lck interacting with FLT3 as well as Notch-1 and Kit (Fig S6). RNA-seq data reveal that LCK is expressed in nearly all AML samples (Fig. 6A), and it has previously been identified in AML in the Gene Expression Omnibus database [16]. In childhood AML, Lck clusters with other T cell proteins and was detected in 500 patient samples [33]. In adult AML, Lck clusters with NOTCH1, NOTCH3, CD74, and LGALS3 based on expression level, and these clusters are significantly associated with overall survival (data available at leukemiaproteinatlas.org). Moreover, dasatinib has been shown to increase sensitivity in AML cells carrying FLT3-ITD [34], suggesting Lck could be a relevant therapeutic target in AML.
The discovery that TAT-D5SD peptide induces cell death in B-cell and myeloid malignancies, whereas T-cell leukemias and lymphomas had lower responses, is intriguing. The expression and cellular distribution of Lck may determine the relative sensitivity to TAT-D5SD peptide in lymphocytes. For example, B cells that express lower levels of Lck show increased sensitivity to Lck-IP 3 R-1 disruption by TAT-D5SD. However, a larger sample size would be needed to validate this correlation. This has important implications considering the high variability of Lck expression in B-cell malignancies. For example, CLL cells with high expression of Lck (Lck hi ) have increased proliferative capacity and survival vs Lck low cells [9,10]. While D5SD peptide does not physically interact with Lyn, it does bind other proteins such as PKM2 that interact with IP 3 R-3 [35], which could affect its cytotoxic potential in certain cell types. Importantly, we have shown that TAT-D5SD does not have cytotoxic activity in epithelial cells or fibroblasts; presumably this is because Lck is either not expressed or does not play a role in the survival of non-hematopoietic cells. The induction of apoptosis by TAT-D5SD provides evidence that a peptide may have promise as a therapeutic agent for CLL, B-cell lymphomas, and AML. Peptide therapeutics are emerging in oncology due to their ability to be highly selective and cell-permeable [36]. Novel therapeutic targets for AML are desperately needed, as the prognosis of patients is extremely poor, with median survival approaching one year [37]. In the long term, we believe these data could be instrumental in developing and testing novel therapeutics that target the Lck-IP 3 R PPI.

Figure 1. Design and synthesis of D5SD peptide. (A) Lck interacts with IP 3 R-1 [13]. Shown is the 21 amino acid sequence (2078-2098), KKRMDLVLELKNNASKLLLAI, within Domain 5 of IP 3 R-1 where Lck interacts. This amino acid sequence corresponds to the D5SD peptide used in subsequent experiments. The control peptide sequence (NLNHSDQFAENLSHICGGHG) was also derived from Domain 5 of IP 3 R-1, but the sequence was scrambled so that it had no significant homology to any other mammalian protein. Amino acid sequences of IP 3 R-2 and IP 3 R-3 show significant homology to IP 3 R-1 and are shown for comparison. (B) Top, schematic of domain 5 of IP 3 R-1 showing target regions with the highest probability of forming alpha-helices (blue). Bottom, specific secondary structure elements (alpha-helices vs beta sheets and random coils) within the D5SD amino acid sequence as assessed by the GOR IV algorithm. A higher score denotes increased probability for each state of secondary structure. (C) Molecular modeling of D5SD peptide (dark gray) bound to the SH2 domain of Lck. Modeling was performed using GalaxyPepDock, a web-based server that uses similarity and interaction scores followed by energy-based optimization to predict peptide docking sites.

Apoptosis was measured by flow cytometry using the Guava ViaCount assay, which contains a proprietary combination of dyes that measure viable, dead, and apoptotic cell fractions. Cell death was measured by trypan blue dye exclusion unless otherwise indicated. Bars and symbols represent the mean ± SEM of triplicate measurements. Results were confirmed in at least three independent experiments. A Student's t test was used to determine statistical significance; *P<0.05; **P<0.01; ***P<0.001.
CFD Modelling of Aerial Mass Transfer in Tracheobronchial Airways
The lungs are sensitive yet vital organs that supply the oxygen other organs need to function normally; the cells of the body need this oxygen for all their metabolic activities. The process of taking in oxygen from the atmosphere is called inhalation, and releasing carbon dioxide into the atmosphere is called exhalation; together these processes constitute respiration. The function of the lungs is still not well understood and has been studied comparatively little; with a better understanding of the flow characteristics in the tracheobronchial airways, a number of conditions can be studied. Several numerical methods can be used to develop and study a geometrical model; one of them is Computational Fluid Dynamics (CFD), which uses the governing equations of fluid dynamics to study the flow characteristics.
INTRODUCTION
Many organs, such as the human heart, have already received a lot of attention from scientists using numerical methods. However, only a few studies have focused on modeling the lungs entirely, as they are probably one of the most challenging organs to simulate due to the different length scales involved, from microns for mucociliary transport to centimeters for the airflow in the upper airways. Respiration is one of the vital processes carried out by almost all living beings. On average, a healthy human being breathes about 10 to 15 times per minute. The respiration process starts at the nasal passages, where air is taken in; passing through the nasal cavity, it enters the trachea. From the trachea, air enters the primary bronchi, then the secondary bronchi, followed by the segmental bronchi, and finally reaches the alveoli. The alveolar region consists of alveolar sacs and alveolar ducts, through which oxygen enters the blood capillaries and carbon dioxide is removed from the blood capillaries into the alveoli [1]. The trachea, or windpipe, is a cartilaginous and membranous tube extending from the lower part of the larynx, at the level of the sixth cervical vertebra, to the upper border of the fifth thoracic vertebra, where it divides into the two bronchi, one for each lung. The trachea is nearly but not quite cylindrical, being flattened posteriorly. Obstructive lung diseases in the lower airways are a leading health concern worldwide. The lungs can have a wide range of problems that can stem from genetics, bad habits, an unhealthy diet, and viruses [2]. Tracheal evaluation is a fundamental part of chest imaging. Adult tracheal anatomy is well understood, but tracheal embryology is not. There have been major advances in imaging, but radiography remains the initial imaging study for most tracheal pathology; careful radiographic analysis can yield considerable information [3,4]. Abnormal tracheal development causes a spectrum of life-threatening anomalies; one report describes a newborn with tracheal agenesis and a common "esophagotrachea", in whom ventilation was achieved first by face mask and then with an endotracheal tube, and discusses the types of tracheal agenesis and initial airway management [5]. There are several methods used to generate a geometric model, such as the model created by WEIBEL, a simplified model assuming a symmetrical branching system [6]; HORSFIELD and CUMMING generated a model based on their studies of a lung cast [7]. Stapleton et al. and Heenan et al. used idealized geometries based on previous studies [8,9]. Schmid et al. used high-resolution computed tomography (HRCT) to generate a geometry [10]. Computational Fluid Dynamics (CFD) is becoming a powerful tool in the medical context. It provides good insight into physical phenomena occurring inside the human body without the need for intrusive surgical methods, which often fail to observe the desired phenomenon as they introduce perturbations [11].
As this paper deals mainly with the steady state, which is time independent, the transient term is neglected in both governing equations.
II. PROBLEM STATEMENT & METHODOLOGY
A. Problem Statement
Flow development in the airways of the lung is not understood well enough by medical science and cannot be measured satisfactorily because of the large number of bifurcations. The normal angle of the tracheal bifurcation is 70±20° [3]. For the right and left main-stem bronchi, the diameters in men are 1.16±0.17 cm and 1.02±0.22 cm, respectively [12,13].
B. Geometry of the Problem
In this paper the tracheal diameter is taken as 1.8 cm, the diameter of the right bronchus as 1.16 cm, and the diameter of the left bronchus as 1.02 cm. These bronchi are further divided into bronchioles.
SOLUTION METHODOLOGY
A. Inhalation
The upper part is the trachea and is set as a mass flow inlet of 1.225×10 kg/s, and the lower two branches, the bronchi, are set as pressure outlets with 0 Pa gauge pressure [14].
B. Exhalation
The bronchi are set as velocity inlets with values of 0.0414578 m/s for the right bronchus and 0.0511160 m/s for the left bronchus; this time the trachea is set as a pressure outlet with 0 Pa gauge pressure [14].
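As a quick plausibility check of these boundary conditions, the mean inlet velocity implied by a mass-flow inlet can be recovered from the flow rate, the air density, and the tracheal cross-section, and a Reynolds number can be estimated from it. The sketch below is illustrative only: the mass flow rate is a placeholder because the exponent in the value quoted above was lost, and the air properties are assumed standard values rather than settings from this study.

```python
import math

def mean_velocity(mass_flow, density, diameter):
    """Mean velocity (m/s) at a circular inlet from a mass flow rate."""
    area = math.pi * diameter ** 2 / 4.0      # cross-sectional area, m^2
    return mass_flow / (density * area)

def reynolds(velocity, diameter, density=1.225, viscosity=1.8e-5):
    """Reynolds number for pipe-like flow (standard air properties assumed)."""
    return density * velocity * diameter / viscosity

# Placeholder mass flow rate (kg/s); the exact exponent is not recoverable
# from the extracted text, so this value is purely illustrative.
m_dot = 1.225e-4
d_trachea = 0.018                             # 1.8 cm tracheal diameter
v = mean_velocity(m_dot, 1.225, d_trachea)
print(f"mean inlet velocity ~ {v:.3f} m/s, Re ~ {reynolds(v, d_trachea):.0f}")
```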
A. Inhalation
During inhalation, air is drawn into the lungs by the downward movement of the diaphragm, which creates a void. Velocity vectors were examined, and it can be observed that the velocity is high at the inlet (the trachea), with values ranging from 0.4183 to 0.6275 m/s, and gradually decreases as the air travels towards the bronchioles, where the velocity value is 0 and the streamlines can be seen leaving the bronchioles. Due to the branching, there is a spike in velocity, reaching 0.8367 m/s after entering the right bronchus, which then decreases towards further branching. Volume rendering was taken with pressure as the variable, and we observed that the velocity and pressure values were within their expected ranges.
B. Exhalation
During exhalation, the diaphragm pushes up against the lungs, forcing air out. Similar to the inhalation case, velocity vectors were examined to study the flow behavior. The velocities at the beginning of this process are approximately 0; due to the difference in velocities and the narrower left lung, the velocity increases to a maximum of 1.488 m/s and then decreases at the immediate branching, and the outlet velocity ranges from 0.7428 to 1.114 m/s. Volume rendering was taken with pressure as the variable, and we observed that the velocity and pressure values were within their expected ranges.
V. CONCLUSION
It can be observed from figure 4 that the flow is due to the expansion of the lungs creating a pressure difference, which allows air to flow into the lungs, and the flow velocity during inhalation can be observed from figure 6. In the case of exhalation, it can be observed that the lung contracts, creating the pressure difference seen in figure 7, resulting in the velocity shown in figure 6.
VI. FURTHER STUDIES
This paper deals with the flow behavior in the steady state. The same boundary conditions can be applied to study the flow pattern in the transient state, coupled with a User Defined Function (UDF) to study the flow behavior at each time step using time-varying boundary conditions, which would give a complete outlook of the respiratory process.
Neural Control System in Obstacle Avoidance in Mobile Robots Using Ultrasonic Sensors
This paper presents the development and implementation of neural control systems in mobile robots for real-time obstacle avoidance using ultrasonic sensors, with decision-making strategies developed in Matlab and Processing. An Arduino embedded platform is used to implement the neural control for field results.
Introduction
The navigation of a mobile robot is one of the most important challenges in the field of mobile robotics, as these kinds of robots must be able to evade the obstacles they encounter on their way towards a goal. A large number of researchers focus their studies on control techniques and intelligent vehicle navigation, because conventional control techniques are limited by the uncertainty of the environment in which the vehicle is intended to move. There is therefore a need to develop intelligent control strategies, such as neural networks, which offer a very good solution to the vehicle navigation problem thanks to their ability to learn nonlinear relationships between sensor input values and output values.
In short, the problem is a vehicle located in a particular environment that contains a number of random obstacles and a goal. Several approaches to this problem can be posed: on one hand, classical methods such as route planning or visibility graphs; on the other hand, an intelligent control system based on artificial intelligence techniques such as neural networks, fuzzy control, etc. These systems are able to solve many control problems due to their ability to emulate some human characteristics.
In [1], a collision-free path between the source and the destination is constructed based on neural networks for mobile robot navigation in partially structured environments. The proposed scheme uses two neural networks for the task, multilayer perceptrons (MLP) trained with backpropagation. The proposed scheme is carried out in real time on an Intel Pentium 350 MHz processor; the robot is able to avoid all obstacles and reach its goal from the initial starting point.
In [3], the problem of mobile robot navigation is solved with the aid of a local neural network model. This network is a set of sub-models that represent the dynamic system at various operating points. Each sub-model is a feedforward neural network trained with the backpropagation algorithm. The outputs of these sub-models are weighted with the help of a basis-function neural network to generate motion commands for the robot. The performance of the local neural network is compared with that of a multilayer perceptron network and a radial basis function network; the performance measure is the time it takes the robot to reach its goal or destination.
The navigation controller consists primarily of a set of three neural subnetworks (see [4] and [5]). The first two are responsible for the most important behaviours of an intelligent vehicle, which are locating the target and avoiding obstacles. Both controllers are classifiers trained with supervised backpropagation techniques. The third neural network acts as a supervisor and is responsible for the final decision based on the outputs of the first two networks. This supervisory network is trained with a variant of the associative reward-penalty learning algorithm. Due to its hierarchical structure, system complexity is reduced, resulting in a fast response time.
Methodology
This section of the article describes the analysis of the robot structure, the motors used, the sensor system, the system block diagram, and the design of the neural network that was designed and implemented.
Mobile robot construction
The design of the mobile robot was made to be easily modified and adaptable to new and future research. The physical appearance of the robot was evaluated, and the design was defined based on criteria of functionality, available materials, and mobility. The analysis of different structures of guided robots led to a reduced size and simplicity of structure (as shown in Figure 1); another factor considered was previous working experience with mechanical structures for robots.
• Communication via serial and I2C protocol.
Figure 4 shows the sensor used for obstacle detection. In Figure 6, the block diagram of the system operation is shown.
Design of an artificial neural network for decision-making
Controlling the movements of the mobile robot, which must generate a path that allows it to navigate and avoid obstacles without colliding, was identified as a pattern classification problem. For such problems, the most commonly used networks include the single perceptron, the multilayer perceptron, backpropagation networks, radial basis networks, and probabilistic neural networks (PNN), among others.
The neural network chosen for the development of the robot control system is a multilayer perceptron (MLP), because such networks have good characteristics for solving classification problems like the present one. Furthermore, it presents a relatively simple architecture for implementation in the robot.
Figure 7 shows the structure of the network used: a neural network with three inputs and an output vector of 4 elements, using a 4-layer configuration of [5,4,4,4] neurons. The first hidden layer uses a "tansig" activation function, the second and third hidden layers use "logsig", and the output layer also uses "logsig".
The inputs to the network are the readings from sensors placed on the left, center, and right; each sensor is activated according to the proximity of an obstacle in that direction. The network output, generated from these readings, allows the robot to correct its position and thus avoid the obstacle. Since the output function of the network produces decimal values in the range [0, 1], a simple comparison function is programmed on the Arduino to set the largest output value to 1 and the others to 0.
Results
The output vector consists of four elements, one for each of the basic actions the robot can execute. The values of these elements are coded so that the element corresponding to the movement to execute is 1 and the other three are 0. Thus, the network provides the response vector (1,0,0,0) if the control action is to turn right, (0,1,0,0) if it is to turn left, (0,0,1,0) if the action is to advance, and finally (0,0,0,1) if the action is to go back.
The algorithm developed is shown in Figure 8. In Matlab, the weights of the input layer are obtained with the command "net.IW{}", the weights of the remaining layers with the command "net.LW{}", and finally the bias values with "net.b{}". These weights and biases are stored in the memory of the embedded system in matrix form, to be applied in the MLP forward-pass formulas. Each neuron first multiplies every input by its own weight, sums the products, and adds the corresponding bias:

n_j = Σ_i w_ji · p_i + b_j

Each neuron of the first hidden layer then produces an output given by the tansig activation function:

a_j = tansig(n_j) = 2 / (1 + e^(−2·n_j)) − 1

The neurons of the remaining layers use the logsig activation function:

a_j = logsig(n_j) = 1 / (1 + e^(−n_j))

The weights connecting the input layer to the first hidden layer are stored in a matrix w1 with five rows and three columns; the input variables are acquired and stored in the vector p[]; the weights connecting the first hidden layer to the next layer are stored in the array w2 (four rows and five columns); the weights connecting that layer to the next are stored in the array w3 (four rows and four columns); and the weights connecting the last hidden layer to the output layer are stored in the array w4 (four rows and four columns). The biases corresponding to each layer are stored in vectors, for a total of four vectors: b1[5], b2[4], b3[4], b4[4]. The output of this network decides the action to follow, so the robot can avoid the obstacles encountered in the environment.
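To make the forward pass concrete, the following sketch reproduces the calculation described above (weighted sum plus bias, tansig in the first hidden layer, logsig elsewhere) in NumPy, as one might do to verify the embedded C implementation against Matlab offline. The weight and bias values are random placeholders; only the [5, 4, 4, 4] layer sizes and the activation assignments come from the text.

```python
import numpy as np

def tansig(n):
    # Matlab's tansig: 2 / (1 + exp(-2n)) - 1 (equivalent to tanh).
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def logsig(n):
    # Matlab's logsig: 1 / (1 + exp(-n)).
    return 1.0 / (1.0 + np.exp(-n))

def forward(p, weights, biases):
    """Forward pass of the [5, 4, 4, 4] MLP described in the text.

    p       : length-3 input vector (left, center, right sensor readings).
    weights : [w1 (5x3), w2 (4x5), w3 (4x4), w4 (4x4)].
    biases  : [b1 (5,),  b2 (4,),  b3 (4,),  b4 (4,)].
    """
    a = tansig(weights[0] @ p + biases[0])        # first hidden layer
    for w, b in zip(weights[1:], biases[1:]):     # remaining layers
        a = logsig(w @ a + b)
    out = np.zeros_like(a)
    out[np.argmax(a)] = 1                         # winner-take-all output coding
    return out                                    # e.g. (0, 0, 1, 0) = advance

# Random placeholder parameters; the real values come from Matlab training.
rng = np.random.default_rng(0)
shapes = [(5, 3), (4, 5), (4, 4), (4, 4)]
W = [rng.normal(size=s) for s in shapes]
B = [rng.normal(size=s[0]) for s in shapes]
print(forward(np.array([0.0, 1.0, 0.0]), W, B))
```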
Thanks to the accuracy of the C functions generated for floating-point computation on the embedded system, the output of the network is equivalent to that computed in Matlab, confirming the successful integration of the neural network into the embedded system.
The tansig and logsig activation functions were likewise implemented as C functions on the embedded platform.
Physical implementation and analysis
The final robot dimensions, shown in Figures 9 and 10, are 26 cm long and 12 cm tall, with a 24 cm wheel separation and an average speed of 10 cm/s travelling in a straight line. The wheels are covered with non-slip material to improve performance on smooth surfaces. The sensors work in an acceptable manner; objects must be of relatively dense material for the ultrasonic signal to be reflected successfully and detected by the transducer.
Full navigation and learning algorithm was implemented in no more than 200 lines of code in C language and is easily extensible to include new situations and actions.
The adaptability of the robot was tested by having it interact with a group of people in an open exhibition space, who acted as dynamic obstacles, e.g., using their feet to block the robot's movement, and even with another mobile robot.
Conclusions
Neural networks are excellent tools for obstacle avoidance in mobile robots because of their ability to work with imprecise information. The backpropagation neural network (RNA-BP) architecture was implemented on the embedded processor as a real-time system and generated correct responses in cases that were not taken into account during training, proving to be a reliable and fault-tolerant controller. The resulting vehicle control of the differential drive system used was robust.
The SRF02 sensors used are suitable for this application; however, further research is suggested to integrate the SRF08 series, which offers better capabilities, easier calibration, and a more suitable size than the SRF02 series. More remains to be achieved, however, in the reliability of the implementation and management of the backpropagation neural network design.
For more advanced applications, working with a DSP is recommended. The use of additional sensors, such as gyroscopes, an electronic compass, and encoders, is suggested to guarantee the position of the mobile robot in space. Using more proximity sensors, at least five or six, would give a better understanding of the environment, as long as the DSP supports the computational burden of the resulting neural network.
Figure 1. Mechanical structure of the robot, top view.
2.2 Embedded architecture, proximity sensors and geared motors
The geared motors used in the robot to give it movement are model B02_1-180. The main features are:
• DC gearmotor
• Voltage: 5
• Torque: 3
• Speed: 43
• No-load current consumption: 75
• Stall current consumption: 670
• Output shaft diameter: 5
• Holes for screw mounting
Figure 2 shows the motor used in the design of the mobile robot.
Figure 2. Gearmotor B02_1-180.
The embedded development platform hosting the decision-making system based on an artificial neural network has the following characteristics:
• Microcontroller ATmega368
• Operating voltage: 5
• Input voltage: 7-12
• Digital I/O pins: 14
• Analog input pins
• Current per pin: 40
• Current at the 3.3 pin: 50
• Clocked at 16
Figure 3 shows the card used in the mobile robot.
Figure 3. Embedded system on which the RNA for decision making is implemented.
Figure 4. SRF02 ultrasonic sensor model.
The general connection diagram designed and implemented in the mobile robot, which contains the embedded system, the SRF02 sensors, the B02_1-180 geared motors and an LCD screen showing the status of the sensors and motors, is shown in Figure 5.
Figure 5. General connection diagram of the mobile robot.
Figure 6. Block diagram of the system operation.
Figure 7. Structure of the neural network, with three inputs and an output vector of 4 elements.
Figure 8. Algorithm for decision making using the RNA-BP. The neural algorithm developed and implemented in the mobile robot was programmed in C language.
Figure 9. Performance characteristics of the neural network in the backpropagation training.
Figure 10. Neural network training performance.
Programming the BP neural network developed and implemented for the mobile robot in the embedded system depends mainly on: the synaptic weights of the hidden layers; the synaptic weights of the output layer; the bias weights of each layer; and knowledge of the activation functions used in each layer (tansig in the first hidden layer, logsig in the remaining layers). Here o_j denotes a component of the input vector and b the gain (bias) of the hidden layer, the extra weights in the network.
Figure 11. Experimental physical design of the mobile robot based on a neural network with the backpropagation algorithm.
|
2016-01-07T01:57:53.067Z
|
2014-02-01T00:00:00.000
|
{
"year": 2014,
"sha1": "f0e29f4da483de94f1dafd478df0923dc50270c0",
"oa_license": null,
"oa_url": "https://jart.icat.unam.mx/index.php/jart/article/download/247/244",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f0e29f4da483de94f1dafd478df0923dc50270c0",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics",
"Engineering"
]
}
|
17262634
|
pes2o/s2orc
|
v3-fos-license
|
Proteomic alterations in mouse kidney induced by andrographolide sodium bisulfite.
AIM
To identify the key proteins involved in the nephrotoxicity induced by andrographolide sodium bisulfite (ASB).
METHODS
Male ICR mice were intravenously administrated with ASB (1000 or 150 mg·kg⁻¹·d⁻¹) for 7 d. The level of malondialdehyde (MDA) and the specific activity of superoxide dismutase (SOD) in kidneys were measured. The renal homogenates were separated by two-dimensional electrophoresis, and the differential protein spots were identified using a matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF)/TOF mass spectrometry.
RESULTS
The high dose (1000 mg/kg) of ASB significantly increased the MDA content, but decreased the SOD activity as compared to the control mice. The proteomic analysis revealed that 6 proteins were differentially expressed in the high-dose group. Two stress-responsive proteins, ie heat shock cognate 71 kDa protein (HSC70) and peroxiredoxin-6 (PRDX6), were regulated at the expression level. The remaining 4 proteins involved in cellular energy metabolism, including isoforms of methylmalonyl-coenzyme A mutase (MUT), nucleoside diphosphate-linked moiety X motif 19 (Nudix motif19), mitochondrial NADH dehydrogenase 1 alpha subcomplex subunit 10 (NDUFA10) and nucleoside diphosphate kinase B (NDK B), were modified at the post-translational levels.
CONCLUSION
Our findings suggest that the mitochondrion is the primary target of ASB and that ASB-induced nephrotoxicity results from oxidative stress mediated by superoxide produced by complex I.
Introduction
Andrographis paniculata (Burm f) Nees (Acanthaceae) (A paniculata, Chuanxinlian) is a widely used Chinese medicinal herb that is believed to be effective for relieving fever, inflammation and pain according to traditional Chinese medicine. Compared with other Chinese medicinal herbs, A paniculata has been well studied. The major bioactive compounds are diterpenoids, flavonoids and polyphenols [1,2] . A paniculata has a variety of health benefits, including anti-inflammation, anti-cancer and immunity enhancement. Andrographolide (C 20 H 30 O 5 ), the major diterpenoid of A paniculata, has multiple pharmacological properties and is a potential chemotherapeutic agent [3] .
Andrographolide sodium bisulfite (ASB, C 20 H 29 O 7 SNa, 436.23) is a water-soluble sulfonate of andrographolide that is synthesized by an addition reaction with sodium bisulfite. Lianbizhi (LBZ) injection, which contains ASB as its sole component, has been used clinically in mainland China to treat infectious diseases such as bacillary dysentery, mumps, laryngitis, tonsillitis and upper respiratory tract infections [4] . However, reports of adverse drug reactions (ADRs) have increased in the past decade, leading to the issue of a drug use warning from the State Food and Drug Administration of China [5] . The clinical ADRs to LBZ injection are diverse, the most serious of which is acute renal failure [4,[6][7][8] . There are various explanations for the cause of ADRs to LBZ, such as allergic reactions [6] , cross-reactions with aminoglycosides in combination therapy [4] and nephrotoxicity [7] , but extensive studies of the toxicology of LBZ injection are still scarce.
Injection agents for Chinese medicinal herbs generally include bioactive ingredients and additives. Bioactive ingredients are usually synthesized, semi-synthesized or extracted from herbs; additives are added to increase the hydrophilicity of the active ingredient. The toxicities of the injection agents could therefore be due to the toxicities of the bioactive ingredients, additives, and/or impurities resulting from the manufacturing process. The potential nephrotoxicity of LBZ injections has been suggested by several recent toxicological studies. Renal damage triggered by a single intravenous injection of two kinds of LBZ with different purities has been reported [9]. We previously used purified ASB (>99% pure), the raw material for LBZ injection, to investigate its toxicity in kidneys [10]. Administration of ASB at a high dose (1000 mg/kg) for 7 d resulted in renal dysfunction and an increase in both the serum creatinine and blood urea nitrogen levels. In addition, microscopic examination revealed the presence of tubular interstitial injury and cloudy swelling in the proximal tubule in the high-dose group. However, the mechanism of renal toxicity of ASB remains unclear.
A global proteomics approach typically links separation and identification technologies to create a protein profile or differential protein display. Proteomic techniques have been used in kidney toxicity studies of chemicals such as the antibiotic gentamicin and the chemotherapeutic agent cisplatin and have provided insight into the mechanisms governing key proteins in critical biological pathways that create adverse drug effects [11] .
In the present study, a differential proteomic analysis of the mouse kidney was performed to identify the key proteins involved in kidney dysfunction induced by ASB administration and to investigate the mechanism of ADRs to LBZ. A two-dimensional (2-D) electrophoresis proteomic approach revealed that six proteins were significantly differentially expressed in the murine kidney upon exposure to ASB.
Animals and treatment
Male ICR mice weighing 18-22 g were purchased from the Zhejiang Experimental Animal Center. All animals were housed with a 12-h light/12-h dark cycle at 22 °C and 55%±5% relative humidity. Food (Zhejiang Experimental Animal Center, China) and tap water were provided ad libitum. All experiments were carried out according to the guidelines of China for the care and use of laboratory animals. Each of the mice was randomly assigned to one of three experimental groups (each n=10): high-dose ASB (1000 mg/kg), low-dose ASB (150 mg/kg) and a control group. The ASB groups were induced with daily intravenous (iv) injections of 20 mL/kg body weight of ASB (99%, Zhejiang Jiuxu Pharmaceutical Co, Ltd, China) in a 0.9% sodium chloride injection solution (Huadong Pharmaceutical Co, Ltd, Hangzhou, China) for 7 d. The control group received an equal volume of 0.9% sodium chloride injection solution. Mice were sacrificed 1 h after the last injection, and the kidneys were surgically removed for further analysis.
Measurement of malondialdehyde content and superoxide dismutase activity
One removed kidney from each animal was homogenized with saline 1:10 (w/v) and centrifuged at 3000×g for 10 min. The supernatant was used to measure the content of malondialdehyde (MDA), one of the end products of lipid peroxidation and an indicator of reactive oxygen species (ROS) production, and the specific activity of superoxide dismutase (SOD), using commercially available kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) in accordance with the manufacturer's protocols. The data were statistically analyzed using Student's t-test to compare the means of two different groups.
Protein extraction for proteomic analysis
Renal proteins were extracted as previously described [12]. After removing the renal capsule, the kidney was excised into several thin slices and washed in ice-cold PBS to remove the contaminating blood. The tissue was frozen in liquid N2, ground to powder, and resuspended in lysis buffer containing 7 mol/L urea, 2 mol/L thiourea, 4% (w/v) CHAPS, 2% (v/v) ampholyte (pH 3-10), 120 mmol/L DTT, 40 mmol/L Tris-base and protease inhibitor cocktail (50 μL for 1 mL lysis buffer, Sigma), followed by a 30-min incubation at RT. After centrifugation (10 000×g for 5 min at 4 °C), the supernatant was stored at -80 °C as a 500-μL aliquot, and the protein concentration was determined using the Bradford method with a Bio-Rad Protein Assay Kit (Bio-Rad Laboratories, Hercules, CA, USA).
Index of renal oxidation
Oxidative stress is a pathway for renal injury [13] . To investigate whether the renal damage induced by ASB is caused by ROS, the levels of SOD and MDA in the renal tissue were analyzed (Figure 1, 2). Upon ASB treatment, SOD activity decreased, whereas the MDA content increased. The downregulation of SOD activity and the upregulation of MDA were both dose-dependent. However, only the group treated with the high dose of ASB was significantly different from the control group, suggesting that the toxicity is only trigged by a high concentration of ASB, consistent with our previous results [10] .
Differentially expressed renal proteins
Because only the high dose of ASB caused significant changes in renal function and redox status, this study focused on the proteomic analysis of the kidney after treatment with a high dose of ASB. Approximately 500 protein spots were visualized in each 2-D gel. Representative 2-D gels of both the high-dose group and the control group are illustrated in Figure 3, in which the proteins that were differentially expressed in the ASB group are numbered. In total, seven differentially expressed protein spots in the 2-D gel were identified by MALDI TOF/TOF MS (Table 1). Because two protein spots (spots 6 and 7) were identified as the same protein, nucleoside diphosphate kinase B (NDK B), six proteins were actually regulated by ASB administration. The individual changes in the seven spots were magnified (Figure 4). Of the seven protein spots, three were upregulated, including methylmalonyl-Coenzyme A mutase (MUT, spot 1), peroxiredoxin-6 (PRDX6, spot 5) and NDK B (spot 6). The other three spots, heat shock cognate 71 kDa protein (HSC70, spot 2), mitochondrial NADH dehydrogenase 1 alpha subcomplex subunit 10 (NDUFA10, spot 3) and NDK B (spot 7) were downregulated. The remaining spot (spot 4), which was identified as nucleoside diphosphate-linked moiety X motif 19 (Nudix motif19), displayed an acidic shift with a slight molecular weight increase in the ASB group, suggesting an ASB-induced post-translational modification (PTM). The increase in MUT (spot 1) was also due to an ASB-induced PTM because it showed an acidic shift relative to its theoretical pI and was found only in ASB group gels. In contrast, both NDUFA10 (spot 3) and NDK B (spot 6) exhibited different pIs from their theoretical values, even in the control group (Table 1), suggesting an acidic isoform of NDUFA10 and a basic isoform of NDK B, respectively. Because they were present in both the ASB group and the control group, NDUFA10 (spot 3) and NDK B (spot 6) were considered to be the respective isoforms under physiological conditions. Interestingly, the two spots of NDK B were regulated oppositely: the basic isoform (spot 6) was upregulated, whereas the spot exhibiting the theoretical pI (spot 7) was downregulated, suggesting that ASB treatment shifted NDK B to its basic isoform.
Figure 1. Concentration changes in MDA in the kidneys of mice exposed to ASB. The homogenates of kidneys from mice treated with or without ASB iv administration for 7 d were used to measure MDA content. n=10. Mean±SD. a P>0.05, c P<0.01 vs control, t-test.
Figure 2. Specific activity changes in SOD in the kidneys of mice exposed to ASB. The homogenates of kidneys from mice treated with or without ASB iv administration for 7 d were used to measure the SOD specific activity. n=10. Mean±SD. a P>0.05, c P<0.01 vs control, t-test.
Table 1. Open arrows indicate spots with greater abundance between the corresponding gels.
Discussion
In the present study, we used differential proteomic analysis to identify the proteins regulated by ASB in the mouse kidney. Seven spots, which were identified as six proteins, were differentially expressed and/or post-translationally modified after ASB treatment and identified by MALDI TOF/TOF MS (Table 1). This is the first report to demonstrate an alteration in kidney protein expression following ASB-induced renal injury.
Protein involved in cellular energetics
Nudix motif19 is a member of the Nudix hydrolase superfamily and was recently identified as a CoA diphosphatase involved in fatty acid β-oxidation [14] . Nudix motif19 is highly expressed in the mouse kidney at both the protein [15] and mRNA levels [16] , suggesting it has an important role in the kidney. The slight PTM of Nudix motif19 induced by ASB reflects a disturbance in fatty acid catabolism, which could lead to catabolite accumulation and result in the inhibition of fatty acid β-oxidation and a reduction in energy supply.
MUT is strictly a mitochondrial enzyme. It catalyzes the isomerization of methylmalonyl-coenzyme A (CoA) to succinyl-CoA [17] and is the key enzyme for the transfer of the catabolites of branched-chain amino acids and odd-chain fatty acids into the tricarboxylic acid cycle [18] . A knockout of Mut in the mouse results in tubulointerstitial renal disease in response to respiratory chain dysfunction [19] . A similar interstitial nephritis was triggered by ASB in our previous study [10] . The MUT dysfunction resulting from the ASB-induced acidic shift in pI in this study may be similar to the inactivation caused by Mut deletion. Therefore, the ASB-induced nephrotoxicity is likely due to respiratory chain dysfunction.
NDUFA10 is a subunit of mitochondrial respiration chain complex I (NADH: ubiquinone oxidoreductase) [20] . PTMs regulate the activity and the interactive capacity of complex I subunits [21] . The phosphorylation of NDUFA10 in bovine heart has been proposed to influence the affinity of NADH binding and the activity of complex I [22] . In this study, one of the acidic isoforms of NDUFA10 was more highly expressed in the control group, in agreement with its important role in the electron transport chain of intact mitochondria. The ASB group displayed a relative reduction in this isoform, suggesting that ASB partially inhibits PTM and leads to functional decline. Complex I is the major source of superoxide [23] , and impairment of complex I causes oxidative damage that is a part of the progression of different pathologies, including Parkinson's disease [24] . Although further study is needed to identify the PTM of NDUFA10 in the mouse kidney, the partial inhibition of the PTM of NDUFA10 upon treatment with ASB suggests aberrant superoxide generation by complex I. Excessive endogenous ROS can contribute to acute renal injury [13] . Hence, the ASB-induced changes in the PTM of NDUFA10 might contribute to ASB nephrotoxicity.
NDK B belongs to the nm23 gene family and encodes nucleoside diphosphate kinase (NDK), which has multiple functions that are involved not only in cell energy conversion but also in many cellular processes [25] . In this study, NDK B was identified from two spots in the control group and one spot in the ASB group due to the ASB-induced basic shift in NDK B, suggesting a change in the function of NDK B. Both NDK B and NDK A are modified after H 2 O 2 treatment [26] , and Cys 109 of NDK A was identified as the site of oxidative modification by ROS [27] . Because Cys 109 is also present in NDK B and no other PTM of NDK B has been reported, the basic isoform of NDK B also seems to be subject to oxidative modification under physiological conditions, which would support the existence of a higher ROS level in the kidney than in other tissues [13] . Altered NDK expression is associated with the inhibition of cellular proliferation and apoptosis [28] , and NDK B overexpression reduces H 2 O 2 -induced cell death in BAF3 cells [29] . Furthermore, a recent proteomic analysis revealed that NDK B (pI 7.6) was upregulated in neuronal nitric oxide synthase (nNOS)knockout mice due to an increased level of superoxide [30] . Therefore, the increase in the basic isoform of NDK B (pI 7.48) in our proteomic analysis may enable its antioxidative function against superoxide triggered by ASB, which is partially supported by the increased MDA content in the ASB group.
The above proteins were all modified by ASB at the posttranslational level and are all mitochondrial proteins except NDK B, a dual-localized mitochondrial protein [15] . Their modification by ASB reflects a decline or loss of mitochondrial activity in cellular energy metabolism, especially in cell respiration. Taken together, the alterations in the protein profile suggest that the mitochondrion is likely the primary target of ASB.
Proteins involved in the stress response
HSC70 is a required chaperone-mediated protein folding cofactor that belongs to the heat shock protein (HSP) 70 family. Unlike its cognate protein HSP70, which is induced only under stress, HSC70 is constitutively expressed and is involved in other housekeeping functions, including cell cycle regulation and the stress response. Depletion of the HSC70 pool has been linked to a block in the activation of caspases that inhibit apoptosis signaling [31,32]. Attenuated HSC70 expression was also observed in ochratoxin A-induced cell death associated with ROS generation [33]. Therefore, the decrease in HSC70 after ASB administration indicates that apoptosis signaling was activated in renal cells in response to oxidative stress. In addition, PRDX6, an antioxidant enzyme, was upregulated by ASB. PRDX6 belongs to the peroxiredoxin family, and its overexpression in mice or transfected cells reduces cellular H2O2 levels and decreases oxidative stress-induced apoptosis [34]. The regulation of stress-responsive proteins suggests a correlation between oxidative stress and the renal toxicity of ASB that occurs via cell apoptosis. This proposal is supported by the alterations in the biochemical index, including an increase in the MDA content and a decrease in SOD activity, which suggest that a redox imbalance occurs in the mouse kidney after ASB treatment [35,36]. Several mechanisms for the in vivo upregulation of PRDX6 have been described, including the high level of superoxide caused by a lack of NO to scavenge superoxide in the nNOS-knockout mouse [30], the excessive ROS generated by complex I due to increased glutathione oxidation after ischemia-reperfusion [37], and the high level of endogenous ROS resulting from the markedly reduced GSH level in a CFTR-defective mouse lung [38]. Complex I is the major source of superoxide [23]. Furthermore, superoxide production by complex I increases in response to oxidation of the mitochondrial GSH pool [39]. From these previous studies, it has been suggested that PRDX6, which likely belongs to a core cellular antioxidative defense, is upregulated by superoxide generated by complex I. However, not all ROS inducers increase the in vivo expression of PRDX6. Paraquat fails to upregulate PRDX6 in the mouse lung [38]. A similar result was observed in the rat kidney after ip injection of chloroform [40]. To our knowledge, this is the first report of a reagent that can upregulate PRDX6 in the kidney in vivo.
In summary, we have identified changes in six renal mouse proteins after exposure to ASB, all of which were related to cellular energetics or the oxidative stress response. The modification of MUT and NDUFA10 is related to the dysfunction of the mitochondrial respiratory chain, and the regulation of NDK B and PRDX6 is associated with abnormal superoxide generation. Our results suggest that oxidative stress in response to superoxide production in the mitochondria plays an important role in the renal toxicity induced by ASB treatment. The proteomics data in this study provide new insights into the biochemical pathways involved in ASB nephrotoxicity, which will aid in the future reduction of ADRs to LBZ in clinical applications.
|
2017-05-25T01:27:22.880Z
|
2011-06-20T00:00:00.000
|
{
"year": 2011,
"sha1": "d3b7cf15662a0d6265b918de5d2ef5dca8a138da",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/aps201139.pdf",
"oa_status": "BRONZE",
"pdf_src": "Anansi",
"pdf_hash": "d3b7cf15662a0d6265b918de5d2ef5dca8a138da",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
16029702
|
pes2o/s2orc
|
v3-fos-license
|
The Geometry of Large Causal Diamonds and the No Hair Property of Asymptotically de-Sitter Spacetimes
In a previous paper we obtained formulae for the volume of a causal diamond or Alexandrov open set $I^+(p) \cap I^-(q)$ whose duration $\tau(p,q)$ is short compared with the curvature scale. In the present paper we obtain asymptotic formulae valid when the point $q$ recedes to the future boundary ${\cal I}^+$ of an asymptotically de-Sitter spacetime. The volume (at fixed $\tau$) remains finite in this limit and is given by the universal formula $V(\tau) = {4\over 3}\pi (2\ln \cosh{\tau\over 2}-\tanh^2{\tau\over 2})$ plus corrections (given by a series in $e^{-t_q}$) which begin at order $e^{-4t_q}$. The coefficients of the corrections depend on the geometry of ${\cal I}^+$. This behaviour is shown to be consistent with the no-hair property of cosmological event horizons and with calculations of de-Sitter quasinormal modes in the literature.
Introduction
In a recent paper [1] we embarked on a quantitative study of causal diamonds, or Alexandrov open sets, which are beginning to play an increasingly important role in quantum gravity, for example in the approach via causal sets [2], in discussions of 'holography', and also of the probability of various observations in eternal inflation models (see [3] for a recent example and references to earlier work). The calculations in [1] were concerned with small causal diamonds, that is causal diamonds $I^+(p)\cap I^-(q)$ whose duration $\tau(p,q)$ is small compared with the ambient curvature scale. The present paper was motivated by inflationary cosmology and the observations showing that the scale factor $a(t)$ of our present universe is accelerating. Indeed, it is given to a good approximation by assuming that the spatial geometry is flat and setting the jerk
$$ j \;\equiv\; \frac{a^2}{\dot a^{3}}\,\frac{d^3 a}{dt^3} \;=\; 1 \qquad (1) $$
so that the scale factor takes the form of equation (2) (see the reconstruction sketched below), where $\Lambda$ is the cosmological constant. The jerk is a dimensionless measure of the rate of change of acceleration. It is easily seen to be unity if and only if we have a $k = 0$ model with a cosmological constant and pressure-free matter [4,5,6,7]. The physical reason why the jerk of the observed universe is unity is unclear. Equation (2) then solves Einstein's equations with a cosmological term coupled to a pressure-free fluid. The questions we are interested in concern the observations made by a hypothetical observer moving along a timelike world line $\gamma$, in a metric which is not exactly but only asymptotically de-Sitter, in the limit that his/her own proper time $t_q \to \infty$. In particular we shall study the volume $V(\tau, t_q)$ of the causal diamond $I^+(p)\cap I^-(q)$, where $p$ and $q$ lie on $\gamma$, in the limit when both $t_p, t_q \to \infty$ while $\tau = t_q - t_p$ is kept fixed. Thus both points $p$ and $q$ tend to future spacelike infinity ${\cal I}^+$ while the duration of the diamond $\tau$ is kept fixed. The entire diamond is in the asymptotic region and the volume of the diamond depends on the asymptotic geometry which we wish to explore. The volume (at fixed $\tau$) remains finite in this limit and is given by the universal formula $V(\tau) = \frac{4}{3}\pi\left(2\ln\cosh\frac{\tau}{2} - \tanh^2\frac{\tau}{2}\right)$ plus corrections (which are given by a series in $e^{-t_q}$) which begin at order $e^{-4t_q}$. This behaviour will be shown to be consistent with the no-hair property of cosmological event horizons and with calculations of de-Sitter quasinormal modes in the literature.
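Equation (2) itself did not survive in this copy. For a spatially flat model containing only pressure-free matter and a cosmological constant, the standard solution — presumably what equation (2) displays — is
$$ a(t) \;=\; a_0\,\sinh^{2/3}\!\Big(\tfrac{3}{2}\sqrt{\tfrac{\Lambda}{3}}\;t\Big), \qquad \Big(\frac{\dot a}{a}\Big)^{2} \;=\; \frac{\Lambda}{3} + \frac{C}{a^{3}}, $$
where $a_0$ and $C$ are constants fixed by the matter density. One may verify directly that any such solution has jerk equal to unity for all $t$, consistent with (1); we give this only as a reconstruction of the missing display.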
Before describing our calculations, we shall give a brief review of the geometry of asymptotically de-Sitter spacetimes.
2 Geometry of asymptotically de-Sitter spacetimes
From now on we adopt units in which $\Lambda = 3$ and hence $H = 1$. The metric on de-Sitter space may be cast in Friedmann-Lemaitre form in three different ways, with closed, flat or open spatial sections (the standard line elements are recalled below). Of these only the first is global, that is, covers the full geodesically complete spacetime. Another local chart, valid only inside an observer-dependent cosmological horizon, is the locally static form. It was conjectured in [9], before the theory of inflation, that perturbations of de-Sitter should settle down inside the cosmological event horizon ($r < 1$) to the exact static form. How this 'no hair' mechanism works in practice was later elucidated in the context of inflation in [10], where it was pointed out that while scalar and gravitational perturbations of de-Sitter spacetime described using any of the Friedmann-Lemaitre coordinates do not decay, but rather freeze in to constant values at late times, when restricted to the interior of the event horizon of any given inertial observer the perturbations decay exponentially. One way of understanding this is to note [11] that the general asymptotic form of the metric at late times, expressed in quasi-Friedmann-Lemaitre, geodesic or Gaussian coordinates, is $ds^2 = -dt^2 + e^{2t}\,g_{ij}(x)\,dx^i dx^j$ plus corrections, where $g_{ij}$ is an arbitrary three-metric. Thus globally the metric does not settle down to the de-Sitter form. However, locally, that is within the event horizon of any given observer, it does. That is because as time goes on such an observer can access an exponentially smaller and smaller proportion of the spatial hypersurface $\Sigma: t = \mathrm{constant}$. Now provided that $\Sigma$ is smooth, no matter what metric it is given, any local patch when examined with sufficient magnification will appear flat. Exact solutions of the Einstein equations describing this process are rather rare, but there are some: the biaxial Taub-NUT metrics, for example, exhibit this mechanism rather clearly [12]. For a recent astrophysical perspective on the eschatology of asymptotically de-Sitter universes see [18].
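The displayed line elements referred to in this paragraph were lost in extraction; the standard forms (in the units $\Lambda = 3$, $H = 1$ adopted here), which are presumably the ones intended, are
$$ ds^2 = -dt^2 + \cosh^2\! t \,\big[d\chi^2 + \sin^2\!\chi \, d\Omega_2^2\big], $$
$$ ds^2 = -dt^2 + e^{2t}\,\big[dx^2 + dy^2 + dz^2\big], $$
$$ ds^2 = -dt^2 + \sinh^2\! t \,\big[d\chi^2 + \sinh^2\!\chi\, d\Omega_2^2\big], $$
of which only the first (closed-slicing) chart covers the whole spacetime, together with the locally static chart valid inside the cosmological horizon $r < 1$,
$$ ds^2 = -(1-r^2)\,dt_S^2 + \frac{dr^2}{1-r^2} + r^2\,d\Omega_2^2 . $$
These are quoted from standard de-Sitter geometry rather than from the original equations of the paper.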
In a later, and completely independent, development Fefferman and Graham [13] examined asymptotically hyperbolic Riemannian (i.e. positive definite) Einstein metrics with negative scalar curvature near their conformal boundary. It is clear that the asymptotic expansions they obtained are identical in structure to those discussed by Starobinsky earlier [11] for Lorentzian Einstein metrics with positive scalar curvature near their spacelike conformal boundary. They are also identical in structure to those of asymptotically anti-de-Sitter metrics near their timelike boundary [15]. In what follows we shall make use of these expansions. For more work on the de-Sitter case see [16].
We consider a (d+1)-dimensional spacetime which solves the Einstein equations with positive cosmological constant $\Lambda = \frac{d(d-1)}{2l^2}$, where $l$ is the de-Sitter radius. We look for the solution of these equations close to the spacelike infinity ${\cal I}^+$ in the form (9), where $\rho$ is a timelike coordinate such that $\rho = 0$ at ${\cal I}^+$. The coordinates $x^i$, $i = 1, \dots, d$, are coordinates on the spacelike surface ${\cal I}^+$. Inserting this metric into the Einstein equations one obtains the system of equations (10), in which differentiation with respect to $\rho$ is denoted by a prime, $\nabla_i$ is the covariant derivative constructed from the metric $g$, and $\mathrm{Ric}(g)$ is the Ricci tensor of $g$.
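Equations (9) and (10) are not reproduced in this copy. Judging from the coordinate relations used later ($d\rho^2/4\rho^2 = dt^2$, $\rho = e^{-2t}$), the ansatz (9) is presumably a Fefferman-Graham-type form along the lines of
$$ ds^2 \;=\; -\frac{l^2\, d\rho^2}{4\rho^2} \;+\; \frac{1}{\rho}\, g_{ij}(x,\rho)\, dx^i dx^j , $$
with $g_{ij}(x,\rho=0)$ the metric induced on ${\cal I}^+$ (the placement of the factors of $l$ varies between conventions); the system (10) then consists of the equations obtained by substituting this ansatz into the Einstein equations, with primes denoting $\partial/\partial\rho$. This is our reconstruction, not a quotation of the original display.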
Notice that we could instead have considered the Einstein equations with negative cosmological constant $\Lambda = -\frac{d(d-1)}{2l^2}$. The analytic continuation between the two cases is a simple replacement $l^2 \to -l^2$, both in the metric (9) and in equations (10). The analytic continuation between the two spacetimes was considered in detail in the appendix of [19]. The coordinate $\rho$ then becomes a radial coordinate, and $\rho = 0$ is the timelike infinity of the asymptotically anti-de-Sitter spacetime. The solution of equations (10) in this case is well known in the form of the asymptotic expansion (11), where $g^{(0)}_{ij}(x)$ is the metric on the timelike boundary of the anti-de-Sitter spacetime. Of the higher coefficients $g^{(k)}_{ij}$, only the trace and the covariant divergence of $g^{(d)}_{ij}$ are determined by $g^{(0)}_{ij}$; $g^{(d)}_{ij}$ thus encodes the stress-energy tensor of the boundary dual theory. The coefficient $h_{ij}(x)$ is non-vanishing only if $d$ is even; it has some interesting conformal properties and mathematicians call it the obstruction tensor.
In the asymptotically de-Sitter case one can use the same expansion (11), taking into account that $\rho$ is now a timelike coordinate and the metric $g_{ij}(x)$ is now the metric on the spacelike future infinity ${\cal I}^+$. Moreover, all expressions for the coefficients $g^{(k)}_{ij}(x)$ take exactly the same form as in the asymptotically anti-de-Sitter case provided the substitution $l^2 \to -l^2$ is applied. In particular, the first few coefficients in the asymptotically de-Sitter case are given by the expressions (12) of [14, 15] (note that our curvature conventions differ by a sign from those used in [14] and [15]). The expressions for $g^{(k)}_{ij}$ are singular when $d = k$. The Einstein equations impose certain constraints on the trace and covariant divergence of the coefficient $g^{(d)}_{ij}(x)$.
Volume of the causal diamond
In the rest of the paper we will be interested in a four-dimensional asymptotically de-Sitter space-time so that d = 3. Since d is odd no obstruction tensor appears in the expansion (11). From now on we use units in which l = 1.
Asymptotic metric. In addition to $\rho$, two other timelike coordinates can be used. The coordinate $t$ is defined by the relation $\frac{d\rho^2}{4\rho^2} = dt^2$, so that $\rho = e^{-2t}$ and $t \to \infty$ at future infinity ${\cal I}^+$. The coordinate $t$ is convenient for measuring the geodesic distance (the proper time) along a timelike geodesic. The other coordinate is $\eta = e^{-t}$, with $\eta \geq 0$ and $\eta = 0$ at future infinity. In terms of the coordinate $\eta$ the metric takes the form (13) (see the sketch below). As was first shown by Starobinsky [11], the coefficients of the corresponding expansion of $g_{ij}(x,\eta)$ satisfy relations in which the trace and covariant derivative are defined with respect to the metric $g_{ij}(x)$. Thus, starting with $\eta^3$, both even and odd powers of $\eta$ appear.
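Metric (13) and the accompanying expansion are likewise missing from this copy. With $\eta = e^{-t}$ (so that $\rho = \eta^2$), the Fefferman-Graham-type form above presumably becomes
$$ ds^2 \;=\; \frac{1}{\eta^2}\Big(-d\eta^2 + g_{ij}(x,\eta)\,dx^i dx^j\Big), \qquad g_{ij}(x,\eta) \;=\; g^{(0)}_{ij}(x) + \eta^2 g^{(2)}_{ij}(x) + \eta^3 g^{(3)}_{ij}(x) + \cdots , $$
which is consistent with the statement that both even and odd powers of $\eta$ appear from $\eta^3$ onwards and with the property $\mathrm{Tr}\, g^{(3)} = 0$ quoted later; we present it only as a reconstruction.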
The Riemann coordinates. Our coordinate system $\{x^i\}$ on ${\cal I}^+$ should be adapted to a concrete observer that follows a geodesic $\gamma$ parameterized by the coordinate $t$. Suppose that $\gamma$ intersects ${\cal I}^+$ at a point $O$ with coordinates $x^i = 0$, $i = 1, \dots, d$.
In a small vicinity of this point one can choose the Riemann (normal) coordinate system, for a nice introduction to which see [17], in which the metric on ${\cal I}^+$ takes its normal form at $O$. In terms of the spherical coordinates $(r, \theta, \phi)$ with centre at $x = 0$, one has $x^k = r\,n^k(\theta,\phi)$, $k = 1, 2, 3$, where $n^k$ is a unit vector, $n^k n^k = 1$.
The causal diamond. We choose the point $q$ to have coordinates $(\eta = \epsilon, 0, 0, 0)$ and the point $p$ to have coordinates $(\eta = N + \epsilon, 0, 0, 0)$. In terms of the coordinate $t$ we have $t_\epsilon = \ln\frac{1}{\epsilon}$ and $t_{N+\epsilon} = \ln\frac{1}{N+\epsilon}$, so that the proper time interval is $\tau = t_\epsilon - t_{N+\epsilon} = \ln\big(\frac{N+\epsilon}{\epsilon}\big)$. Notice that $\tau$ can be any finite number. In terms of $\tau$ one has $N = \epsilon(e^\tau - 1)$. To leading order the equations for the light-cone $\dot I^-(q)$ and for the light-cone $\dot I^+(p)$ are those of flat spacetime. In our calculation we will need the next-to-leading order modification of the light-cone. In metric (13) a null geodesic satisfies an equation in which $\lambda$ is an affine parameter along the geodesic. Expanding to second order in $r$ and $\eta$ one finds $g_{ij}\,n^i n^j = 1 + \eta^2\big(R_{ij}\,n^i n^j - \tfrac{1}{4}R\big)$.
Substituting this into equation (16) and integrating, we find the equation (18) for $\dot I^-(q)$, the past light-cone of $q$, up to cubic order in $\eta$, where we took into account the condition that $r = 0$ when $\eta = \epsilon$. A similar equation (19) holds for $\dot I^+(p)$. The intersection of the two light-cones, $\dot I^+(p) \cap \dot I^-(q)$, is given by equation (20). The correction to the flat spacetime result is of order $\epsilon^3$ and will be neglected in the calculation below.
The volume. Consider first the volume of the causal diamond, not yet taking into account the modification of the light-cone. The volume inside the causal diamond is given by an integral expression whose integrand we expand to second order; the only angular integral one then needs is checked by direct calculation to be $\int_{S^2} n^k n^l \, d\Omega = \frac{4}{3}\pi\,\delta^{kl}$ (a quick verification of this identity is given below).
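As a check of this angular identity (our own verification, not part of the original text): rotational invariance forces the integral to be proportional to $\delta^{kl}$, and taking the trace fixes the constant,
$$ \int_{S^2} n^k n^l \, d\Omega = C\,\delta^{kl}, \qquad \delta_{kl}\!\int_{S^2} n^k n^l \, d\Omega = \int_{S^2} d\Omega = 4\pi = 3C \;\;\Longrightarrow\;\; C = \frac{4\pi}{3}. $$
The same symmetry argument shows that the integral of an odd number of factors of $n$, such as $\int_{S^2} n^k n^l n^m\, d\Omega$, vanishes — the property used below.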
One thus finds a closed expression for this leading contribution to the volume. The contribution to the volume due to the modifications (18) and (19) of the light-cones $\dot I^+(p)$ and $\dot I^-(q)$ is obtained by keeping only terms quadratic in $\epsilon$ and using the identity (23); the resulting integrals, written in terms of functions of $\tau$ (recall that $N = \epsilon(e^\tau - 1)$), can be performed explicitly, giving closed-form expressions among which the identity (29) holds. Recalling that $\epsilon = e^{-t_q}$, where $t_q$ is the time coordinate of the point $q$, and combining all contributions, we obtain the volume as an expansion (30) in powers of $e^{-t_q}$, whose coefficients involve $R(0)$, the Ricci scalar of the 3-dimensional surface ${\cal I}^+$ at the point of intersection of the geodesic $\gamma$ with ${\cal I}^+$. Notice that the expansion in $e^{-2t_q}$ is also an expansion in the curvature (and its derivatives) of ${\cal I}^+$. Now, it is a surprising fact that, due to the identity (29), the coefficient $a_2(\tau)$ vanishes identically. Notice also that the possible term in (30) which is cubic in $e^{-t_q}$ vanishes. This is due to the fact that in the expansion (13) one has $\mathrm{Tr}\, g^{(3)} = 0$, and to the property $\int_{S^2} n^k n^l n^m \, d\Omega = 0$.
The other possible source for an $e^{-3t_q}$ term is the $\epsilon^3$ modification in (20). The analysis however shows that this modification shows up in the volume only through even powers of $\epsilon^3$, i.e. it may first appear in the $e^{-6t_q}$ term.
Thus the next non-vanishing term in the expansion (30) is of order $e^{-4t_q}$. It would be interesting to see whether all odd powers of $e^{-t_q}$ vanish in the expansion (30) of the volume. We note that the volume has a finite limit when $t_q \to \infty$, so no regularization is needed. At first sight this is surprising, since the volume of a bulk region is typically divergent when the boundary of the region approaches infinity (spacelike in anti-de-Sitter and timelike in de-Sitter spacetime; see [14] and [15]). However, in a maximally symmetric spacetime like de-Sitter it is clear that all causal diamonds with the same duration $\tau$ are equivalent, no matter how close to future infinity ${\cal I}^+$ they may be, and they must therefore have the same volume $V(\tau)$, given in fact by the universal formula for $J_1(\tau)$ in (28). The same universal formula was obtained in a different way in [1] (see (21) of that reference). We emphasize that the first term in the expansion (30) comes from the metric
$$ ds^2 = -dt^2 + e^{2t}\big(dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\phi^2)\big) \qquad (32) $$
of de-Sitter spacetime with flat constant-$t$ slices. This is the only contribution that survives in the limit $t_q \to \infty$. The curvature of the spacelike surface ${\cal I}^+$ shows up in the $e^{-2nt_q}$ correction terms. Thus the information on the curvature of ${\cal I}^+$ which is encoded in the volume of the causal diamond is exponentially suppressed. When the diamond as a whole moves closer to future infinity, the geometry inside the diamond becomes more and more accurately de-Sitter. This is of course consistent with the results of [10]. There is a nice universality: no matter what the local geometry in the bulk is, the geometry inside the diamond becomes de-Sitter as it approaches future infinity. At fixed duration $\tau$ the volume of the causal diamond in pure de-Sitter spacetime becomes a function $V(\tau,\Lambda)$ of the cosmological constant $\Lambda$ alone (a dimensional reconstruction is sketched below). In models of eternal inflation $V(\tau,\Lambda)$ is taken as a measure of the probability of an observer of duration $\tau$. Thus we can see how this probability depends on the cosmological constant $\Lambda$. As seen in Figure 1, the volume is monotonically decreasing with $\Lambda$, taking its maximal value at vanishing $\Lambda$.
Figure 2: The causal diamond when approaching the future infinity ${\cal I}^+$ becomes more and more accurately described by static de-Sitter coordinates inside the horizon $H$ associated with the observer following the timelike geodesic $\gamma$. The gravitational perturbations over the de-Sitter metric are radiated through the boundary of the diamond. The size of the diamond on the conformal diagram becomes smaller and smaller as $q$ and $p$ approach $O$.
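The explicit expression for $V(\tau,\Lambda)$ is not reproduced here. Restoring the de-Sitter radius $l = \sqrt{3/\Lambda}$ in the universal formula purely by dimensional analysis — a reconstruction on our part rather than a quotation of the paper's equation — gives
$$ V(\tau,\Lambda) \;=\; \frac{12\pi}{\Lambda^{2}}\left[\,2\ln\cosh\!\Big(\frac{\tau}{2}\sqrt{\frac{\Lambda}{3}}\Big) \;-\; \tanh^{2}\!\Big(\frac{\tau}{2}\sqrt{\frac{\Lambda}{3}}\Big)\right], $$
which reduces to the universal formula at $\Lambda = 3$, tends to the flat-space diamond 4-volume $\pi\tau^4/24$ as $\Lambda \to 0$, and at fixed $\tau$ decreases with $\Lambda$, in agreement with the behaviour described around Figure 1.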
Relation to the quasi-normal modes
There is an alternative way of looking at the time evolution of the geometry inside the causal diamond. In the limit $t_q, t_p \to \infty$ the diamond is close to the corner formed by the cosmological event horizon $H$ of the observer that follows the timelike geodesic $\gamma$. Inside this corner one can always take the de-Sitter metric in the static coordinate system as a background and consider deviations from de-Sitter space as perturbations. A perturbation is described by a wave equation and takes the form $\frac{1}{r}H_l(r)\,e^{-i\omega t_S}\,Y_l(\theta,\phi)$, where $H_l(r)$ satisfies an effective radial Schrödinger-type equation. Inside the diamond these perturbations tend to escape through the boundaries of the diamond. In the bigger picture the perturbations dissipate through the event horizon $H$. The concrete mechanism of the dissipation is given by the quasi-normal modes, which are solutions of the gravitational equations for the perturbations subject to the condition that they are outgoing at the horizon and regular at the origin. This condition can be satisfied only for a discrete complex set of frequencies $\omega_n$. For de-Sitter spacetime the gravitational quasi-normal modes have been studied, for instance, in [20] and [21]. An interesting peculiarity of de-Sitter spacetime as compared to a black hole spacetime is that the quasi-normal frequencies are purely imaginary, so that they describe exponential decay only, while generically there could also be oscillations. In $D$ spacetime dimensions there are two sets of quasi-normal modes [21], given in (34), where $n = 0, 1, 2, \dots$; $l$ is the angular momentum of the perturbation; and the value of $q$ depends on the type of the perturbation: $q = 0$ for tensor, $q = 1$ for vector and $q = 2$ for scalar perturbations. The perturbation of the volume of the causal diamond is determined by a scalar-type gravitational perturbation. Moreover, since the integration in (35) includes integration over the spherical angles, only $l = 0$ may contribute to the time evolution of the volume. Let us now compare the two sets (for $D = 4$) of frequencies (34) with our direct calculation (30). Doing this one should keep in mind the relation between the global and static coordinate systems. One has $\sinh t = (1 - r^2)\sinh t_S$, where $t_S$ is the time coordinate in the static coordinate system and $t$ is the time coordinate in the global coordinate system. Inside the diamond $r$ varies within finite limits. Therefore, for large times, $t \sim t_S$. We see that the set in (34) in which $i\omega$ is an odd number does not show up in the evolution of the volume; at least this is true for the few lowest frequencies. On the other hand, the second set, in which $i\omega$ is an even number, indeed appears in the evolution of the volume. We have repeated the calculation in the previous section for arbitrary $D$. Then the lowest decaying (and, possibly, non-vanishing) term in the volume (30) is of order $e^{-2t_q}$. This is again consistent with the second set of quasi-normal frequencies in (34).
Acknowledgements
This work was initiated at the IHÉS. Both authors would like to thank Thibault Damour and the director, Jean Pierre Bourguignon, for their hospitality during our stay at IHÉS. The work was completed while the first author was visiting the Galileo Galilei Institute; he would like to thank the director and the organisers of the workshop on 'String and M theory approaches to particle physics and cosmology' for their hospitality, and INFN for partial support. The second author would like to thank Michael Gromov for an interesting discussion.
|
2007-06-12T09:55:50.000Z
|
2007-06-05T00:00:00.000
|
{
"year": 2007,
"sha1": "060134cbdee433299583f8bd56459f5985ea36c2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0706.0603",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bf294631fe0384a093a23cc18447fd8b99e471cc",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
}
|
266227203
|
pes2o/s2orc
|
v3-fos-license
|
Telehealth serious illness care program for older adults with hematologic malignancies: a single-arm pilot study
Key Points
• We found a telehealth-delivered SICP to be feasible and usable for older adults with AML and MDS.
• The majority of patients found the telehealth SICP to be worthwhile and would recommend it to others.
Introduction
The emotions may make processing and understanding their diagnosis and treatment options difficult.1,5 In addition, there is very little time between diagnosis and treatment decision-making.6 Older adults, compared with younger individuals, face additional challenges.8,9 Therefore, older patients with AML and MDS often make treatment decisions with limited understanding of their diagnosis and lack of time for emotional coping.
Serious illness conversations (SICs) may increase patients' understanding of their disease, promote hope and illness acceptance, and better prepare them for the future.10 The geriatric assessment (GA) uses validated tools to identify aging-related vulnerabilities (eg, functional impairment and cognitive impairment) that are associated with poor outcomes. GA may help clinicians better identify aging-related vulnerabilities and inform management discussions. Integration of a GA into SICs may help clinicians better tailor SICs based on age-related vulnerabilities and enhance the quality of conversations.11 Furthermore, many older adults with cancer prefer to maintain some sense of control at the end-of-life (EOL), and early SICs may allow patients to discuss their EOL wishes before they are unable to do so.12 In a cross-sectional study of 200 patients with cancer, 82.5% of patients wished that they had an SIC with their physician, and 94% preferred to have these discussions early.13 Therefore, routine SICs have the potential to both mitigate emotional distress and address aging-related concerns that older adults with AML and MDS often have to navigate.
One strategy to address perceived patient discomfort is the use of telehealth, which can promote patient comfort by allowing SICs to take place when patients are at home.17,18 We previously conducted a qualitative study to adapt the SIC program (SICP) for delivery via telehealth to promote early SICs among older adults with AML and MDS.19 The primary aim of this pilot study was to assess the feasibility and usability of the adapted SICP via telehealth for older adults with AML and MDS.
Study design, population, and setting
We conducted a single-arm pilot study at the University of Rochester Medical Center/Wilmot Cancer Institute in Rochester, New York and recruited patients and their caregivers from June 2022 to March 2023.Patients enrolled in this study were aged ≥60 years, had a diagnosis of AML or MDS, were being managed in the outpatient setting, were able to speak English (because the adapted SICP is written in English only), and were able to provide informed consent.Caregivers aged ≥18 years old were enrolled if identified by the patient when asked if there was a "family member, partner, friend, or caregiver with whom you discuss or who can be helpful in health-related matters" and able to provide consent.Patients could enroll in this study with or without caregivers (up to a maximum of 2 caregivers were allowed to enroll), but caregivers could only enroll if their respective patient consented to participate.Caregivers who did not formally enroll in the study could join the SICP visits.Oncologists and advance practice providers who had provided care for at least 1 patient aged ≥60 years with AML/MDS in the past year were also enrolled.This study was approved by the University of Rochester Research Subjects Review Board.
Study procedure and data collection
Patients were identified by the study team, and eligibility was confirmed with both the primary oncologist and principal investigator (K.P.L.).Eligible participants were approached via telephone.Participants who consented to participate completed baseline measures (demographics and cancer health literacy). 20Cancer health literacy was measured using the 6-item Cancer Health Literacy Test (CHLT-6).CHLT-6 is a validated measure with a Cronbach α of 0.96 to 0.99. 20Correct responses to the questions were scored as 1 and summed.Participants were classified as having adequate cancer health literacy (total score, 4-6) or limited cancer literacy (total score <4). 20Clinical information was collected by the study team from the electronic health record (EHR).A 30-to 60-minute SICP visit with the oncology clinician was scheduled within 2 months of consent.Telehealth was defined as the visit taking place via video call or telephone in this study.Participants then completed postintervention measures and participated in an audio-recorded, semistructured interview via telephone to discuss their experience and provide feedback.Audio-recorded postintervention interviews were transcribed verbatim and uploaded to MAXQDA software (VERBI Software GmBH) for analysis.
Intervention description
The adapted SICP includes19,21 (1) GA to evaluate aging-related vulnerabilities obtained from the patient and summarized for clinicians, (2) patient preparation pamphlet, (3) clinician preparation email, (4) SIC conversation guide (SICG), (5) EHR documentation template for clinicians (supplemental Figure 1), and (6) family guide. The patient preparation letter, SICG, and family guide have previously been published.19,22 Before the SICP visit, the study team completed a GA with the patient that was provided to clinicians.24-26 Patients were provided with the patient preparation pamphlet via email or mailed to their home, and clinicians were provided with a preparation email. The preparation email included access to a University of Rochester Medical Center compliant Zoom link with details of the day and time of the visit, summary of the patient's GA, a copy of the SICG, and information regarding the EHR documentation template. Clinicians documented their visit in the EHR using the documentation template. After the visit, patients were provided with the family guide via email or mailed to their home.
Measures
Feasibility and usability. The primary outcome measures for this study were feasibility and usability. Feasibility was measured using retention rate, defined as completion of the SICP visit, with >80% retention considered feasible. Usability was measured using the Telehealth Usability Questionnaire (TUQ; range, 1-7; higher score is better), with an average score of >5 considered usable.27
Other outcome measures. To inform future trials, we also collected other patient and caregiver measures. Patient measures included advance care planning engagement, psychological health, quality of life (QOL), disease understanding, acceptability of the SICP, and satisfaction with communication. Caregiver measures included psychological health, QOL, disease understanding, and satisfaction with communication.
Patient measures: baseline and after intervention
Advance care planning (ACP) engagement was assessed using an adapted 15-item questionnaire on patient readiness and self-efficacy (range, 1-5; higher score is better).28 QOL was assessed using the validated 44-item Functional Assessment of Cancer Therapy-Leukemia (FACT-Leu; range, 0-176; higher score is better).32 The FACT-Leu assessment includes 5 domains: physical well-being (7 items), social/family well-being (7 items), emotional well-being (6 items), functional well-being (7 items), and additional concerns (17 items).32 Disease understanding was assessed using a 5-item questionnaire on prognosis.33 Patients were asked to estimate the curability of their cancer with treatment and their life expectancy, and these results were compared with the responses of clinicians.33 For example, in order to measure the alignment of life expectancy between patients and clinicians, they were asked the following question: "Considering your (the patient's) health, and your (the patient's) underlying medical conditions, what would you estimate your (the patient's) overall life expectancy to be?" Response options were ≤6 months, 7 to 12 months, 1 to 2 years, 2 to 5 years, and >5 years. We described the distribution of responses to this question for patients at baseline and after intervention, as well as for clinicians, and compared the responses at both time points. Responses were considered more aligned if they were similar to each other.
Patient measures: baseline (only)
Social support was assessed using the validated 13-item Older Americans Resources and Services Medical Social Support survey. 34This survey assesses the frequency and availability of social interaction and emotional support for older adults.It also evaluates the patient's perception of their support persons.
Patient measures: postintervention (only)
Acceptability of the SICP was assessed using a 23-item acceptability survey. 21Satisfaction with communication was assessed using the adapted 6-item Health Care Communication Questionnaire ([HCCQ] range, 0-20; higher score is better) and the 1-item Heard and Understood question (range, 0-4; higher score is better). 35,36Satisfaction with communication about other medical issues and aging concerns was assessed using the adapted 7-item HCCQ-Age (range, 0-28; higher score is better). 11Therapeutic alliance was assessed using a modified Human Connection Scale (range, 16-64; higher score is better). 37Acceptance of illness was assessed using the validated 5-item Peace, Equanimity, and Acceptance in Cancer Experience ([PEACE] range, 5-20; higher score is better) questionnaire. 38
Caregiver measures: baseline and postintervention
QOL was assessed using the validated 35-item Caregiver QOL Index for caregivers (range, 0-140; higher score is better).39 Disease understanding was assessed using a similar questionnaire as was used for the patients.33 Caregivers were asked to estimate the curability of the patient's cancer with treatment and the life expectancy for the patient.33
Caregiver measures: postintervention (only)
Similar measures were used to assess satisfaction with communication for caregivers. The caregiver HCCQ also included 2 additional sections: HCCQ-Age about patients (range, 0-28) and HCCQ-Age about caregivers (range, 0-20). One section assessed the caregiver's perception of the patient's communication with the clinician (7 items). The second section assessed the caregiver's own communication with the clinician (5 items).11,35,36
Data analysis
We used descriptive statistics to summarize demographics, feasibility, usability, and patient and caregiver measures. For measures collected at both baseline and postintervention, we used paired t tests or Wilcoxon signed-rank tests to examine change from baseline to postintervention, depending on the distribution of data. Hypothesis testing was performed at α = 0.10 (2-tailed) given the pilot nature of the study and small sample size.41,42 We anticipated that ~20% of the participants would withdraw before postintervention assessment owing to death. With 20 patients enrolled, we anticipated at least 16 patients to be evaluable. When we estimated retention rate and usability, a 95% confidence interval would span ≥25%. We conducted all quantitative analyses using SAS statistical software, version 9.4 (SAS Institute Inc, Cary, NC). For qualitative analyses, 2 coders used open coding and focused content analysis to independently code transcripts for themes. Discrepancies were resolved through consensus between coders. Thematic saturation was achieved, and data were reported using Consolidated Criteria for Reporting Qualitative Research guidelines (supplemental Tables 1 and 2).
Feasibility
We approached 29 patients, and 21 consented (consent rate, 72%).One patient died between consent and baseline, resulting in a total sample of 20 patients enrolled.Of the 20 enrolled patients, 1 died before their scheduled SICP visit, resulting in a total of 19 SICP visits (retention rate, 95%; primary feasibility metric).Five patients did not consent because they did not feel that the visit would be helpful (eg, family members or clinicians already knew their wishes); 2 patients did not want to complete surveys, and 1 patient was concerned about knowing too much about their diagnosis.Thirteen SICP visits were completed using video calls via Zoom; 5 using telephone calls (4 patients preferred phone over video call; 1 patient had technical difficulty with Zoom), and 1 visit was in person (patient was not comfortable with phone or video).
Five patients identified a total of 6 caregivers for enrollment (1 patient had 2 caregivers enrolled), all of whom consented (consent rate: 100%).All consented caregivers participated in SICP visits (retention rate: 100%).Nine patients could not identify a primary caregiver, and 6 patients did not want to burden their caregivers with surveys.Nonenrolled caregivers were present in 9 of the 19 SICP visits.Supplemental Table 3 describes caregiver presence at visits.
Other outcome measures
SICP acceptability (patients).Results of the patient acceptability survey are shown in Figure 1.Fifteen patients (88.2%; 15/ 17) found this conversation to be very or extremely worthwhile.The majority of patients felt that the SICP increased their sense of peacefulness (56.3%; 9/16), sense of control over medical decisions (58.8%; 10/17), and closeness with their clinician (75.0%; 12/16).Most patients found it very or extremely helpful for their clinician to ask about their understanding of where they are with their illness (70.6%; 12/17), communicate their prognosis to them (64.7%;11/17), bring up their personal goals for the future (70.6%; 12/17), and ask about how much their family knows about their priorities and wishes (64.7%; 11/17).
The majority of patients (56.3%; 9/16) felt that this conversation took place at the right time, but some patients (18.8%, 3/16) wished their doctor had raised these topics earlier.
Patient measures: baseline and after intervention.Other patient measures completed at baseline and after intervention are shown in Table 2.After intervention, ACP engagement scores numerically increased (+0.4;P = .12).No significant changes in psychological health (ie, GAD-7, PHQ-9, distress) or QOL (FACT-Leu) occurred after SICP visits.After the SICP visit, patients' life expectancy and curability estimates aligned more closely with their clinicians' estimates (Figures 2 and 3).
Caregiver measures: baseline and after intervention.
Qualitative feedback
Three major qualitative themes emerged from postintervention interviews: (1) participants appreciated the comfort of telehealth during their SICP visit; (2) participants felt that the SICP visit provided them with the opportunity to share their wishes; and (3) participants felt that the SICP visit eased their worries. Almost all patients (94.7%) would recommend the visit if it included a written summary in addition to the conversation.
Participants appreciated the comfort of telehealth during their SICP visit
Patients felt that having their SICP visit from home via telehealth, video, or phone was comforting.Patients also emphasized the convenience of telehealth, because it reduced the need to travel.
Patient 8: "It seems like it's easier to talk to the doctor if you're sitting in the office talking to him, (but) I felt more comfortable sitting in my own chair talking to my doctor, seeing her face (and) seeing her reactions." Participants felt that the SICP visit provided them with the opportunity to share their wishes.
Patients appreciated sharing their wishes regarding EOL care with their clinician and found it helpful to have their family members participate in the discussion.
Patient 7: "There were some questions about end of life and my wishes and things like that.I was very glad that these things were all in the open because my husband was on the phone.Though we have talked about them, my husband and I, I just wanted to confirm to him how I feel, and (my clinician) made it so simple." Patients felt that it was important for their clinician to understand what was important to them, and for many patients in this study, their family is their priority and their major source of strength.
Participants felt that the SICP visit eased their worries
Although some patients were initially hesitant about sharing their fears with their clinician, patients felt comfortable discussing their concerns during the SICP visit.
Patient 6: "In the beginning I was a little bit leery about talking about what was bothering me.But after we got into a discussion I became very, very interested and wanting to tell (my clinician) what my inner thoughts were that I had not been able to share.I think I may have said that I appreciated being able to share some of my inner thoughts with (my clinician)." In addition, sharing their worries allowed patients to feel better prepared and less fearful about the future.
Patient 4: "My one fear would be that…when they get down towards the end (my) doctors kind of bail…just go into hospice or go home and die.And I heard him say very clearly 'I will not abandon you,' which meant a lot." Caregivers similarly felt that the SICP visit eased the patient's worries, thereby also decreasing their anxiety.
Caregiver 5: "Yeah.Actually, I'm a little bit less worried about how he, you know, how he's going to do because I think it calmed him down.And if he is less anxious then I'm a lot less anxious."
Discussion
In this pilot study, we found that telehealth delivery of the adapted SICP for older adults with AML and MDS was feasible (retention rate: 95%) and usable.In qualitative interviews, the majority of patients (89.5%) would recommend a SICP visit to others.Participants appreciated the comfort of telehealth and felt that the SICP provided them with the opportunity to share their wishes.
After their SICP visit, patients' estimates of their curability and life expectancy aligned more closely with those of their clinicians, and postintervention satisfaction with communication was relatively high.
The majority of patients found their SICP visit to be worthwhile, although some patients wished their doctor had brought this conversation up earlier. Patients in this study felt that the SICP provided them with the opportunity to share wishes that they might not have been able to discuss during a routine clinic visit and assisted in creating dialogue with their family members regarding current and ongoing care. Understanding patient care preferences is especially important for patients who are at high risk for rapid clinical decline.43,44 Older adults with AML and MDS often receive life-sustaining treatment at the EOL and die in the hospital, which may not be concordant with their wishes.45,46 Bereaved families of patients with cancer have regretted not talking about death sufficiently, and some of these regrets may be attributed to the family's uncertainty regarding the patient's terminal prognosis.47 SICs encourage ongoing care communication and may increase illness understanding, allowing patients and their families to make value-aligned medical decisions in the present as well as at the EOL.
We found that patients' disease understanding improved after their SICP visit, and curability and life expectancy estimates aligned more closely with those of their clinicians, but disease understanding did not change at postintervention for caregivers. This may be because, at baseline, caregiver knowledge aligned closely with that of clinicians. Specifically, at baseline, 50% (3/6) of caregivers' curability and life expectancy estimates aligned with clinician estimates. For patients, improved prognostic awareness may not improve depression, anxiety, or QOL.48 On the other hand, certain coping strategies improve QOL and reduce depression in patients who experience psychological distress from accurate prognostic awareness.49 Therefore, considering the variable impact of accurate prognostic awareness on psychological health, it is important that SICs offer coping strategies for both patients and caregivers in discussions of prognosis.
Communication scores for participants in this study were high, with mean HCCQ scores of 18.3 (SD, 2.1) and 18.2 (SD, 2.9) for patients and caregivers, respectively. Participants felt heard by their clinicians, and therapeutic alliance as assessed by the Human Connection Scale was relatively high (57.9 [SD, 4.7]). These scores for communication are higher than those reported in previous work, including 1 study that assessed communication with older adults with cancer using the GA, which had a mean HCCQ score of 16.8 (SD, 3.2), and another study that assessed therapeutic alliance between oncologists and patients with advanced cancer, which had a mean Human Connection Scale score of 56.4 (SD, 7.4).11,50 Poor patient-physician communication may lead to confusion and loss of confidence in the care team, whereas strong therapeutic alliance is associated with decreased symptom burden and lower psychological distress.50,51 In addition, high Human Connection Scale scores have been associated with a lower likelihood of intensive care unit use at EOL for patients with cancer.50 Therefore, the SICP has the potential to improve patient-clinician communication, facilitating a stronger alliance between patients and their care team that may lead to better patient-reported outcomes and lower intensity EOL care.
This study has several strengths.First, we focused on a vulnerable population, ie, older adults with AML and MDS.Second, we used telehealth in order to increase accessibility to SICs for patients.Third, we used both quantitative and qualitative methods to comprehensively assess the impact of the SICP on care for participants in this study.This study also has limitations.First, it is a single center, single-arm study with a small sample size.Second, our participants were mostly White and non-Hispanic; therefore our results may not be generalizable to individuals of other races and ethnicities.
In this single-arm pilot study, we found the adapted telehealth SICP to be both feasible and usable. Participants in this study found participating in the SICP to be worthwhile and would recommend this program to others. The adapted SICP has the potential to improve patient disease understanding, strengthen the patient-clinician relationship, and help clinicians align care with what matters most to each patient and their family.
Figure 2. Patient and caregiver life expectancy estimates at baseline and after intervention compared with clinician estimates.
Figure 3. Patient and caregiver curability estimates at baseline and after intervention compared with clinician estimates.
Table 2. Baseline and/or postintervention measures for patients and caregivers.
|
2023-12-16T12:39:20.142Z
|
2023-12-13T00:00:00.000
|
{
"year": 2023,
"sha1": "edcee2f7cf561ea4e5cce12bf424c9b42bbd999a",
"oa_license": "CCBYNCND",
"oa_url": "https://ashpublications.org/bloodadvances/article-pdf/doi/10.1182/bloodadvances.2023011046/2085677/bloodadvances.2023011046.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4aabaedebc7c5810510e4fe6c001f7191a867cbc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
59170660
|
pes2o/s2orc
|
v3-fos-license
|
One bank problem in the federal funds market
The model of this paper gives a convenient strategy that a bank in the federal funds market can use in order to maximize its profit in a contemporaneous reserve requirement (CRR) regime. The reserve requirements are determined by the demand deposit process, modelled as a Brownian motion with drift. We propose a new model in which the cumulative funds purchases and sales are discounted at possibly different rates. We formulate and solve the problem of finding the bank's optimal strategy. The model can be extended to involve the bank's asset size, and we obtain that, under some conditions, the optimal upper barrier for fund sales is a linear function of the asset size. As a consequence, the bank's net purchase amount is linear in the asset size.
Introduction
One of the threats that a financial crisis is perceived to bring to the economy is a lack of liquidity. Because of this threat, in 2008, big banks received help from the Federal Reserve Bank through public money, whereas small banks were left to default or to be acquired by other banks. It was argued that the bailouts were necessary because a big bank's default might be followed by a lack of liquidity in the market and, thus, by a cascade of defaults. However, letting small banks fail and helping the big ones become bigger
The model
Let us consider the problem of a bank which has an exogenously given demand deposit (net of withdrawals) and continuously sells and buys funds, thus lowering or increasing the excess reserves, defined as the difference between deposits and required reserves.
The bank is characterized by: 1. A demand deposit process (D t ) t≥0 .
2. A required reserve process (R t ) t≥0 , where R t = qD t .
Therefore, modelling the deposits D is equivalent to modelling the excess reserves X. Let (Ω, F, P x ) be a probability space rich enough to accommodate a standard, one-dimensional Brownian motion B = (B t , 0 ≤ t ≤ ∞).
Let us consider the problem of a bank which has exogenous demand deposits (net of withdrawals) and continuously sells and buys funds, thus lowering or increasing the demand deposits. The demand deposits X = (X t , 0 ≤ t ≤ ∞) are assumed to fluctuate over time as a Brownian motion with drift, as in equation (1). We consider F = (F t ) t≥0 to be the completion of the augmented filtration generated by X (so that (F t ) satisfies the usual conditions).
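The display equation (1) is not reproduced in this excerpt; a minimal sketch of the dynamics it presumably specifies, consistent with the abstract's "Brownian motion with drift" and with the later identity E x (X t ) = x + µt, is the following.

```latex
% Presumed form of equation (1): the deposit/excess-reserve process as a
% Brownian motion with drift
\[
  X_t = x + \mu t + \sigma B_t , \qquad t \ge 0 ,
\]
% where x is the initial level, \mu the drift, \sigma > 0 the volatility,
% and B a standard one-dimensional Brownian motion.
```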
Therefore, the bank observes nothing except the sample path of X.
Policies
Definition 2.1. A policy is defined as a pair of processes L and U such that L, U are F − adapted, right-continuous, increasing and positive.
In the context of the federal funds market, L t and U t are the cumulative funds purchases and funds sales (from the central bank) that the bank undertakes up to time t, in order to satisfy the reserve requirements and to maximize its profit. Let λ 1 and λ 2 , with λ 1 ≥ λ 2 , be the interest rates at which the bank lends and borrows funds. A controlled process associated to the policy (L, U ) is the process Z = X + L − U . Using formula (1) for X, we obtain the decomposition of Z into its continuous part and its finite variation part. In our model, Z t is the amount of excess funds in the bank's reserve account at time t. The policy (L, U ) is said to be feasible if it satisfies the integrability conditions (6) and (7). We denote by S(x) the set of all feasible policies associated with the continuous process X that starts at x.
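Under the presumed form of (1) sketched above, the decomposition of the controlled process referred to here would read as follows; this is a sketch, since the exact display is not reproduced in the source.

```latex
\[
  Z_t \;=\; X_t + L_t - U_t
      \;=\; \underbrace{x + \mu t + \sigma B_t}_{\text{continuous part}}
      \;+\; \underbrace{L_t - U_t}_{\text{finite-variation part}} .
\]
```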
Transaction Costs
We assume that the bank can continuously sell and buy funds, thus lowering or increasing its excess reserve account. It is considered, as in [3], that there are three types of transaction costs.

Remark 1. The cumulative funds purchases and funds sales are discounted at possibly different rates. If n = 1, then the discounting occurs at the single rate λ 1 . The discount function ne −λ 1 t + (1 − n)e −λ 2 t , n ∈ [0, 1], was considered in [5] and leads to a time-varying discount rate in [λ 2 , λ 1 ].
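As a worked aside (our derivation, not stated in the source), the time-varying rate mentioned in Remark 1 can be made explicit: writing D(t) = ne^{-λ 1 t} + (1 − n)e^{-λ 2 t}, the implied instantaneous discount rate is -D'(t)/D(t), which stays in [λ 2 , λ 1 ] and tends to λ 2 as t → ∞.

```latex
\[
  \lambda(t) \;=\; -\frac{D'(t)}{D(t)}
  \;=\; \frac{n\lambda_1 e^{-\lambda_1 t} + (1-n)\lambda_2 e^{-\lambda_2 t}}
             {n e^{-\lambda_1 t} + (1-n) e^{-\lambda_2 t}}
  \;\in\; [\lambda_2, \lambda_1] .
\]
```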
The Objective
The bank's reserve management and profit-making problem is to find the optimal strategy (L̂, Û ) which minimizes the cost.
The problem of minimizing the cost can be translated to the task of maximizing a value function. This function is easier to work with and it turns out to have particular characteristics, when the policy is of a barrier type. We present the relation between the cost function and the gain function obtained in [6].
The Gain Function
Then extending the arguments from [6] one gets the following Lemma.
Lemma 1. The relation between the cost function and the gain function is
4 The Optimal Policies
The Barrier Policies
Let b > 0 be a fixed real number.

Definition 4.1. The barrier policies associated with b are the policies (L, U ) ∈ S(x) that keep the controlled process Z = X + L − U inside the band [0, b]. A barrier policy (L, U ) admits an explicit representation, in which x − denotes the negative part of x. Moreover, the Double Skorokhod Formula obtained in [7] can be translated into a formula for the bank's transaction amount L − U , as shown in [2].
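A compact way to state the defining property of a barrier policy, consistent with the proofs in the appendices (where L̂ increases only when Z = 0 and Û only when Z = b), is the following sketch; the precise displays of Definition 4.1 are not reproduced in the source.

```latex
\[
  Z_t \in [0,b] \ \text{for all } t, \qquad
  \int_0^\infty \mathbf{1}_{\{Z_{s} > 0\}}\, dL_s = 0, \qquad
  \int_0^\infty \mathbf{1}_{\{Z_{s} < b\}}\, dU_s = 0 ,
\]
% i.e., L pushes Z up only at the lower barrier 0 and U pushes Z down
% only at the upper barrier b.
```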
Main Result
The following is the main result of our paper.
Theorem 1. The barrier policy (L̂, Û ) associated with the b of (16) is optimal, i.e., for every (L, U ) ∈ S(x) its cost is at least that of (L̂, Û ).
Special Case
Let us take n = 1, so that we have the same discount rate λ 1 . Moreover, let the drift µ and volatility σ depend on the bank size A. Inspired by [2], we take µ and σ linear in A, i.e., µ = k 1 A, σ = k 2 A.

Remark 2. This result can be used by a bank to develop a strategy for selling funds when its controlled excess reserve process hits the optimal upper barrier b, i.e., a certain percentage of its asset size. This corollary is consistent with the so-called small bank / large bank dichotomy, meaning that the bigger the size of the bank, the larger the net purchase amount that the bank undertakes.
Appendix A: Proof of Lemma 1
Proof. Without loss of generality, we can assume that U 0 = L 0 = 0 (the other cases are similar, given (6)).
Since Z ≡ X + L − U , we have: Applying Fubini's theorem, we obtain: We know that since X is a (µ, σ) Brownian motion, E x (X t ) = x + µt, and a simple integration gives the following: From the last two formulas we conclude that Next, we recall the Riemann-Stieltjes integration by parts theorem, which states that if two functions f, g are FV (of finite variation), then: Noticing that since L is increasing, L is FV, and applying the above-mentioned theorem, we obtain, for each fixed T > 0 and for general λ > 0: Applying Fatou's lemma twice, (26) and (6), we obtain that e −λt L t → 0 almost surely as t → ∞. Indeed, if this were not true, then, since e −λt L t ≥ 0 on a set of non-zero measure, the corresponding integral would become unbounded, and we reach a contradiction.
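For reference, the integration-by-parts identity recalled above, applied with the continuous function e^{-λt} and the increasing (hence FV) process L, presumably takes the following form; this is a sketch, as the displays themselves are not reproduced in the source.

```latex
\[
  \int_0^T e^{-\lambda t}\, dL_t
  \;=\; e^{-\lambda T} L_T - L_0 + \lambda \int_0^T e^{-\lambda t} L_t \, dt ,
  \qquad T > 0,\ \lambda > 0 .
\]
```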
Letting T → ∞ in (26) and then taking E x on both sides, we obtain for every λ > 0: We obtain a similar equation for U . Replacing it and (27) (for both λ 1 and λ 2 ), (25), (24) in the definition (8) for k L,U (x), we obtain:
Appendix B: Proof of Proposition 1
Proof. If u : R → R is a function of class C 2 (i.e. twice continuously differentiable), we denote by Γ the generator of the continuous diffusion process X in (1): Indeed by Ito's Lemma combined with the fact thatL increases only when Z = 0, whereaŝ U increases only when Z = b yields: Since Z is bounded a.s. then e −λs v ′ 1 (Z) is bounded a.s., thus the process From the definition of v 1 we infer that Applying integration by parts leads to: By taking expectation, then letting t → ∞, and using that v 1 is bounded leads to LetZ = X +L then Indeed by Ito's Lemma combined with the fact thatL increases only when Z = 0 yields: SinceZ is positive a.s. then e −λs v ′ 2 (Z) is bounded a.s., thus the process From the definition of v 2 we infer that Applying integration by parts leads to: By taking expectation, then letting t → ∞, and using that v 1 is bounded on R + leads to Next let us check the feasibility of the policy (L,Û ), i.e., and From (32) we get Moreover combined with (30) yield
Appendix C: Proof of Theorem 1
Proof. The idea of the proof is based on the martingale/supermartingale principle. In a first step we show that some processes are supermartingales.
Lemma 2. For every (L, U ) ∈ S(x), with Z = X + L − U, the process is supermartingale. Moreover withZ = X + L, the process Therefore for a fixed T > 0, by taking expectations we get Next, the positivity of Z, the linearity of v 1 (z) for large z, the integrability conditions (6), (7) and Dominated Convergence Theorem yield that Similarly The positivity ofZ, the roundedness of v 1 (z) for positive z, the integrability condition (6). and Dominated Convergence Theorem yield that By adding these inequalities we get However by Proposition 1 vL , which proves optimality of (L,Û )
Proof of Lemma 2
Recall the expression for Γv 1 . Moreover, by Ito's Lemma for processes with jumps, where dL̃ = dL − ∆L and dŨ = dU − ∆U. In the light of (39), the claim follows if we prove that Σ 0≤s≤t e −λs (∆v 1 (Z) s − c∆L s + r∆U s ) ≤ 0.
By Ito's Lemma for processes with jumps Suppose that ∆L t > 0, then ∆Z t = ∆L t and The last quantity is negative because v ′ 2 (z) ≤ (1 − n)α, z ≥ 0.
|
2016-05-24T13:06:44.000Z
|
2015-07-15T00:00:00.000
|
{
"year": 2016,
"sha1": "990364a096f1cfc5146342a06860f60f3c6fe6df",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cc912c6251b89cc77c96d8b5415d20a7e0dacbaa",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Mathematics"
]
}
|
119631319
|
pes2o/s2orc
|
v3-fos-license
|
Noetherian properties of Fargues-Fontaine curves
We establish that the extended Robba rings associated to a perfect nonarchimedean field of characteristic p, which arise in p-adic Hodge theory as certain completed localizations of the ring of Witt vectors, are strongly noetherian Banach rings; that is, the completed polynomial ring in any number of variables over such a Banach ring is noetherian. This enables Huber's theory of adic spaces to be applied to such rings. We also establish that rational localizations of these rings are principal ideal domains and that etale covers of these rings (in the sense of Huber) are Dedekind domains.
Introduction
The field of p-adic Hodge theory has recently been transformed by a series of new geometric ideas. Central among these is the reformulation of the basic theory by Fargues and Fontaine [5] (see also [4], [6], [12]) in terms of vector bundles on certain noetherian schemes associated to perfect nonarchimedean fields of characteristic p. While these schemes are not of finite type over a field, they have certain formal properties characteristic of proper curves; for instance, their Picard groups surject canonically onto Z.
The so-called Fargues-Fontaine curves also admit canonical analytifications; more precisely, to each Fargues-Fontaine curve, one can functorially associate an object in Huber's category of adic spaces [8] which maps back to the original scheme in the category of locally ringed spaces. The pullback functor on vector bundles induced by this morphism is an equivalence of categories [12, §8]; this constitutes a version of the GAGA principle.
One expects a similar result for coherent sheaves, but in order to build a theory of coherent sheaves on adic spaces, one must restrict to spaces satisfying certain noetherian hypotheses. Some care is needed because there is no analogue of the general Hilbert basis theorem for noetherian Banach rings: if A is such a ring, then Tate algebras over A (completion of polynomial rings over A for the Gauss norm) is not known to be noetherian. One must thus consider adic spaces which locally arise from Banach rings for which the Tate algebras are all noetherian (i.e., these rings are strongly noetherian).
In this paper, we establish the strongly noetherian property for the rings used to build the adic Fargues-Fontaine curves (Theorem 3.2, Theorem 4.10); this answers a question of Fargues [4]. These rings, which are derived from the Witt vectors over a perfect field which is complete with respect to a multiplicative norm, appear frequently in p-adic Hodge theory as extended Robba rings (e.g., see [12]). We also establish some finer properties of these rings: any rational localization is a finite direct sum of principal ideal domains (Theorem 7.11), and any étale covering in the sense of Huber is a finite direct sum of Dedekind domains (Theorem 8.5).
Acknowledgments
The author was supported by NSF grant DMS-1101343, and thanks MSRI for its hospitality during fall 2014 as supported by NSF grant DMS-0932078. Thanks also to Laurent Fargues for helpful discussions.
Euclidean division for Witt vectors
We begin by recalling the basic setup, fixing notations, and reviewing the Euclidean division ring for certain rings of Witt vectors.
Hypothesis 2.1. Throughout this paper, let p be a fixed prime, let q be a power of p, let L be a perfect field containing F q which is complete with respect to the multiplicative nonarchimedean norm |•|, let E be a complete discretely valued field whose residue field contains F q , and fix a uniformizer ̟ ∈ E.
Note that each element of A L,E (resp. B L,E ) can be written uniquely in the form Σ n∈Z ̟ n [x n ] for some x n ∈ L which are zero for n < 0 (resp. for n sufficiently small) and bounded for n large. For t ∈ [0, +∞), define the "Gauss norm" function λ t : B L,E → R by the formula (2.2.1), interpreting |x n | t as 1 in the case t = 0, so that λ 0 is the ̟-adic absolute value.
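The formula (2.2.1) itself is not reproduced in this excerpt; based on how it is used later (the condition p^{-n}|x_n|^r → 0 in Definition 2.4 and the description of deg(x)), it is presumably the weighted Gauss norm sketched below.

```latex
% Presumed form of (2.2.1), for x = \sum_{n} \varpi^n [x_n] \in B_{L,E}:
\[
  \lambda_t(x) \;=\; \sup_n \, \bigl\{ p^{-n} \, |x_n|^{t} \bigr\},
  \qquad t \in [0, +\infty),
\]
% with |x_n|^t interpreted as 1 when t = 0 (for x_n \neq 0), so that
% \lambda_0 is the \varpi-adic absolute value.
```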
Proof. This is a straightforward consequence of the homogeneity properties of Witt vector arithmetic. See for instance [11, §4].
Definition 2.4. Let A r L,E be the completion of A L,E with respect to λ r . Note that A r L,E maps into W (L) ⊗ W (Fq) o E ; more precisely, if we write an arbitrary element x ∈ W (L) ⊗ W (Fq) o E as a p-adically convergent sum ∞ n=0 ̟ n [x n ], then x ∈ A r L,E if and only if p −n |x n | r → 0 as n → ∞. Moreover, the formula (2.2.1) continues to hold for x ∈ A r L,E and t ∈ [0, r]. Consequently, in the case E = Q p , the rings A r L,E , B r L,E coincide with the rings denoted R int,r L ,R bd,r L in [12].
For x ∈ A r L,E nonzero, define the Newton polygon of x as the portion of the boundary of the convex hull of the set with slopes in the range (0, r]. For t ∈ (0, r], the multiplicity of t in (the Newton polygon of) x is the height of the segment of the Newton polygon of x lying on a line of slope t, or 0 if no such segment exists; note that this quantity is always a nonnegative integer.
For x ∈ A r L,E , we define the degree of x, denoted deg(x), to be the largest n realizing λ r (x) = max n {p −n |x n | r }, or equivalently, the sum of the p-adic valuation of x plus the multiplicities of all slopes of x. By convention, we also put deg(0) = −∞.
Lemma 2.6. For x 1 , x 2 ∈ A r L,E nonzero and t ∈ (0, r], the multiplicity of t in (resp. the degree of ) x 1 x 2 is the sum of the multiplicities of t in (resp. the degrees of ) f 1 and f 2 .
Proof. This follows from the multiplicative property of the norms λ t together with convex duality. We omit further details.
Remark 2.7. Note that for x, y ∈ A r L,E such that λ r (x − y) < λ r (x), we have deg(x) = deg(y). This observation indicates that if one is willing to neglect lower-order terms, then degrees in our sense behave like the degrees of ordinary polynomials.
The ring A r L,E admits a Euclidean division algorithm as described in [9, Lemma 2.6.3]. However, we opt to give a self-contained proof for several reasons. The level of generality in [9] is at once too high (there are intended applications in which one considers somewhat smaller rings) and too low (the field E therein is forced to be of characteristic 0) to match our setup here. In addition, there are a number of minor but confusing errors in the presentation in [9]; we have corrected these in the arguments that follow. (See [12, §4.2] for errata in the context of [9].) Lemma 2.8. For x ∈ A r L,E nonzero, there exists ǫ ∈ (0, 1) with the following property: for any y ∈ A r L,E , we can write y = zx + w for some z, w ∈ A r L,E subject to the following conditions.
Proof. Put m = deg(x) and write x = ∞ n=0 ̟ n [x n ]. We may then choose ǫ ∈ (0, 1) such that λ r (̟ n [x n ]) ≤ ǫλ r (x) for n > m. We prove the claim for this value of ǫ.
The strong noetherian property
We now prove an analogue of the Hilbert basis theorem for the ring A r L,E .
We may view A{T 1 /ρ 1 , . . . , T n /ρ n } as the subring of A[[T 1 , . . . , T n ]] consisting of those series Σ I a I T I for which |a I | ρ 1 i 1 · · · ρ n i n → 0 as i 1 + · · · + i n → ∞. One would like to know that A{T 1 /ρ 1 , . . . , T n /ρ n } is noetherian whenever A is, but this is only known under somewhat restrictive hypotheses, e.g., when A is a nonarchimedean field [2, Theorem 5.2.6/1]. Over the course of §3, we will prove the following theorem, which answers a question of Fargues [4] by proving that A r L,E is strongly noetherian in the sense of Huber. This means that Huber's theory of adic spaces, as developed in [8], applies to this ring; however, we will not pursue this point here.
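For later use (the quantities |x| ρ and |x I T I | ρ appearing in §3), the weighted Gauss norm implicit in this definition can presumably be written as follows; this is a sketch inferred from standard usage, not a display reproduced from the source.

```latex
\[
  \Bigl| \sum_I a_I T^I \Bigr|_{\rho}
  \;=\; \max_I \, |a_I| \,\rho_1^{i_1} \cdots \rho_n^{i_n},
  \qquad I = (i_1,\dots,i_n) \in \mathbb{Z}_{\ge 0}^n ,
\]
% and A\{T_1/\rho_1,\dots,T_n/\rho_n\} is then the completion of
% A[T_1,\dots,T_n] with respect to this norm.
```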
Our approach to the proof relies on some standard ideas from the theory of Gröbner bases; indeed, it can be used to give an alternate proof of [2, Theorem 5.2.6/1]. We start with the underlying combinatorial construction.

Hypothesis 3.3. For the remainder of §3, retain notation as in Theorem 3.2, let H be an ideal of R, and let I = (i 1 , . . . , i n ) and J = (j 1 , . . . , j n ) (and subscripted versions thereof, such as I k = (i k,1 , . . . , i k,n )) denote elements of the additive monoid Z n ≥0 .

Definition 3.4. We equip Z n ≥0 with the componentwise partial order ≤, for which I ≤ J if and only if i k ≤ j k for k = 1, . . . , n. This partial order is a well-quasi-ordering: any infinite sequence contains an infinite nondecreasing subsequence.
We also equip Z n ≥0 with the graded lexicographic total order ≼, for which I ≺ J if either i 1 + · · · + i n < j 1 + · · · + j n , or i 1 + · · · + i n = j 1 + · · · + j n and there exists k ∈ {1, . . . , n} such that i l = j l for l < k and i k < j k . Since ≼ is a refinement of ≤, it is a well-ordering.
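As a quick illustrative example of the graded lexicographic order (ours, not from the source), take n = 2:

```latex
\[
  (0,3) \prec (2,2) \ \text{since } 0+3 < 2+2, \qquad
  (1,2) \prec (2,1) \ \text{since } 1+2 = 2+1 \text{ and } 1 < 2 \text{ in the first coordinate}.
\]
% Note that (1,2) and (2,1) are incomparable under the componentwise order \le,
% but every pair of indices is comparable under \preceq.
```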
Remark 3.5. In commutative algebra, the only critical properties of ≼ are that it is a well-ordering and that it refines ≤. In some cases (such as ours), it is also important that for any I, there are only finitely many J with J ≼ I. In any case, there are many options for ≼ with similar properties, giving rise to many different term orderings which are relevant for practical applications. See for instance [3].
We next define a notion of leading terms for elements of R. Note that a similar construction appears already in [10].
Definition 3.6. For x = Σ I x I T I ∈ R nonzero, define the leading index of x to be the index I which is maximal under ≼ for the property that |x I T I | ρ = |x| ρ , and define the leading coefficient of x to be the corresponding value of x I .
We can now define an analogue of a Gröbner basis for the ideal H.

Definition 3.7. For each I, let d I be the smallest possible degree of the leading coefficient of an element of H with leading index I, or +∞ if no such element exists. Note that if I ≤ J, then d J ≤ d I . Since Z n ≥0 is well-quasi-ordered under ≤, the set of I for which d I < +∞ contains only finitely many minimal elements with respect to ≤. Consequently, the set of possible finite values of d I is bounded above, and hence is finite. For each nonnegative integer d, let S d be the set of I which are minimal with respect to ≤ for the property that d I = d; then S d is finite for all d and empty for all but finitely many d. Let S be the union of the S d . For each I ∈ S, choose x I ∈ H \ {0} with leading index I and leading coefficient of degree d I .
We claim that the finite set {x I : I ∈ S} generates the ideal H. As in the proof of Proposition 2.9, we first establish a certain approximate version of this statement, using an iterative construction and a proof by contradiction based on well-ordering properties.
Lemma 3.8. There exists ǫ ∈ (0, 1) with the following property: for each y ∈ H, there exist a I ∈ R for I ∈ S such that |a I | ρ |x I | ρ ≤ |y| ρ and y − I∈S a I x I ρ ≤ ǫ |y| ρ .
Proof. Write x I = J x I,J T J and let c I = x I,I be the leading coefficient of x I . Let ǫ be the maximum of x I,J T J ρ / c I T I ρ over all J for which I ≺ J (or any value in (0, 1) in case no such J exists); by construction, we have ǫ ∈ (0, 1). We prove the claim for this value of ǫ.
We define y l ∈ H, a l,I ∈ R for l = 0, 1, . . . and I ∈ S as follows. Put y 0 = y. Given y l = J y l,J T J , if |y l | ρ ≤ ǫ |y| ρ , put a l,I = 0 and y l+1 = y l . Otherwise, y l is nonzero, so it has a leading index J l . By construction, we can find an index I l ∈ S such that I l ≤ J l and d I l = d J l . Apply Proposition 2.9 to write y l,J l = z l c I l + w l for some z l , w l ∈ A r L,E with |w l | ≤ |y l,J l | and deg(w l ) < deg(c I l ) = d I l . Put If |y l | ρ ≤ ǫ |y| ρ for some l, then the sums a I = ∞ l=0 a l,I are finite and have the desired effect. It thus suffices to derive a contradiction under the assumption that |y l | ρ > ǫ |y| ρ for all l.
Define the ǫ-support of y l to be the finite set E l consisting of those J for which y l,J T J ρ > ǫ |y| ρ ; in particular, J l ∈ E l . By virtue of our choice of ǫ, E l and E l+1 agree for all indices J for which J l ≺ J. In particular, since E 0 is finite, we can choose J + for which J J + for all J ∈ E 0 , and then J J + for J ∈ E l for all l.
The set {J ∈ Z n ≥0 : J J + } is finite, so there are only finitely many indices which occur as J l for infinitely many l. Let J be the largest such value with respect to ≺; then there must be some nonnegative integers l < l ′ such that J l = J l ′ = J and J k ≺ J for l < k < l ′ . But we then have which gives a contradiction by Remark 2.7.
We now finish as in the proof of Proposition 2.9.
Lemma 3.9. The finite set {x I : I ∈ S} generates the ideal H. Consequently, Theorem 3.2 holds.
Proof. Choose ǫ ∈ (0, 1) as in Lemma 3.8. For y ∈ H, define sequences y 0 , y 1 , . . . and a 0,I , a 1,I , . . . for I ∈ S as follows: put y 0 = y, and given y l , apply Lemma 3.8 to construct a l,I ∈ R for I ∈ S such that |a l,I | ρ |x I | ρ ≤ |y l | ρ and |y l − Σ I∈S a l,I x I | ρ ≤ ǫ |y l | ρ , then put y l+1 = y l − Σ I∈S a l,I x I . By construction, |y l | ρ ≤ ǫ l |y| ρ , so the sequence {y l } converges to zero and the sums a I = Σ ∞ l=0 a l,I converge to limits satisfying y = Σ I a I x I .
Some additional rings
We next define the rings that appear directly in the study of the adic spaces associated to Fargues-Fontaine curves, and use Theorem 3.2 to extend the strong noetherian property to these rings.
Proof. By continuity, it suffices to check the inequality for x ∈ B L,E . From the shape of the formula (2.2.1), we may further reduce to the case where x = ̟ n [x n ] for some n ∈ Z, x n ∈ L. But in this case, the desired inequality becomes an equality. L,E nonzero, define the Newton polygon of x by choosing some x ′ ∈ B L,E with λ t (x ′ − x) < λ t (x) for all t ∈ I, forming the Newton polygon of x ′ , then discarding segments corresponding to slopes not in I. Note that this construction does not depend on the choice of x ′ , and inherits the multiplicativity properties from the corresponding definition for A L,E (Lemma 2.6). We define the multiplicity of slopes as before, and the degree of x as the sum of all multiplicities, which is again a nonnegative integer. . Conversely, if x has degree 0, then for some n ∈ Z, x n ∈ L × we have λ t (x − ̟ n [x n ]) < λ t (̟ n [x n ]) for all t ∈ I, so we may compute an inverse of x using a convergent geometric series. Next, put x = ̟ n [x n ] for some n ∈ Z, x n ∈ L. Let j be the smallest nonnegative integer such that c −j |x n | ≥ 1 and take y = ̟ n [x n z −j ]. For t ∈ I with t ≤ t 0 , we have ρ j λ t (y) ≤ ρ j λ t 0 (y) = λ t 0 (x). For t ∈ I with t > t 0 , in case j = 0 we obviously have λ t (y) = λ t (x); otherwise, we have c −j+1 |x n | < 1 and so ρ j λ t (y) < c t 0 −t λ t 0 (x). For z = yT j ∈ B I L,E {T /ρ}, we therefore have |z| ρ ≤ c t 0 −t λ I ′ (x). To conclude, it is sufficient to check that if ρ ∈ p Q , then for any x = ̟ i [x n ] with λ I ′ (x) = 1, we can lift some power of x to z ∈ B I L,E {T /ρ} with |z| ρ = 1. Set notation as above. If j = 0, then λ I ′ (x) = λ s (x) = λ s (y) = λ I (y) = |z| ρ .
If j > 0, then λ I ′ (x) = λ t 0 (x) = p −n |x n | t 0 . Since λ I ′ (x) = 1 and ρ ∈ p Q , after raising x to a suitable power we have x n z −j = 1, so This completes the proof.
By combining Theorem 3.2, Remark 4.3, and Lemma 4.9, we obtain the following result. Remark 4.11. The fact that the rings B I L,E {T 1 /ρ 1 , . . . , T n /ρ n } are noetherian for ρ 1 = · · · = ρ n = 1 means that B I L,E is strongly noetherian in the sense of Huber. However, we do not know how to deduce this directly from the restricted form of Theorem 3.2 in which one only allows ρ 1 = · · · = ρ n = 1: we need to allow arbitrary ρ in order to fix the left endpoint of the interval I using Lemma 4.9. We also do not know how to give a direct proof of Theorem 4.10 in the style of the proof of Theorem 3.2 except in the case where I = [r, r] consists of a single point, in which case λ I = λ r is again multiplicative. L,E is noetherian because A r L,E is. However, due to the mismatch of topologies, we do not know how to prove that B However, the resulting ring can be shown to be nonnoetherian, by exploiting the existence of elements with infinitely many distinct slopes in their Newton polygons (or equivalently, the fact that the maxima have become suprema).
A descent construction
Before continuing, we record a descent argument which will allow us to freely enlarge the field L in what follows.
Convention 5.1. We adopt conventions concerning Banach rings, adic Banach rings, Gel'fand spectra, and adic spectra as in [12]. In particular, we write M(R) for the Gel'fand spectrum of the Banach ring R and Spa(R, R + ) for the adic spectrum of the adic Banach ring (R, R + ).
Hypothesis 5.2. Throughout §5, let L ′ be a perfect overfield of L which is complete with respect to a multiplicative nonarchimedean norm extending the norm on L. (b) The simplicial exact sequence is almost optimal; that is, the quotient and subspace norms at each point coincide.
Consequently, we may complete the tensor product to obtain another almost optimal exact sequence.
Using Lemma 5.3, we obtain a descent property for ideals in W (o L ).
Primitive elements of degree 1
We next focus attention on those elements of W (o L ) which behave like monic linear polynomials in the variable p. These elements control much of the algebra and geometry of the rings we are considering. In particular, they give rise to a deformation retraction on M(B I L,E ) as described in [11]. L,E equals β. Proof. Let F ′ be a completed algebraic closure of H(β). Let L ′ be the perfect field corresponding to F ′ under the perfectoid correspondence [12,Theorem 3.5.3]; recall that L ′ may be identified set-theoretically with the the inverse limit of F ′ under the p-power map. We may then take u to be a coherent sequence of roots of p in F ′ . Proof. The ring B I L,E is preperfectoid [12,Theorem 5.3.9], and hence stably uniform [12,Theorem 3.7.4]. Lemma 7.4. Let x ∈ C be an element which is not a unit. Then for some L ′ , we can find Proof. By Theorem 4.10, the ring C is noetherian, so all of its ideals are closed [12, Remark 2. Proof. This follows by Lemma 7.4 and consideration of slopes.
Proof. By [12, Proposition 2.6.4], the connectedness of C implies the connectedness of M(C). Consequently, for x ∈ C, if there exists a single β ∈ M(C) of positive radius with β(x) = 0, using Lemma 7.6 and the construction from Definition 6.4 to move up and down, we may show that γ(x) = 0 for all γ ∈ M(C). By Lemma 7.3, this forces x = 0.
Corollary 7.8. If C is connected, then it is an integral domain. Proof. Since C is noetherian by Theorem 4.10, we may reduce to the case where C is connected, and hence an integral domain by Corollary 7.8. It suffices to check that for each β ∈ M(C) for which β(x) = 0, there exists a neighborhood U of β in M(C) such that γ(x) = 0 for γ ∈ U \ {β}. Note that β must have radius 0 by Corollary 7.7. By Lemma 7.4, we can choose some L ′ and some u ∈ m L ′ \{0} such that β ′ = H(̟−[u], 0) restricts to β. Let m be the ideal of B I L ′ ,E generated by ̟ − [u]; then B I L ′ ,E /m ∼ = H(β ′ ), so in particular m is maximal. By Krull's intersection theorem [3,Corollary 5.4], if x ∈ m n C ′ for all n then x vanishes in the local ring C ′ m , but this yields a contradiction against Corollary 7.7. By Lemma 7.9, we can find some n such that (̟ − [u]) n divides x in C ′ and the quotient y has nonzero image in H(β ′ ).
We may thus choose a neighborhood U ′ of β ′ in M(C ′ ) such that γ(y) = 0 for all γ ∈ U ′ , and hence γ(x) = 0 for all γ ∈ U ′ \ {β ′ }. Since β has radius 0, U ′ restricts to a neighborhood of U in M(C) of the desired form.
Theorem 7.11. The ring C has the following properties.
(a) The ring C is a direct sum of finitely many noetherian integral domains C 1 , . . . , C n .
(b) For i = 1, . . . , n, every element of C i can be written as an element of W (o L ) times a unit.
(c) For i = 1, . . . , n, C i is a principal ideal domain.
The fact that B I L,E itself is a principal ideal domain was known previously; see [12, Proposition 2.6.8].
Proof. We have (a) thanks to Theorem 4.10 and Corollary 7.8. We may thus assume hereafter that C itself is a noetherian integral domain.
Choose any nonzero x ∈ C. By Lemma 7.10, there are only finitely many β ∈ M(C) for which β(x) = 0. If there are no such β, then x is a unit by [12,Corollary 2.3.5]. Otherwise, by Lemma 7.4, each such β may be lifted to H(̟ −[u], 0) for some L ′ and some u ∈ m L ′ \{0}. We may make a single choice of L ′ and then let u 1 , . . . , u l be the resulting values of u. We may then apply Lemma 5.4 to the product l i=1 (̟ − [u i ]) to write it as a unit times some element y 0 ∈ W (o L ), which then must be a divisor of x in C. We thus form a sequence x 0 = x, x 1 , . . . of elements of C in which for each i ≥ 0, we have x i = y i x i+1 for some y i ∈ W (o L ) which is not a unit in C. Since C is noetherian, we cannot extend this sequence indefinitely; we then have that x is the product of the y i times a unit. This proves (b), which implies (c) because A r L,E is a principal ideal domain by Corollary 2.10.
Structure of étale morphisms
To conclude, we extend the preceding results to étale morphisms. Proof. Since C is strongly noetherian, (C, C + ) is sheafy [7, Theorem 2.2], so the claim may be checked locally. By Lemma 8.2, we may assume that (B I L,E , B I,+ L,E ) → (C, C + ) factors as (B I L,E , B I,+ L,E ) → (C 0 , C + 0 ) → (C, C + ) where the first map is a rational localization and the second is a finite étale morphism. By [12, Theorem 5.3.9], B I L,E is preperfectoid, as then is C 0 by [12, Theorem 3.7.4] and C by [12, Proposition 3.7.5]. In particular, C is uniform.
Lemma 8.4. If C is connected, then it is an integral domain. Moreover, for any rational localization (C, C + ) → (D, D + ) the map C → D is injective.
Proof. Set notation as in Lemma 8.2. For each i, the norm of any nonzero x ∈ D i to C i is nonzero, so by Corollary 7.7 we cannot have β(x) = 0 if β ∈ M(D i ) restricts to a point of M(C i ) of positive radius. Both claims follow at once.
Theorem 8.5. The ring C is a direct sum of finitely many Dedekind domains.
Proof. Since C is noetherian by Theorem 4.10, we may reduce to the case where C is connected, and hence an integral domain by Lemma 8.4. It thus remains to prove that for any maximal ideal m of C, the local ring C m is principal. Since C is noetherian, we may check this after completion; by the proof of Lemma 7.9, this completion remains unchanged after replacing C with D for any rational localization (C, C + ) → (D, D + ) such that mD = D. By Lemma 8.2, we may thus reduce to the case where C is finiteétale over a rational localization, and then the claim follows from Theorem 7.11(c).
|
2015-07-06T17:53:02.000Z
|
2014-10-20T00:00:00.000
|
{
"year": 2014,
"sha1": "06d3f4e5dd5e92090f889f78bc1909b2a620305d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1410.5160",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "06d3f4e5dd5e92090f889f78bc1909b2a620305d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
21177543
|
pes2o/s2orc
|
v3-fos-license
|
Echocardiographic Parameters as Cardiovascular Event Predictors in Hemodialysis Patients
Methods: Sixty consecutive patients with CKD on hemodialysis were clinically evaluated and underwent Doppler echocardiography, being followed for 19 ± 6 months. The outcome measures were fatal and nonfatal cardiovascular events and overall mortality. The predictive value of echocardiographic variables was evaluated by Cox regression model and survival curves were constructed using the Kaplan-Meier method and log rank test to compare them.
Introduction
The annual mortality rates in patients with chronic kidney disease (CKD) undergoing dialysis are high.According to the dialysis census of the USA 1 , the survival of hemodialysis patients in the country was 77.4% in one year and 34.2% in five years, from 1999 to 2003.In Brazil, the annual crude mortality was 17.1% in 2009 2 .
Cardiovascular diseases (CVD) account for approximately 50% of all deaths in patients undergoing hemodialysis.Moreover, these patients are often hospitalized and CVD account for approximately one third of hospital admissions 3 .
Structural and functional alterations detected by echocardiography, such as left ventricular (LV) hypertrophy and systolic and diastolic dysfunction, are very prevalent in the hemodialysis population.Doppler echocardiographic diagnosis of these abnormalities has been an important step for the characterization of individuals with higher cardiovascular risk 4,5 .
Several studies have sought to determine the prognostic value of alterations such as LV hypertrophy and systolic dysfunction in patients undergoing hemodialysis [6][7][8] .However, studies evaluating the predictive value of diastolic dysfunction in this population are scarce.Thus, the objective of this study was to determine the prognostic value of echocardiographic parameters in patients with CKD undergoing hemodialysis.
Study design and population
This is an observational, analytical, prospective cohort study, carried out in the Dialysis Service of Centro de Nefrologia do Maranhão (CENEFRON). We evaluated 60 consecutive patients with CKD undergoing hemodialysis in this service. The hemodialysis sessions of all patients were performed three times a week, and lasted four hours, in volumetric machines.
The left ventricular ejection fraction (LVEF) was calculated by the method described by Teichholz et al. 14 and the left ventricular systolic fractional shortening by the formula (LVEDD − LVESD)/LVEDD. The left atrial volume (LAV) was determined from the two-dimensional planimetry using the biplane method of Simpson 15 and then indexed to the BSA to obtain the left atrial volume index (LAVi = LAV/BSA).
Mitral flow was measured in apical four-chamber view by pulsed Doppler. The sample was positioned between the distal ends of the mitral valve leaflets, and then the following variables were obtained: early (E) and late (A) transmitral diastolic velocities, E/A ratio and E-wave deceleration time (EDT). Tissue Doppler was performed in the apical four-chamber view to obtain the velocities of the mitral annulus. The sample was placed at the junction of the LV lateral wall with the mitral annulus 16 , and then early (e') and late (a') diastolic velocities of the mitral annulus were identified, as well as the e'/a' and E/e' ratios.
Left ventricular hypertrophy (LVH) was diagnosed when LVMI was > 115 g/m 2 for men and > 95 g/m 2 for women.
LV geometry was classified according to the values of RWT and LVMI as: concentric hypertrophy (presence of LVH and RWT > 0.42), eccentric hypertrophy (presence of LVH and RWT ≤ 0.42), concentric remodeling (absence of LVH and RWT > 0.42) and normal geometry (no LVH and RWT ≤ 0.42).We defined left atrial (LA) enlargement in the presence of LAVI > 28 mL/m 2 , while LV dilatation was defined when the LVEDD was > 5.9 cm for men and > 5.3 cm for women 15 .
Systolic dysfunction was considered when the EF was < 55% 15 .LV diastolic function was classified into four patterns: normal, abnormal relaxation (mild diastolic dysfunction), pseudonormal (moderate diastolic dysfunction) and restrictive (severe diastolic dysfunction).It was considered abnormal relaxation when E / A < 1; restrictive pattern when E/A > 2 and pseudonormal pattern when E/A was > 1 and < 2 in association with E/e' > 10 17 .
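The two classifications described above (LV geometry from LVMI and RWT, and diastolic pattern from the E/A and E/e' ratios) can be expressed as a short decision rule; the sketch below uses the paper's stated cutoffs, with function and argument names of our own choosing.

```python
def lv_geometry(lvmi: float, rwt: float, male: bool) -> str:
    """Classify LV geometry from LVMI (g/m2) and relative wall thickness."""
    lvh = lvmi > (115 if male else 95)
    if lvh:
        return "concentric hypertrophy" if rwt > 0.42 else "eccentric hypertrophy"
    return "concentric remodeling" if rwt > 0.42 else "normal geometry"

def diastolic_pattern(e_a: float, e_over_e_prime: float) -> str:
    """Classify LV diastolic function from the E/A and E/e' ratios."""
    if e_a < 1:
        return "abnormal relaxation (mild dysfunction)"
    if e_a > 2:
        return "restrictive (severe dysfunction)"
    if e_over_e_prime > 10:
        return "pseudonormal (moderate dysfunction)"
    return "normal"

print(lv_geometry(lvmi=120, rwt=0.45, male=True))     # concentric hypertrophy
print(diastolic_pattern(e_a=1.4, e_over_e_prime=12))  # pseudonormal
```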
Demographic, clinical and laboratory data
Demographic and clinical data, including age, sex, history of smoking, comorbidities and duration of dialysis were obtained from detailed analysis of medical records and interviews with the patient and the attending physician, when necessary.Previous cardiovascular event was defined as a history of typical angina or myocardial infarction, ischemic or hemorrhagic CVA and congestive heart failure with functional class > II.
Before performing each Doppler echocardiogram, blood pressure was measured and anthropometric data and ratios were obtained (weight, height, BSA, body mass index), which were measured according to standard procedures and using suitable materials.The body mass index (BMI) was calculated by dividing weight (kg) by squared height (m), considering malnutrition when < 18.5 kg/m 2 .The BSA was obtained using the formula of Dubois and Dubois 18 .
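As a worked example of the anthropometric calculations, a minimal sketch follows; the Du Bois coefficients are the commonly cited ones, since the text cites the formula without restating it.

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI (kg/m2); values below 18.5 indicate malnutrition in this study."""
    return weight_kg / height_m ** 2

def bsa_dubois(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m2) by the Du Bois and Du Bois formula (assumed constants)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

print(round(body_mass_index(70, 1.70), 1))  # 24.2 kg/m2
print(round(bsa_dubois(70, 170), 2))        # about 1.81 m2
```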
All biochemical measurements were performed by a single laboratory, located in CENEFRON, and data were collected from patients' charts.
Inclusion criteria were patients aged 18 years and older, undergoing hemodialysis for at least three months.Exclusion criteria were: recent history (less than six months) of acute myocardial infarction (AMI), percutaneous or surgical revascularization, unstable angina or cerebrovascular accident (CVA), decompensated congestive heart failure (CHF); severe valvular disease; pulmonary hypertension, blood pressure > 160/110 mmHg, uncontrolled atrial fibrillation or complex ventricular arrhythmia (nonsustained ventricular tachycardia, frequent and polymorphic extrasystoles, in pairs and salvoes), uncontrolled blood sugar levels, malignancies, active infection; irregular dialysis regimen; incapacity to obtain informed consent from the patient and inadequate echocardiographic window.
Patients were clinically evaluated and underwent a Doppler echocardiography during the period of February to September 2009, with an interval < 30 days between the two procedures.Subsequently, they were regularly followed until May 2011 or until the occurrence of outcome.
The project was approved by the CENEFRON Research and Ethics Committee in Research of Universidade Federal do Maranhão.We obtained a signed Free and Informed Consent Form (FICF) from all patients included in the study.
Doppler echocardiogram
The echocardiograms were performed on echocardiography equipment, model Vivid 3 Cardiovascular Ultrasound System (GE Healthcare, General Electric Company, USA) with a 3-7 mHz transducer and resources to obtain M-mode, two-dimensional and Doppler echocardiography (pulsed continuous, and tissue).The examinations were performed in the interdialytic period, within 24 hours after the dialysis session by a single medical professional, trained and skilled in echocardiography, with patients at rest and in left lateral decubitus position.Echocardiographic measurements followed the recommendations of the American Society of Echocardiography [9][10][11][12] and, for each variable, at least three cycles were analyzed.
The assessment of LV geometry was obtained by the two-dimensional image, with the following variables: end-diastolic septal thickness (EDST), end-diastolic posterior wall thickness (PWT), left ventricular end-diastolic diameter (LVEDD) and left ventricular end-systolic diameter (LVESD).
The left ventricular mass (LVM) was calculated using the formula proposed by Devereux et al. 13 and then indexed to body surface area (BSA) to obtain the left ventricular mass index (LVMI = LVM/BSA). The relative wall thickness (RWT) was obtained by the formula (EDST + PWT)/LVEDD, and the left ventricular ejection fraction (LVEF) was calculated by the method described by Teichholz et al. 14 .
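The cited formulas are not written out in the text; the sketch below implements the commonly used versions of them (Teichholz volumes, the ASE-corrected Devereux mass formula and the derived indices), so the exact coefficients are our assumption rather than a restatement of references 13 and 14.

```python
def teichholz_volume(d_cm: float) -> float:
    """LV volume (mL) from a single linear dimension, Teichholz formula (assumed form)."""
    return 7.0 / (2.4 + d_cm) * d_cm ** 3

def ejection_fraction(lvedd_cm: float, lvesd_cm: float) -> float:
    """LVEF (%) from end-diastolic and end-systolic diameters."""
    edv, esv = teichholz_volume(lvedd_cm), teichholz_volume(lvesd_cm)
    return 100.0 * (edv - esv) / edv

def fractional_shortening(lvedd_cm: float, lvesd_cm: float) -> float:
    """Systolic fractional shortening (%) = (LVEDD - LVESD) / LVEDD."""
    return 100.0 * (lvedd_cm - lvesd_cm) / lvedd_cm

def lv_mass_devereux(lvedd_cm: float, pwt_cm: float, edst_cm: float) -> float:
    """LV mass (g), ASE-corrected Devereux formula (assumed coefficients)."""
    return 0.8 * 1.04 * ((lvedd_cm + pwt_cm + edst_cm) ** 3 - lvedd_cm ** 3) + 0.6

def relative_wall_thickness(edst_cm: float, pwt_cm: float, lvedd_cm: float) -> float:
    """RWT = (EDST + PWT) / LVEDD."""
    return (edst_cm + pwt_cm) / lvedd_cm

# Example: LVMI is the mass indexed to body surface area (illustrative values)
lvm = lv_mass_devereux(lvedd_cm=5.2, pwt_cm=1.1, edst_cm=1.2)
lvmi = lvm / 1.8  # BSA in m2
print(round(lvmi, 1), "g/m2")
```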
Outcomes
The primary endpoint or cardiovascular outcome included fatal and nonfatal cardiovascular events.Cardiovascular events were defined by angina with coronary stenosis > 50% at coronary angiography, nonfatal AMI, myocardial revascularization procedure, nonfatal ischemic or hemorrhagic CVA, CHF requiring hospitalization and death from cardiovascular causes (including sudden death, AMI and CVA).The secondary outcomes included overall mortality.
Outcomes were obtained from monthly review of medical documentation, including medical records and death certificates, as well as communication with the physician and patient's relatives.Patients who underwent kidney transplant or who switched dialysis modality were censored in the study.
Statistical analysis
Statistical analyses were performed using the Statistical Package for the Social Sciences - SPSS ® 17.0 (SPSS Inc, USA). Quantitative variables were expressed as mean with standard deviation or as median, and categorical variables as percentages.
For comparison of proportions between groups with and without the outcome, we used the Chi-square test, and for comparison of quantitative variables, Student's t test for independent samples. To estimate the hazard ratios (HR), we performed univariate analysis using the Cox proportional hazards model, and then variables with p < 0.10 were included in the multivariate analysis using the same model. Survival curves were constructed using the Kaplan-Meier method and the log-rank test was used to compare survival curves in univariate analysis. The significance level was defined as p < 0.05.
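The survival analyses described above were run in SPSS; an equivalent sketch in Python using the lifelines package is shown below, on entirely hypothetical data, to illustrate the Cox model, the Kaplan-Meier curve and the log-rank comparison.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time in months, event = fatal/nonfatal CV event
df = pd.DataFrame({
    "time":      [12, 19, 24,  8, 22, 15, 20,  6, 18, 25],
    "event":     [ 0,  1,  0,  1,  0,  1,  0,  1,  0,  0],
    "diabetes":  [ 0,  1,  0,  1,  1,  0,  0,  1,  0,  1],
    "prior_cvd": [ 0,  1,  1,  0,  0,  1,  0,  1,  0,  0],
})

# Cox proportional hazards model (hazard ratio = exp(coef))
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()

# Kaplan-Meier curve and log-rank test by prior CVD status
grp = df["prior_cvd"] == 1
km = KaplanMeierFitter()
km.fit(df.loc[grp, "time"], df.loc[grp, "event"], label="prior CVD")
res = logrank_test(df.loc[grp, "time"], df.loc[~grp, "time"],
                   df.loc[grp, "event"], df.loc[~grp, "event"])
print("log-rank p =", res.p_value)
```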
Results
The study population consisted of 31 (51.7%) men and 29 (48.3%) women, with a mean age of 49.2 ± 13.7 years, ranging from 22 to 76 years. The demographic, clinical, biochemical and echocardiographic characteristics of the population are listed in Table 1.
The cause of CKD was attributed to hypertensive nephrosclerosis in 30% of cases, to chronic glomerulonephritis in 30%, to diabetic nephropathy in 16.7%, to polycystic kidney disease in 5%, to chronic pyelonephritis in 1.7% and to other diseases in 16.7% of cases.
Among the individuals included in the study, 25% had a previous diagnosis of CVD: three patients diagnosed with CHF, six with a history of typical angina or acute myocardial infarction and six with a history of CVA.
The mean follow-up was 19.6 ± 6 months and during this period, there were nine deaths and five non-fatal cardiovascular events.CVD accounted for 66.7% of all deaths (three AMI and three CVA), while the other three deaths were attributed to other causes (one due to hypovolemic shock and two due to septic shock).The non-fatal cardiovascular events were: a CVA, two hospitalizations for decompensated CHF and two episodes of angina with coronary stenosis > 50% at coronary angiography.
The event-free survival rates at one and two years were 91.5% and 79.4%, respectively, for cardiovascular events, 96.5% and 88.5% for cardiovascular mortality, and 96.5% and 83% for overall mortality. The rate of survival free of cardiovascular events during follow-up was 87.2% and 55% in the groups without and with a history of prior CVD, respectively (p = 0.007). While the rate of survival free of cardiovascular events was 66.4% in patients with moderate to severe diastolic dysfunction, in the group with abnormal relaxation or normal diastolic function it was 87.3% (p = 0.075). Figure 3 shows the survival curves free of the cardiovascular outcome according to the presence or absence of moderate to severe diastolic dysfunction.
Discussion
This study showed survival rates free of overall mortality of 96.5% at one year and 83% at two years. These rates are comparable to those found by Silva et al. 19 , who found survival rates of 91% and 84% at one and two years, respectively, in a study carried out in Brazil. Data from dialysis censuses show survival rates of 88.6% at one year and 79.1% at two years in Europe 20 and 77.4% at one year and 63% at two years in the USA 1 . The characteristics of the patients included in this study could explain the differences found. We studied patients with more controlled mean blood pressure, which is uncommon in this population. Furthermore, the average EF was preserved in the subjects studied, characterizing a population of more stable patients from the point of view of LV systolic function.
CVD accounted for 66.7% of all deaths during follow-up. The proportion found is similar to those reported in studies on cardiovascular and overall mortality in hemodialysis patients 6,21 . Figures 1 and 2 show the survival curves free of cardiovascular mortality and of fatal and nonfatal cardiovascular events, respectively.
Table 2 shows the comparison of demographic, clinical, biochemical and echocardiographic characteristics between groups with and without cardiovascular events. Patients with the cardiovascular outcome had a higher prevalence of diabetes and previous cardiovascular events, higher LVESD and lower values of EF and systolic fractional shortening.
The univariate Cox model for cardiovascular events is shown in Table 3. At univariate analysis, diabetes mellitus, prior history of CVD, LVESD, EF and systolic fractional shortening were significantly associated with the cardiovascular outcome. Multivariate analysis included the variables diabetes mellitus, previous cardiovascular event, EF, moderate to severe diastolic dysfunction and E/e' ratio. The variables LVESD and systolic fractional shortening were not included in the regression model because they are correlated with EF. In the final regression model, prior diagnosis of CVD and moderate to severe diastolic dysfunction were shown to be independent risk factors for fatal and nonfatal cardiovascular events (Table 4).
Table 5 shows the univariate Cox model for cardiovascular mortality. Only a prior history of CVD was significantly associated with cardiovascular mortality in the univariate analysis. Multivariate analysis was not performed, because when using the variable selection criterion for the multivariate model (p < 0.1), only prior CVD and malnutrition would be entered in the final model.
However, multicenter studies, such as the HEMO and AURORA studies, showed rates of less than 50% of mortality due to heart disease in the dialysis population 22,23 . These results confirm that CVD remains the leading cause of morbidity and mortality in hemodialysis patients. Hemodialysis patients have a 10-20-fold higher risk of cardiovascular mortality compared with the general population 24 and this is due to the fact that these individuals are exposed to both traditional and non-traditional risk factors for cardiovascular complications 25 .
Univariate analysis suggested that traditional risk factors such as diabetes and previous history of CVD are related to a higher risk of cardiovascular events.On the other hand, age, smoking, and hypercholesterolemia were not risk factors for the occurrence of cardiovascular outcome.Although some studies showed a higher prevalence of traditional risk factors in the population with CKD than in the general population 26 , high rates of cardiovascular complications have not been fully explained by such risk factors 24,25 .Additionally, factors known to be non-traditional, such as anemia, malnutrition, inflammatory state, alterations in calcium-phosphorus product, among others, have been implicated as independent factors associated with cardiovascular complications in dialysis patients [27][28][29][30] .In the present study, however, these factors did not correlate with an increased risk of cardiovascular morbidity and mortality.
Evidence indicates that the LV systolic dysfunction parameters are independently associated with the occurrence of fatal and nonfatal cardiovascular events in patients on dialysis, with no difference in predictive power between the methods used to evaluate it 7 .In this study, we observed, at univariate analysis, that reductions in EF or systolic fractional shortening are related to increased risk of cardiovascular events.
LVH is a strong predictor of cardiovascular morbidity and mortality in hemodialysis patients 31. LVH was present in 85% of the patients in this study, a rate similar to those reported by other authors 6,32. Zoccali et al. 31 showed that an increase in LVMI in hemodialysis patients was an independent predictor of cardiovascular events, regardless of baseline LVM values. In this study, LVMI was not associated with cardiovascular morbidity and mortality. This result can be attributed to the more stable characteristics of the patients included in the sample and to the follow-up period of this study.
LAV is a potentially useful parameter to assess diastolic dysfunction and is related to the severity and duration of the dysfunction 5. In a study of a population on renal replacement therapy, Barberato et al. 33 showed that LAVI was a predictor of mortality. In this study, although LAVI was mildly elevated in both groups, the variable was not an independent predictor of cardiovascular events or of cardiovascular mortality. Possible explanations for the lack of association are: in anemic patients, this index does not reflect diastolic dysfunction 34; the evaluated patients were not part of a chronic kidney disease population with significant LV systolic function impairment; and the number of patients was insufficient to find statistically significant differences. In the multivariate analysis, a previous diagnosis of CVD remained a risk factor for fatal and nonfatal cardiovascular events. Zoccali et al. 7, in a study of 254 hemodialysis patients, Rakhit et al. 35, in a study that included 176 patients with CKD stages 4 and 5 on conservative treatment or dialysis, and Cheung et al. 22, in the HEMO multicenter study, described a similar result in the final model of the Cox multivariate analysis.
The present study demonstrated a high prevalence of diastolic dysfunction in patients with CKD on hemodialysis, corresponding to 83.6%, with 36.4% of the patients having moderate to severe diastolic dysfunction. Studies carried out in hemodialysis patients showed that the prevalence of diastolic dysfunction in this population can vary from 63.5% to 87%, depending on the criteria used for its definition and on the patients included in the sample [36][37][38]. Diastolic dysfunction is characterized by alterations in ventricular relaxation and compliance 5, and its high prevalence can be explained by the fact that CKD exposes the heart to pressure and volume overload and to associated hemodynamic factors that cause myocardial alterations 39. Myocardial fibrosis resulting from these processes is a major determinant of LV stiffness and elevated filling pressures, predisposing to the development of diastolic dysfunction 39.
The presence of moderate to severe diastolic dysfunction was the only independent echocardiographic predictor of cardiovascular events.A study that employed only parameters derived from mitral inflow and pulmonary venous flow for the categorization of diastolic function showed that diastolic dysfunction is an independent predictor of mortality in patients on renal replacement therapy by hemodialysis 40 .
In a recently published study that evaluated 129 hemodialysis patients and used conventional Doppler and tissue Doppler criteria to classify diastolic dysfunction, Barberato et al. 37 showed that overall mortality was significantly higher in patients with diastolic dysfunction with a pseudonormal pattern or restrictive flow, when compared to the group with diastolic dysfunction with abnormal relaxation or with normal diastolic function. In the present study, advanced diastolic dysfunction was shown to be an independent predictor of cardiovascular events.
The present study has limitations: the number of patients studied could influence the power of some variables, such as LAVI; and the duration of follow-up, of less than two years, may have influenced the lack of association between left ventricular function parameters and cardiovascular mortality. Its strengths are that the study is prospective and longitudinal, and that feasible techniques were used for the noninvasive analysis of diastolic function, allowing periodic analysis in patients who have high rates of cardiovascular events.
Conclusions
Hemodialysis patients have high rates of cardiovascular morbidity and mortality. The presence of diabetes, a previous history of CVD and a lower EF are factors potentially related to cardiovascular events. Moderate to severe diastolic dysfunction was an independent risk factor for cardiovascular events, and although further studies are necessary to validate this finding, it is recommended that the evaluation of diastolic function, through pulsed Doppler and tissue Doppler parameters, be included in the assessment of patients undergoing hemodialysis. This measure will enable the early detection of individuals at risk in order to reduce morbidity and mortality.
Figure 1 - Curve of survival free of cardiovascular mortality
Figure 2 - Curve of survival free of fatal and nonfatal cardiovascular events
Figure 3 - Curves of survival free of cardiovascular events according to the presence of moderate to severe diastolic dysfunction
Table 1 - Demographic, clinical, biochemical and Doppler echocardiographic characteristics
The table data are expressed as mean ± standard deviation, median (range) or percentage. HBP: high blood pressure; DM: diabetes mellitus; SBP: systolic blood pressure; DBP: diastolic blood pressure; BMI: body mass index; LVEDD: left ventricular end-diastolic diameter; EF: ejection fraction; LVMI: left ventricular mass index; LAVi: left atrial volume index; EDS: end-diastolic thickness of the interventricular septum; EDPW: end-diastolic thickness of the posterior wall of the left ventricle; RWT: relative wall thickness; e': early diastolic velocity of the mitral annulus; E: early diastolic transmitral velocity; A: late diastolic transmitral velocity.
Table 2 -Comparison of demographic, clinical, biochemical and echocardiographic characteristics according to the presence of cardiovascular outcome
The table data are expressed as mean ± standard deviation, median (range) or percentage. Percentages were compared by the chi-square test and means were compared by Student's t test.
The SLOGERT Framework for Automated Log Knowledge Graph Construction
Log files are a vital source of information for keeping systems running and healthy. However, analyzing raw log data, i.e., textual records of system events, typically involves tedious searching for and inspecting clues, as well as tracing and correlating them across log sources. Existing log management solutions ease this process with efficient data collection, storage, and normalization mechanisms, but identifying and linking entities across log sources and enriching them with background knowledge is largely an unresolved challenge. To facilitate a knowledge-based approach to log analysis, this paper introduces SLOGERT, a flexible framework and workflow for automated construction of knowledge graphs from arbitrary raw log messages. At its core, it automatically identifies rich RDF graph modelling patterns to represent types of events and extracted parameters that appear in a log stream. We present the workflow, the developed vocabularies for log integration, and our prototypical implementation. To demonstrate the viability of this approach, we conduct a performance analysis and illustrate its application on a large public log dataset in the security domain.
Introduction
Log analysis is a technique to deepen an understanding of an operational environment, pinpoint root causes, and identify behavioral patterns based on emitted event records.Nearly all software systems (operating systems, applications, network devices, etc.) produce their own time-sequenced log files to capture relevant events.These logs can be used, e.g., by system administrators, security analysts, and software developers to identify and diagnose problems and conduct investigations.Typical tasks include security monitoring and forensics [38,23], anomaly detection [16,11,28], compliance auditing [39,25], and error diagnosis [44,6].Log analysis is also a common issue more generally in other domains such as power systems security [35], predictive maintenance [40], workflow mining [3,4], and business/web intelligence [17,30].
To address these varied applications, numerous log management solutions have been developed that assist in the process of storing, indexing, and searching log data.However, investigations across multiple heterogeneous log sources with unknown content and message structures is a challenging and time-consuming task [41,22].It typically involves a combination of manual inspection and regular expressions to locate specific messages or patterns [34].
The need for a paradigm shift towards a more structured approach and uniform log representations has been highlighted in the literature for a long time [21,19,34], but although various standardization initiatives for event representation were launched (e.g., [18,29,9,8]), none of them has seen widespread adoption.As a result, log analysis requires the interpretation of many different types of events, expressed with different terminologies, and represented in a multitude of formats [29], particularly in large-scale systems composed of heterogeneous components.As a consequence, the analyst has to manually investigate and connect this information, which is time consuming, error prone and potentially leads to an incomplete picture.
In this paper, we tackle these challenges and propose Semantic LOG ExtRaction Templating (SLOGERT), a framework for automated Knowledge Graph (KG) construction from unstructured, heterogeneous, and (potentially) fragmented log sources, building on and extending initial ideas [12].The resulting KGs enable analysts to navigate and query an integrated, enriched view of the events and thereby facilitates a novel approach for log analysis.This opens up a wealth of new (log-structured) data sources for KG building.
Our main contributions are: (i) a novel paradigm for semantic log analytics that leverages knowledge-graphs to link and integrate heterogeneous log data; (ii) a framework to generate RDF from arbitrary unstructured log data through automatically generated extraction and graph modelling templates; (iii) a set of base mappings, extraction templates, and a high-level general conceptualization of the log domain derived from an existing standard as well as vocabularies for describing extraction templates; (iv) a prototypical implementation of the proposed approach, including detailed documentation to facilitate its reuse, and (v) an evaluation based on a realistic, multi-day log dataset [27].All referenced resources, including the developed vocabularies 3 , source code 4 , data, and examples, are available from the project website 5 .
The remainder of this paper is organized as follows: we introduce our log KG building approach in Section 2 and evaluate it in Section 3 by means of example use cases. We then contrast the approach against the state of practice (Section 4) and review various strands of related work (Section 5); finally, we conclude in Section 6 with an outlook on future work.
Building knowledge graphs from log files
In this section, we introduce the SLOGERT (Semantic LOG ExtRaction Templating) log KG generation framework and discuss its architecture, components, and their implementation.The associated workflow, illustrated in Figure 1, transforms and integrates arbitrary log files provided as input.It consists of two major phases: (1) template extraction, which results in an RDF pattern for each type of log message that appears in the sources; and (2) graph building, which -based on these patterns -transforms raw log data into RDF.In the Template Extraction phase, SLOGERT will automatically generate event templates from unstructured log data by identifying the different types of log messages and their variable parts (i.e., parameters) in the raw log messages.For this, we rely on a well-established log parser toolkit [46] that generates extraction templates, which at this stage do not provide any clues about the semantics of the log message or the contained parameters.To enrich these extraction templates with semantics, we next annotate the parameters (variable parts) according to their type as well as extract relevant keywords from the log messages, which are used to link each log template with relevant CEE [29] annotations.
We then use this information to associate each log extraction template with a corresponding RDF graph modelling template (represented as Reasonable Ontology Templates (OTTRs)).The resulting graph modelling templates can be annotated, adapted, extended and reused, i.e., they only have to be generated (and optionally extended) once from raw log data in which unknown log events appear.
In the Graph Building phase, SLOGERT then parses each line in a log file and applies the matching extraction and RDF modelling templates to transform them into RDF. Thereby, we generate entities from textual parameters in the log stream and represent them in our log vocabulary. Combining the generated RDF from multiple, potentially heterogeneous log files and log sources results in a single integrated log KG. This graph can contextualize the log data by linking it to existing background knowledge - such as internal information on assets and users or external information, e.g., on software, services, threats etc. Finally, analysts can explore, query, analyze, and visualize the resulting log KG seamlessly across log sources.
Fig. 2: SLOGERT components and processing of a single example log line
SLOGERT Components
Following this high-level outline of the SLOGERT workflow, this section describes each component in more detail.For a dynamic illustration of the overall process by way of an example log line, cf. Figure 2.
Phase 1: Template Extraction
Template & Parameter Extraction (A1), i.e., the first step in the process from raw log lines to RDF, relies on LogPAI 7 [46], a log parsing toolkit that identifies constant strings and variable values in the free-text message content.This step results in two files, i.e., (i) a list of log templates discovered in the log file, each including markings of the position of variable parts (parameters), and (ii) the actual instance content of the logs, with each log line linked to one of the log template ids, and the extracted instance parameters as an ordered list.This process is fully automated and applicable to any log source, but depending on the structure of the log messages, it may not necessarily result in clearly separated parameters.As an example, consider that a user name next to an IP address will be identified as a single string parameter, as they usually change together in each log line.To achieve better results, LogPAI therefore accepts regular expression specifications of patterns that should be extracted as parameters, if detected.We take advantage of this capability by defining general regex patterns for common elements and including them in the configuration.At the end of this stage, we have extraction templates and the associated extracted instance data, but their semantic meaning is yet undefined.
Semantic Annotation (A2) takes the log templates and the instance data with the extracted parameters as input and (i) generates RDF rewriting templates that conform to an ontology and persists the templates in RDF for later reuse (A2-1), (ii) detects (where possible) the semantic types of the extracted parameters (A2-2), (iii) enriches the templates with extracted keywords (A2-3), and (iv) annotates the templates with CEE terms (A2-4). For the parameter type detection (A2-2), we first select a set of log lines for each template (default: 3) and then apply rule-based Named Entity Recognition (NER) techniques. Specifically, we use TokensRegex from Stanford CoreNLP [5] to define patterns over text (sequences of tokens) and map them to semantic objects. CoreNLP can detect words in combination with part-of-speech (POS) tags and named entity labels as part of the patterns.
Such token-based extraction works well for finding patterns in natural language texts, but log messages often do not follow the grammatical rules of typical natural language expressions and contain "unusual" entities such as URLs, identifiers, and configurations.For those cases, we additionally apply standard regex patterns on the complete message.For each identified parameter, we also define a type and a property from a log vocabulary to use for the detected entities.In case a parameter does not result in any matches, we mark it as unknown.
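A minimal sketch of such rule-based parameter typing is given below: each extracted parameter value is matched against regex rules that map it to a class and property of the log vocabulary, falling back to an "unknown" marker otherwise. The concrete class and property names in the rule table are assumptions made only for illustration; only the idea of a fallback to an unknown marker is taken from the text.

import re

# Each rule maps a value pattern to a (class, property) pair; these names are illustrative
TYPE_RULES = [
    (re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$"), ("log:IPAddress", "log:hasIPAddress")),
    (re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$"), ("log:EmailAddress", "log:hasEmailAddress")),
    (re.compile(r"^https?://\S+$"), ("log:URL", "log:hasURL")),
]

def classify(value: str):
    """Return a (class, property) pair for a parameter value, or mark it as unknown."""
    for pattern, typing in TYPE_RULES:
        if pattern.match(value):
            return typing
    return ("log:Unknown", "log:parameter")     # unmatched parameters are marked as unknown

for param in ["10.35.33.111", "daryl@mail.cup.com", "something-odd"]:
    print(param, "->", classify(param))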
In our prototype, we collect all parameter extraction patterns in a YAML configuration file and model a set of generic patterns that cover various applications, including the illustrative use cases in Section 3. These patterns are reusable across heterogeneous log sources and can be easily extended, e.g., with existing regex log patterns such as Grok 8. For the semantic representation necessary to allow for a consistent representation over heterogeneous log files, we followed the Ontology 101 methodology [33], extended a prior log vocabulary [26] and mapped it to the Common Event Expression (CEE) [29] taxonomy. Furthermore, we persist our ontology under a W3ID namespace (i.e., https://w3id.org/sepses/ns/log#) and use Widoco [14] for the ontology documentation.
Our vocabulary core represents log events (log:Event) with a set of fields (sub-properties of the log:hasParameter object property and log:parameter datatype properties). Each log event originates from a specific host (log:hasSourceHost) and exists in a specific log source (log:hasLogSource, e.g., an FTP log file). The underlying source type (e.g., ftp) is expressed by log:SourceType, and the log format is represented as log:Format (e.g., syslog). Furthermore, a log event template is tagged with its underlying action (e.g., login, access), domain (e.g., app, device, host), object (e.g., email, app), service (e.g., auth, audit), status (e.g., failure, error), and a subject (e.g., user) based on the CEE specification 9.
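To make the shape of such event descriptions concrete, the sketch below builds a single event instance with rdflib, reusing the class and property names quoted above; the instance IRIs and the log:timestamp property are assumptions added only for this example.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

LOG = Namespace("https://w3id.org/sepses/ns/log#")

g = Graph()
g.bind("log", LOG)

event = URIRef("http://example.org/event/1")                # hypothetical event IRI
host = URIRef("http://example.org/host/mail.cup.com")       # hypothetical source host
logfile = URIRef("http://example.org/logsource/auth.log")   # hypothetical log source

g.add((event, RDF.type, LOG.Event))
g.add((event, LOG.hasSourceHost, host))
g.add((event, LOG.hasLogSource, logfile))
g.add((event, LOG.timestamp, Literal("2020-03-04T18:31:05", datatype=XSD.dateTime)))  # assumed property

print(g.serialize(format="turtle"))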
Once we have identified extraction templates for events and parameters, we can generate corresponding RDF generation templates for each of them.This step expresses the patterns that determine how KGs are built from the log data as reusable OTTR [1] templates, i.e., in a language for ontology modeling patterns.As all the generated templates are reusable and should not have to be regenerated for each individual instance of a log file, we persist them in RDF with their associated hash (based on the static parts of the log messages) as identifier.Finally, as a prerequisite to generate the actual KG from these OTTR templates, we transform all log line instances of the input into the stOTTR format, a custom file format similar to Turtle, which references the generated OTTR templates.
Phase 2: Graph Building
In this step, we generate a KG based on the OTTR templates and stOTTR instance files generated in the extraction component.
RDFization (A3)
For the conversion of OTTR templates and instance files, we rely on Lutra 10 , the reference implementation for the OTTR language.Thereby, we expand the log instance data into an RDF graph that conforms to the log vocabulary and contains the entities and log events of a single log file.
Background KG Building (A4) Linking log data to background knowledge through the use of appropriate identifiers is a key step that facilitates enrichment with both local context information and external knowledge.The former represents information that is created and maintained inside the organization and not intended for public release.Examples include, e.g., the network architecture, users, organizational structures, devices, servers, installed software, and documents.
This knowledge can either be maintained manually by knowledge engineers or automatically by importing, e.g., DHCP leases, user directories with metadata, or software asset information.The dynamic nature of such information (e.g., a user switches department, a computer is assigned a new IP address, software is uninstalled) necessitates a mechanism to capture temporal aspects.To this end, RDF-Star can be used to historize the contained knowledge.
The second category, external knowledge, links to any publicly available (RDF) data sources, such as IT product and service inventories, vulnerability databases, and threat intelligence information (e.g., collected in [24]).
9 https://cee.mitre.org/language/1.0-beta1/core-profile.html
10 https://ottr.xyz/#Lutra
Knowledge Graph Integration (A5) combines the KGs from the previously isolated log files and sources into a single, linked representation. Key concepts and identifiers in the computer log domain follow a standardized structure (e.g., IP and MAC addresses, URLs) and hence can be merged using the same vocabulary. In case external knowledge does not align with the generated graphs (e.g., entity identifiers differ), an additional mapping step has to be conducted before merging. Existing approaches, such as the Linked Data Integration Framework Silk 11, can be used for this purpose.
Use Cases & Performance
We illustrate the presented approach and its applicability to real-world log data based on a systematically generated, publicly available data set that was collected from testbeds over the course of six days [27].Furthermore, we report on the performance of the developed prototype (cf.Section 3.3).
Data Source
The AIT log dataset (V1.1) 12 contains six days of log data that was automatically generated in testbeds following a well-defined approach described in [27]. It is a rare example of a readily available realistic dataset that contains related log data from multiple systems in a network. In addition, information on the setup is provided, which can be used as background knowledge in our approach. As detailed information on the context of the scenario was not available, we complemented it, for demonstration purposes, with synthetic example background knowledge on the environment that the data was generated in.
Each of the web servers runs Debian and a set of installed services such as Apache2, PHP7, Exim4, Horde, and Suricata.Furthermore, the data includes logs from 11 Ubuntu hosts on which user behavior was simulated.On each web server, the collected log sources include Apache access and error logs, syscall logs from the Linux audit daemon, suricata logs, exim logs, auth logs, daemon logs, mail logs, syslogs, and user logs.The logs capture mostly normal user behavior; on the fifth day of the log collection (2020-03-04), however, two attacks were launched against each of the four web servers.In total, the data set amounts to 51.1 GB of raw log files.
Use Cases
In this scenario, we assume that activities have raised suspicion and an analyst wants to conduct a forensic analysis based on the available log data. We will illustrate how our proposed framework can assist in this process. To this end, we first processed all raw logs 13 with SLOGERT and stored them together with the background knowledge in a triple store 14. Overall, we collected 838.19MB of raw log files, resulting in 84,827,361 triples for this scenario.
?logEvent log:hasUser / log:user.name "daryl" .
FILTER (xsd:dateTime(?timestamp) > "2020-03-04T18:30:00"^^xsd:dateTime)
OPTIONAL {
  { SELECT ?templateId (GROUP_CONCAT(?anno; separator=',') AS ?annotations)
    WHERE { ?templateId a logex:LogEventTemplate ;
                        logex:hasAnnotation/rdfs:label ?anno }
    GROUP BY ?templateId }
}
} ORDER BY ?timestamp
Listing 1: SPARQL query to show activities of a user
Listing 1 demonstrates how an analyst can query the activities associated with a given username (i.e., daryl). The query illustrates the ability to access integrated log data and the flexibility of SPARQL as a query language for log data analytics.
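A query of this kind can also be issued programmatically. The short Python sketch below uses SPARQLWrapper against a hypothetical repository endpoint; the simplified graph pattern (including the log:timestamp triple) and the endpoint URL are assumptions, since the opening clauses of Listing 1 are not reproduced here.

from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX log: <https://w3id.org/sepses/ns/log#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?timestamp ?logEvent WHERE {
  ?logEvent log:hasUser/log:user.name "daryl" ;   # property path as in Listing 1
            log:timestamp ?timestamp .            # assumed property binding the timestamp
  FILTER (xsd:dateTime(?timestamp) > "2020-03-04T18:30:00"^^xsd:dateTime)
}
ORDER BY ?timestamp
"""

sparql = SPARQLWrapper("http://localhost:7200/repositories/slogert")  # assumed endpoint URL
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["timestamp"]["value"], row["logEvent"]["value"])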
Table 1 shows an excerpt of the query results, with the template name, timestamp of the event, host, type of log, and automatically extracted CEE [29] annotation labels. The template associated with each log event makes it possible to easily identify events of the same type; human-readable labels can optionally be assigned in the template library. As a simple illustrative example, the query makes use of only a small subset of the available extracted properties. To explore the context and increase an analyst's understanding of the situation in the course of an investigation, other extracted properties such as IP addresses, processes, commands, files, URLs, and email addresses can be added. These extracted entities establish links among log events and connect them to background knowledge, allowing the analyst to explore the log data from multiple perspectives in order to "connect the dots", e.g., in the context of attacks.
Fig. 3: Graph exploration: context information for User daryl
As an example, consider that the sequence of login failures in a short time period evident in Table 1 suggests a possible brute force attack, and the successful login shortly thereafter is alarming.To explore this further, the analyst can construct an enriched view on the available log events by visualizing the data (e.g., using GraphDB) and interactively following links of interest.The graph structure makes it possible to navigate the contextualized log data with interlinked background knowledge that otherwise is typically stored in external documentation or only exists in analysts' heads.In the example in Figure 3, the analyst started from the username daryl and navigated to the Person associated with the account; from there, she can obtain additional information about the person, such as the role in the company, contact information, assigned devices, and additional usernames (e.g., User page).In a similar manner, the analyst could integrate and explore external RDF data sources.We can also see how events are connected to the user in the selected time frame, and how the templates are annotated with detected CEE terms.
Performance
For each log source in the AIT log dataset - collected from four servers in a testbed environment - we ingest the raw log files 15 and execute all steps in the workflow according to Figure 2. As the graph generation is split into multiple processing steps, we measure the execution time for each phase (i.e., template extraction and graph building) on a single machine with a Ryzen 7 3700X processor (64GB RAM).
Table 2 shows the input log sources with their total file sizes over all four servers. Furthermore, it lists the sizes of the generated (intermediate) KGs in Turtle format (TTL), as well as in compressed format (HDT). At the end of the SLOGERT process, all intermediate graphs are merged into a single KG. In our illustrative scenario, we processed 838.19MB of raw log data in total and generated a KG from it that is 498.36MB in HDT format. Note that whereas the resulting TTL graph files were approx. four times the raw input size, the compressed HDT data, which can also easily be queried with SPARQL, is about 40% smaller than the original log file. The size of the generated graphs could further be reduced by (i) not including the full original raw input message (currently we keep it as message literal), and/or (ii) discarding unknown parameters, and/or (iii) extracting only the specific classes and properties necessary for a given set of analytic tasks.
In terms of processing time, graph building (P2) is the most time-consuming phase (approx. 168 min) 16. We see that run-time scales linearly with file size; it can easily be reduced by parallelizing the semantic annotation and graph generation phases. Our prototype converts the log events into batches - the number of log lines per batch (200k in our experiments) can be configured. Although we executed the process on a single machine in sequence, each batch file could easily be processed in parallel. Taking the audit log as example, all audit log event lines were split into 200k batches (13 files), taking approx. 17 min each to build the KG.
State of the Practice
Commercially available Log Management Systems (LMSs) - such as Splunk 17, Graylog 18, or Logstash 19 - prioritize aggregation, normalization, and storage over integration, contextualization, linking, enrichment, and rich querying capabilities. They are typically designed to allow scalable retention of large log data and focus on reporting and alerting based on relatively simple rules. Table 3 compares and contrasts SLOGERT with such existing LMSs. Whereas some tasks, such as log collection from raw logs, can rely on available standard mechanisms, the approach differs fundamentally in terms of event and parameter detection, normalization, background knowledge linking, and the way that insights can be gained.
In particular, SLOGERT (i) automatically classifies events and assigns types from a taxonomy based on the static parts of the messages and (ii) identifies and annotates variable parts of the messages.Although existing LMSs often also include a predefined and limited set of extractors to identify relevant patterns (e.g., IP address, date/time, protocol), they do not capture entities, their relationships, and the nature of these relationships in a graph structure.Furthermore, they are limited in their representational flexibility by the structure of the underlying, typically relational, storage.
The graph-based approach makes it possible to link assets to concepts and instances defined in background knowledge in order to enrich log events with additional internal or external knowledge.For instance, multiple usernames can be linked to the same person they belong to or software assets can be linked to public sources such as the Cyber-security Knowledge Graph (CSKG) [24] to include information about their vulnerabilities.
Finally, whereas existing LMSs typically provide relatively static dashboards and reports, SLOGERT opens up possibilities for exploration through graph queries and visual navigation, providing a new flexible perspective on events, their context, and their relationships.
Overall, no comparable graph-based semantic systems exist - current state-of-the-art message-centric log management systems focus on aggregation, management, storage, and manual step-by-step textual search and interpretation with implicit background knowledge. SLOGERT makes it possible to automatically contextualize, link and interpret log events. It complements, but does not replace, established log management solutions with additional techniques to extract, enrich, and explore log event data. Specifically, our current focus is facilitating deeper inspection of subsets of log data from selected log sources, e.g., in a relevant time frame. To this end, we have developed flexible mechanisms for fully automated ad-hoc extraction and integration.
Related Work
Log parsing and extraction Logs, i.e., records of the events occurring within an organization's systems and networks [21], are composed of log entries, each of which provides information related to a specific event that has occurred. These log entries typically consist of a structured part with fields such as a timestamp, severity level, etc., and an unstructured message field. Whereas conventions for the structured parts are somewhat standardized (e.g., in [2]), there is little uniformity in the content of the message field, despite numerous standardization attempts (e.g., IDMEF [18], CEE [29], CIM [9] and CADF [8]). Because log messages are produced from statements inserted into the code, they often do not follow typical natural language grammar and expression, but are shaped according to the code that generates them. Specifically, each log message can be divided into two parts: a constant part (i.e., fixed text that remains the same for every event instance) and a variable part that carries the runtime information of interest, such as parameter values (e.g., IP addresses and ports).
Traditional manual methods for analyzing such heterogeneous log data have become exceedingly labor-intensive and error-prone [15].Furthermore, the heavy reliance on regular expressions in log management results in complex configurations with customized rules that are difficult to maintain as systems evolve [15].These limitations of regex-based event extraction motivated the development of data-driven approaches for automated log parsing (e.g., [42]) that leverage historical log messages to train statistical models for event extraction.[15] provides a systematic evaluation of the state-of-the-art of such automated event extraction methods.We leverage these methods as the first step in our automated KG construction workflow.Specifically, our template extraction is based on the Log-PAI logparser toolkit [46], which provides implementations of various automated event extraction methods.
Log representation in Knowledge Graphs has attracted recent research interest because graph-based models provide a concise and intuitive abstraction for the log domain that can represent various types of relations flexibly.Therefore, a variety of approaches that apply graph-based models and graph-theoretical analysis techniques to log data have been proposed in the literature, covering applications such as query log analysis [10,45], network traffic and forensic analysis [43,7], and security [36].Whereas these contributions are focused on graph-theoretical metrics and methods, another stream of knowledge-graph-centric literature has emerged more recently.CyGraph [32], e.g., develops a property graph-based cybersecurity model and a domain-specific query language for expressing graph patterns of interest.It correlates intrusion alerts to vulnerability paths, but compared to our approach, it does not aim for semantic lifting of general log data.
In terms of semantic KGs, existing approaches have focused either on structured log data only [31], or on tasks such as entity [20] and relation [37] extraction in unstructured log data. Whereas some of the extraction methods introduced in this context are similar to our approach, their focus is less on log representation and more on cybersecurity information in general (e.g., textual descriptions of attacks).
Other contributions have focused on a conceptualization of the log domain and the development of appropriate vocabularies for log representation in KGs [13].Another recent, more narrowly focused approach [26] that does not cover general extraction introduces a vocabulary and architecture to collect, extract, and link heterogeneous low-level file access events from Linux and Windows event logs.Finally, [24] provides a continuously updated resource that links and integrates cybersecurity information, e.g., on vulnerabilities, weaknesses, and attack patterns, providing a useful linking target in the context of this log extraction framework.
Conclusions and Future Work
This paper introduced SLOGERT, a flexible framework and workflow for automated KG construction from unstructured log data.The proposed workflow can be applied to arbitrary log data with unstructured messages and consists of a template extraction and a graph building phase.Our prototype demonstrated the viability of the approach, particularly if the messages in the log sources do not require frequent relearning of the extraction templates.Configurability and extensibility were key design goals in the development of SLOGERT.For arbitrary log data with unstructured messages in a given log domain, the framework generates a keyword-annotated RDF representation.The demonstrated configuration covers standard concepts for various log sources relevant in a cybersecurity context, however, they can easily be adapted for different log domains.
An inherent limitation evident from our experiments was a sensitivity to the training data set during template extraction; specifically, entities cannot be properly identified if there is too little variation in the variable parts of a given log message. This limitation can be tackled through larger log collections, ideally through a community effort towards creating a shared library of extraction templates for standard log data sources 20. More broadly, we also envisage a community-based effort to develop mappings, extensions for specific log sources, and shared domain knowledge such as vulnerability information and threat intelligence.
Due to the widespread use of unstructured log data in numerous domains and the limitations of existing analytic processes, we expect strong adoption potential for SLOGERT, which in turn could also drive adoption of Semantic Web technologies in log analytics more generally.Furthermore, we also expect impulses for KG research and takeup by KG builders that need to integrate log data into their graphs.
In our own research, we will apply the proposed approach in the context of semantic security analytics 21 .Our immediate future work will focus on the integration into logging infrastructures, e.g., by supporting additional formats and protocols.Furthermore, we will focus on graph management for template evolution and incremental updating of log KGs.
Conceptually, our bottom-up extraction approach provides a foundation for future work on linking it to higher-level conceptualizations of specific log domains (e.g., based on DMTF's CIM [9] or the CADF [8] event model).Potentially, this can also provide a foundation for research into event abstraction, i.e., automatically transforming a sequence of individual log events into higher-level composite events or log-based anomaly detection, e.g., through combinations of rule-based methods and relational learning and KG embedding techniques.
Table 1: Query result for activities of a given user in the network (excerpt)
Table 2: Log sources and graph output. The run time for the following phases is measured in seconds: template extraction (P1) and graph building (P2)
Table 3: Comparison of SLOGERT with existing LMSs
Quantitative changes in DNA methylation induced by monochromatic light in barley regenerants obtained by androgenesis
Changes in DNA methylation are one of the best known mechanisms of epigenetic regulation of gene expression, which in the process of induced androgenesis is associated with reprogramming of haploid microspores development towards the formation of embryos, as a result of exposure of anthers in ears and then anthers culture in vitro to stress factors. The aim of the study was to test the hypothesis of whether the use of monochromatic light during induced androgenesis might be associated with epigenetic phenomena. The experiments were carried out on DH plants of spring barley (Hordeum vulgare L.) obtained by androgenesis modified by monochromatic light: blue, green and red. A quantitative evaluation of the effect of light on the degree of DNA methylation was performed using RP-HPLC for the comparison of regenerants obtained under standard, control conditions (darkness) with those obtained with light usage. The differences in the amount of methylated cytidine in comparison to the control were: 0.40%, 0.16% and -0.55%, for blue, red and green light, respectively. The level of global genomic DNA methylation from control plants was in the range 21.32-21.52%. Methylation changes in response to monochromatic light used during callus formation in anthers culture, determined by RP-HPLC, are significant although small.
Introduction
DNA methylation is a modification of deoxyribonucleic acid, which occurs during the S phase of the cell cycle under the influence of DNA methyltransferases specific for the nitrogenous bases cytosine or adenine, which are part of the nucleotides deoxycytidine (dC) and deoxyadenosine (dA). It involves the attachment of methyl groups (-CH3) mainly to the 5th carbon atom in the pyrimidine ring and, less frequently, the methylation of NH2+ groups located either at the 4th carbon atom of cytosine or at the 6th carbon atom of adenine (a purine base). Methylation occurring in CpG dinucleotides and CpXpG trinucleotides, where X is A, T or C, is defined as symmetrical methylation, while that in CpXpX sequences is defined as unsymmetrical.
The methyl group donor is S-adenosylmethionine, and the reaction products are 5-methyl-2'-deoxycytidine (5mdC), the main product of DNA methylation, as well as N4-methyl-deoxycytidine and N6-methyl-deoxyadenosine. The amount of 5mdC depends both on the enzymatically catalysed methylation and on demethylation, which may be passive or active. Passive demethylation occurs during replication and is not accompanied by methyltransferase-1 (DNMT1), thus resulting in a lack of conservative methylation (Guz et al., 2010). In contrast, active demethylation is catalyzed by bifunctional DNA glycosylases from the D family during the repair of replication errors (Li et al., 2018). It is also associated with modifications of histones and, most likely, non-coding RNA (Parrilla-Doblas et al., 2019; Zhang et al., 2012). In mammals, the share of 5mdC in total dC content is 3-4%, which is 0.75-1% in relation to all nucleosides (Guz et al., 2010). It is estimated that 70-80% of CpG dinucleotides in the entire mammalian genome are methylated (Law and Jacobsen, 2010). On the other hand, the amount of methylated cytidine in the plant genome is 20-30% (Finnegan et al., 1998). In the Arabidopsis thaliana genome, CpG, CpXpG and CpXpX sites are methylated at approximately 24%, 6.7% and 1.7%, respectively (Law and Jacobsen, 2010). The degree of DNA methylation changes with tissue age, wherein methylation not related to CpG islands is characteristic of differentiating cells (Peredo et al., 2006).
The androgenesis in vitro of haploid microspores allows for reprogramming their development from the gametophyte pathway, leading to the formation of functional pollen grains, to the sporophyte pathway, leading to embryo formation. This process is accompanied by changes in genomic DNA methylation. Although in mammals most aspects of the epigenetic regulation of both embryo and cancer cell development are quite well understood (Guz et al., 2010), the mechanisms by which plant DNA methylation patterns are reset remain predominantly unexplored. Changes in patterns of genomic DNA methylation in plant tissues may appear as a response to environmental stresses, associated with the redox signalling system (Bednarek and Orłowska, 2020). It has been shown that these changes are not only specific to individual species, but also differ within genotypes (Karan et al., 2012). Among others, plant regeneration by tissue culture in vitro also induces variation in the level of DNA methylation, depending on the plant regeneration system, the genotype of the donor plants, the explant, the nutrient medium, as well as the duration of the culture. Light is also among the factors likely to affect changes in DNA methylation. It was found that the total amount of methylated cytidine increased in the DNA of barley DH regenerants obtained via androgenesis or somatic embryogenesis in comparison with the donor plant, whereas in triticale it decreased (Machczyńska et al., 2014; Orłowska et al., 2016). However, if one or more subsequent generative propagation cycles are carried out, the amount of 5mdC stabilizes in the successive plants thus obtained. Quantitative changes in methylated DNA are accompanied by changes in methylation patterns (Bednarek and Orłowska, 2020; Machczyńska et al., 2014; Niedziela, 2018; Orłowska et al., 2016).
Quantitative determination of genomic DNA methylation can be carried out by RP-HPLC (Reversed Phase-High Performance Liquid Chromatography). This method was used for analyses of genomic DNA of cereal plants in response to abiotic stress (Niedziela, 2018), as well as in studies of barley and triticale in vitro regeneration (Orłowska et al., 2016; Machczyńska et al., 2014). The purpose of the present work was to evaluate whether a modification of the androgenesis process in anther cultures in vitro, consisting of the use of monochromatic light at the callus induction stage, affects the level of methylation of genomic DNA in regenerants.
Materials and Methods
DNA from leaves of spring barley regenerants (Hordeum vulgare L.), genotype 2dh/8, was used for the experiment. Regenerants were obtained in anther culture in vitro, conducted according to a modified protocol using different monochromatic lighting at the stage of callus formation on induction medium (usually run in darkness). Regenerant plants were grown in dedicated trays with wells for single plants, in a phytotron at 18/14 °C and a 16/8 h day/night photoperiod. Leaves for DNA isolation were taken from plants at the tillering stage. The ploidy of the regenerants' genome was determined using a CyFlow Ploidy Analyzer flow cytometer (Sysmex Polska Sp. z o.o.). This stage of the experiment and its detailed results are the subject of a separate publication (Siedlarz et al., 2020). Here, the subject of research was the DNA of regenerants obtained as a result of androgenesis in vitro modified at the time of callus induction by monochromatic LED light: blue 454.63 nm, green 525.95 nm and red 630.84 nm. The control group consisted of DNA of regenerants obtained under standard conditions, i.e. in the dark during callus induction (Orłowska et al., 2016; Bednarek and Orłowska, 2020; Siedlarz et al., 2020). DNA was isolated from pairs of regenerants obtained from the anther of the same ear: 10 under control conditions and 10 under light-modified conditions. A total of 60 DNA samples were isolated from leaves of plants in the tillering phase using the DNeasy Mini Prep kit (Qiagen GmbH, Hilden, Germany), according to the manufacturer's methodology. DNA concentration and purity were determined using a UV-Vis NanoDrop 2000c/2000 spectrophotometer (Thermo Scientific, USA). The quality of the samples was verified electrophoretically in 1.4% agarose gel.
Results and Discussion
The experiment was conducted to check whether modification of in vitro androgenesis by the use of monochromatic light at the callus induction stage affects the level of genomic DNA methylation of regenerants. DNA isolated from 60 plants, more specifically 30 pairs of plants obtained under control and light-modified conditions, was subjected to quantitative RP-HPLC analysis. Since the donor plants were characterized by a constant level of DNA methylation, as they came from generative reproduction (Orłowska et al., 2016), we assume that the modifications of methylation observed in this experiment were induced by light.
There is no information available on how many generative cycles are needed to stabilize/eliminate (if possible at all) the effects of tissue cultures on DNA methylation. Nevertheless, it has been shown that in both the barley and the triticale genome the level of methylation stabilizes after one or two cycles (Machczyńska et al., 2014; Orłowska et al., 2016). Thus, to stabilize DNA methylation changes induced in the regenerants' genomic DNA during in vitro tissue culture, the use of regenerants as donor plants should be considered. The total range of methylation changes in the barley genome determined in the present experiment was small and ranged from 21.12 to 21.87%. Also, Orłowska et al. (Fiuk et al., 2010). In the present experiment, the total amount of 5mdC in the DNA of control plants averaged 21.4% and did not differentiate the tested regenerants obtained under unmodified light conditions (Table 1). However, in the group of regenerants obtained using light at the callus induction stage, differences were observed: under blue light conditions the amount of 5mdC was the smallest, significantly different from the value registered for the parallel control plants, while for green light it was the largest and also different from the control group (Table 1). The results obtained for the group exposed to red light were ambiguous. They did not differ significantly from the value obtained for the group of plants under blue light. However, they did not differ from the control plants either. The difference in the amount of methylated cytidine between control plants and plants regenerated using light was on average +0.40%, +0.16% and -0.55% for blue, red and green light, respectively (Fig. 1). In regenerants obtained under conditions modified by red light, differences from the control ranged from -0.86% to +0.92%. Those results were ambiguous and require further investigation. The changes in the global level of cytidine methylation in cereal plants described in the literature, induced by generative reproduction, mainly concerned comparisons between donor plants and regenerants under conditions standard for particular laboratories, or between the applied plant regeneration systems. It is interesting that within the species Hordeum vulgare L., in vitro cultivation of wild forms caused changes in both the level and the patterns of cytosine methylation in comparison to the donor plant, in which a higher level of DNA methylation was found (Li et al., 2007). An inverse relationship was observed for the barley cultivar Scarlett, since the lowest average value of global methylation was observed in donor plants, while the average value of DNA methylation of the regenerants and their offspring was 20% and 20.13%, respectively (Orłowska et al., 2016). Higher levels of methylation were observed in triticale DNA for plants obtained in cultures of spontaneously released microspores, compared to regenerants from the culture of immature zygotic embryos. Differences between donor plants and regenerants have also been shown: the average methylation of the genome of the donor plant was 25.4%, whereas that of the regenerants was 24.1%. The first generative progeny was characterized by 23.6% DNA methylation, the second by 23.8%, and the third by 23.9% (Machczyńska et al., 2014). Changes in the level of genomic DNA methylation are specific for each species, and also depend on the in vitro culture conditions.
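For clarity on how global methylation percentages of this kind are typically derived from RP-HPLC data, a small worked example follows: 5mdC is expressed as a percentage of total cytidine (5mdC + dC). The peak-area numbers used below are invented purely for illustration and are not measurements from this study.

def global_methylation_percent(area_5mdc: float, area_dc: float) -> float:
    """Percentage of methylated cytidine relative to total cytidine (5mdC + dC)."""
    return 100.0 * area_5mdc / (area_5mdc + area_dc)

# Hypothetical integrated peak areas for one DNA sample
area_5mdc = 1.07e6
area_dc = 3.93e6
print(f"global 5mdC = {global_methylation_percent(area_5mdc, area_dc):.2f}%")  # prints about 21.40%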
Exposure to UV-B light of seedlings of an annual plant, big sagebrush, obtained from apical meristem shoots induced a significant reduction in the total level of DNA methylation (Pandey and Pandey-Rai, 2015). The presented results also indicate that the type of monochromatic light used during callus induction in anther in vitro culture causes changes in the level of genomic DNA methylation, in a manner depending on the wavelength of the light used, i.e. the color of the monochromatic light.
Rudiments of Dual Feynman Rules for Yang-Mills Monopoles in Loop Space
Dual Feynman rules for Dirac monopoles in Yang-Mills fields are obtained by the Wu-Yang (1976) criterion in which dynamics result as a consequence of the constraint defining the monopole as a topological obstruction in the field. The usual path-integral approach is adopted, but using loop space variables of the type introduced by Polyakov (1980). An anti-symmetric tensor potential $L_{\mu\nu}[\xi|s]$ appears as the Lagrange multiplier for the Wu-Yang constraint, which has to be gauge-fixed because of the ``magnetic'' $\widetilde{U}$-symmetry of the theory. Two sets of ghosts are thus introduced, which subsequently integrate out and decouple. The generating functional is then calculated to order $g^0$ and expanded in a series in $\widetilde{g}$. It is shown to be expressible in terms of a local ``dual potential'' $\widetilde{A}_\mu (x)$ found earlier, which has the same propagator and the same interaction vertex with the monopole field as those of the ordinary Yang-Mills potential $A_\mu$ with a colour charge, thus indicating a certain degree of dual symmetry in the theory. For the abelian case the Feynman rules obtained here are the same as in QED to all orders in $g$, as expected by dual symmetry.
Introduction
It has long been known that monopoles in gauge theories acquire through their definition as topological obstructions in the gauge field an intrinsic interaction with the field. In fact, in an inspiring paper of 1976, Wu and Yang [1] first showed by a beautiful line of argument how the standard (dual) Lorentz equation for a classical point magnetic charge could be derived as a consequence of its definition as a monopole of the Maxwell field. Since electromagnetism is dual symmetric, it follows that the ordinary Lorentz equation for a point electric charge can also be derived by considering the latter as a monopole of the dual Maxwell field. Moreover, it can be seen that this approach for deriving the interactions of monopoles, which we shall henceforth refer to as the Wu-Yang criterion, is in principle not restricted alone to electromagnetism. Indeed, having been supplemented by some technical development necessary for its implementation, the method has since been generalized to monopole charges in nonabelian Yang-Mills theories [2,3,4], not only for classical point particles but also for Dirac particles, giving respectively the Wong and the Yang-Mills-Dirac equations or their respective generalized duals as the result. [5,6,7] All this work so far on the Wu-Yang criterion, however, has been restricted to the classical field level. The purpose of the present paper is to begin exploring the dynamics of nonabelian monopoles at the quantum field level as implied by the same Wu-Yang criterion. We shall start by attempting to derive some rudiments of the "dual Feynman rules" in this approach.
One purpose of this exercise is to compare the Feynman rules so derived for (colour) monopoles with those for (colour source) charges of the standard approach. Although it has recently been shown that nonabelian Yang-Mills theory possesses a generalized dual symmetry in which monopoles and sources play exact dual roles [8], so that the dynamics of (colour) charges derived using the Wu-Yang criterion when they are considered as monopoles of the field is the same as that of the usual Yang-Mills dynamics when these charges are considered as sources, this result is again known to hold so far only at the field equation level. On the other hand, the exciting fully quantum investigation program on duality initiated by Seiberg and Witten and extended by many others [9,10,11,12] applies at present strictly only to supersymmetric theories in a framework in which the Wu-Yang criterion plays no role, and is for these reasons not yet very helpful to the questions raised in the present paper. The crucial point is the existence of the dual potential which is guaranteed only by the equation of motion obtained by extremizing the action and thus need no longer hold in the quantum theory when the field variables move off-shell. It is therefore interesting to explore whether this generalized dual symmetry breaks down at the quantum field level and if so in what way. Furthermore, even if the presently known generalized dual symmetry is eventually seen to apply also at the quantum field level, as seems to us possible, we believe that our investigation here is still likely to prove useful in future for attacking the ultimate problem of both (colour) electric and magnetic charges interacting together with the Yang-Mills field.
Another purpose of this work is mainly of technical interest, namely to examine how Feynman integrals work in loop space. As is well-known, the loop space approach to gauge theory is attractive in that it gives in principle a gauge independent description in terms of physical observables, in contrast to the standard description in terms of the gauge potential A µ (x). A grave drawback of the loop space approach, however, is the high degree of redundancy of loop variables which necessitates the imposition on them of an infinite number of constraints to remove this redundancy, making thus the whole approach rather unwieldy. For the problem of nonabelian monopoles, on the other hand, it turns out that it pays for various reasons to work in loop space, and a set of useful tools has been developed for the purpose. [5,6,7] In fact, it was only by means of these loop space tools that the results quoted above on nonabelian monopoles at the classical field level have so far been derived. We are therefore keen to investigate how these tools apply to Feynman integrals at the quantum field level, the understanding of which, we think, may contribute towards the future utilization of the loop space technique as a whole.
That the definition of a charge as a topological obstruction in a field should imply already an interaction between the charge and the field is intuitively clear, because the presence of a charge at a point x in space means that the field around that point will have a certain topological configuration. When that point moves, therefore, the field around it will have to re-adapt itself so as to give the same topological configuration around the new point. Hence, it follows that there must be a coupling between the coordinates of the charge and the variables describing the field, or in other words in physical language, an "interaction" between the charge and the field.
The Wu-Yang criterion enframes the above intuitive assertion as follows. One starts with the free action of the field and the particle, which one may write symbolically as A^0 = A^0_F + A^0_M (1.1), where A^0_F depends on only the field variables and A^0_M on only the particle variables. If the variables are regarded as independent, then the field is completely decoupled from the particle. However, by specifying that the particle is a topological obstruction of the field, one has imposed a constraint on the system in the form of a condition relating the field variables to the particle variables. Hence, for example, if one extremizes the free action (1.1) subject to this constraint, one obtains not free equations any more but equations with interactions between the particle and the field. Indeed, it was in this way that the Wu-Yang criterion has been shown to lead to the Lorentz-Wong and Dirac-Yang-Mills equations for respectively the classical and Dirac charge. [1,5,7] For the quantum theory, the equations of motion will not be enough. One will need instead to calculate Feynman integrals over the field and particle variables with the exponential of the action (1.1) above as a weight factor. If the variables are regarded as independent and integrated freely with respect to one another, then we have again a free decoupled system, but since the particle and field variables are here related by the constraint specifying that the particle carries a monopole charge, the resulting Feynman integrals will involve interactions between the particle and the field. Our aim in this paper then is just to evaluate some such Feynman integrals to see what sort of interactions will emerge.
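Schematically, and with the explicit form of the constraint (stated as (1.10) below) and all normalizations suppressed, the procedure amounts to a constrained variational problem of the form

\[
\delta\!\left\{\,\mathcal{A}^0_F[\text{field}] \;+\; \mathcal{A}^0_M[\text{particle}] \;+\; \int \operatorname{Tr}\big(\lambda\,\mathcal{C}\big) \right\} = 0,
\qquad \mathcal{C}[\text{field},\text{particle}] = 0 ,
\]

where λ is a Lagrange multiplier and C = 0 stands for the topological condition identifying the particle as a monopole of the field; varying the field and particle variables then yields coupled rather than free equations. This schematic form is meant only to display the logic of the criterion; the explicit loop space expressions are given below.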
Let us now be specific and consider an su(2) Yang-Mills field with a Dirac particle carrying a (colour) magnetic charge. The free action in that case is: 1 for the field in terms of the gauge potential A µ (x) as variable, and: for the particle in terms of the wave function ψ(x) as variable.
In the presence of monopoles, however, A µ (x) has to be patched, which makes it rather clumsy to use in this problem. For this reason, it was found convenient in all previous work on the classical theory [5,6,7] to employ as field variable instead the Polyakov variable F µ [ξ|s] [13] defined as: for: where Φ[ξ] is the holonomy element for the loop parametrized by the function ξ of s for s = 0 → 2π with ξ(0) = ξ(2π) = P 0 , or in other words, maps of the circle into space-time beginning and ending at the fixed reference point P 0 , and δ µ (s) = δ/δξ µ (s) is the functional derivative with respect to ξ µ at s. In terms of F µ [ξ|s] as variable, the free action of the field now reads as: where: and where the integral is to be taken over all parametrized loops 2 and over all points s on each loop. The action (1.1), with A 0 F as given in (1.7) and A 0 M as given in (1.4), is subject to constraints on two counts. First, the variables F µ [ξ|s], as already noted, are highly redundant as all loop variables are and have to be constrained so as to remove this redundancy. Second, the stipulation that the particle represented by ψ(x) should correspond to a monopole of the field implies that ψ(x) must be related to the field variable F µ [ξ|s] by a topological condition representing this fact. The beauty of the loop space formalism is that both these constraints are contained in the single statement: where: is the loop space curvature with F µ [ξ|s] as connection, and J µν [ξ|s] is essentially just the (colour) magnetic current carried by ψ(x), only expressed in loop space terms, the explicit form of which will be given later but need not at present bother us. 3 That being the case, the Wu-Yang criterion then says that the dynamics of the monopole interacting with the field is already contained in the constraint (1.10). Indeed, it was by extremizing the 'free' action (1.1) under this constraint (1.10) that in our earlier work the (dual) Yang-Mills equations of motion for the monopole have been derived. To extend now the considerations to the quantum theory, we shall need to evaluate Feynman integrals over the variables F µ [ξ|s] and ψ(x), but subject again to the constraint (1.10). Thus, the partition function of the quantum theory would be of the form: (1.12) Equivalently, writing the δ-functions representing the constraint as Fourier integrals, we have: with: (1.14) Since basically the only functional integral we can do is the Gaussian, the standard procedure is to expand into a power series all terms of higher order in the exponent of the integrand and perform the integral power by power in the expansion. We shall follow here the same procedure. However, in contrast to the usual cases met with in quantum field theory, there are in the exponent of the integrand in (1.13) two terms of order higher than the quadratic, namely one coming from the commutator term of G µν [ξ|s] in (1.11) which is proportional to the Yang-Mills coupling g, and the other coming from J µν [ξ|s] which is proportional to the colour magnetic chargeg. The result of the expansion would thus be a double power series in g andg. In view of the fact that g andg are related by the Dirac quantization condition which means usually that if one is small then the other will be large, we can normally regard such a double series only as a formal and not as a perturbation expansion. Only in certain special circumstances can one see it leading possibly to an approximate perturbative method. 
For example, for gauge group SU(N), the Dirac condition reads as: with an additional factor N compared with the standard Dirac condition for electromagnetism. Thus if the effective gauge symmetry is continually enlarged so that N → ∞ as energy is increased as some believe it may, then in principle both g and g̃ can be asymptotically small. A case of perhaps more practical interest is quantum chromodynamics with N = 3 where for Q ranging from 3 to 100 GeV, phenomenological values quoted for α_s run from about 0.25 to 0.115 [16]. This corresponds to g, say, running from about 1/2 to 1/3 4 and implies by (1.15) that the dual coupling g̃ runs also in the same range, namely from 1/3 to 1/2. Thus, if we accept, as is at present generally accepted, that the expansion in g gives a reasonable approximation, then it is not excluded that a parallel expansion in g̃ can also do so. However, as far as this paper is concerned, we treat the double expansion in g and g̃ merely as a formal means of generating Feynman diagrams, the study of which only is our immediate purpose. The expansion having been made, the evaluation of the remaining Gaussian integrals then proceeds along more or less conventional lines apart from two complications. First, as was shown in an earlier work [7], the theory possesses now an enlarged gauge symmetry, from the original SU(N) doubled to an SU(N)×SU(N) where the second SU(N) has a parity opposite to that of the first and is associated with the phase of the monopole wave function ψ(x). Under this second SU(N) symmetry the Lagrange multiplier L µν [ξ|s] occurring in the integral (1.13) transforms as an antisymmetric tensor potential of the Freedman-Townsend type [17] and has thus to be gauge-fixed using the technology given in the literature for such tensor potentials. [18,19,20] Second, the field variables F µ [ξ|s] and L µν [ξ|s] being themselves functionals (i.e. functions of the parametrized loops ξ which are functions of s), extra care has to be used in defining functional operations, such as the Fourier transform, of the field quantities. Apart from these complications, the calculations are otherwise fairly straightforward.
In this paper, we have carried the calculation only to order g 0 . Although there is in principle no great difficulty apart from complication to carry some of the calculation to higher orders in g, and we have done so for exploration, the expansion cannot yet be carried out systematically until some basic questions are resolved. Nevertheless, even the simple examples we have calculated are sufficient to demonstrate several interesting facts. First, that it is possible, though unwieldy, to calculate Feynman diagrams in loop space. Secondly, that the Wu-Yang criterion does yield specific rules for evaluating Feynman diagrams of monopoles interacting with the field. Thirdly, that the result so far is dual symmetric to the standard interaction of a colour (electric or source) charge. We are therefore hopeful that these, albeit yet strictly limited, results will give at least a foothold to serve as a base for extending the exploration further.
Preliminaries, Gauge-Fixing and Ghosts
We begin by quoting from earlier work the form of the monopole (or colour magnetic) current expressed in loop space terms: [7] which is to be substituted into the topological constraint (1.10) defining the monopole charge at ξ(s). Here, where Φ ξ (s, 0) is the parallel phase transport from the reference point P 0 to the point ξ(s), and ω(x) is a local transformation matrix which rotates from the frame in which the field is measured to the frame in which the "phase" of the monopole is measured. An important point here is the appearance of s + in the argument of Φ ξ (s + , 0) which represents s + ǫ/2 with ǫ > 0 where ǫ is taken to zero after the functional differentiation and integration in ξ have been performed. [7,8] As a result, Ω ξ (s, 0) satisfies, for example, The occurrence in J µν [ξ|s] of the factors Ω ξ (s, 0) and its inverse, both depending on the point ξ(s), will make the integrations we have to do rather awkward. For this reason, we prefer to recast the whole problem in terms of a new set of rotated, "hatted" variables: Note that these hatted variables no longer depend on the early part of ξ from s ′ = 0 to s ′ = s as the original variables F µ [ξ|s] and L µν [ξ|s] do, but rather on the later part of the loop with s − ≤ s ′ ≤ 2π, where s − = s − ǫ/2, with ǫ being a positive infinitesimal quantity. This can be seen by observing that In terms of the hatted variables, we have for (1.14): which is independent of the early part of the loop up to and including the point ξ(s) since by (2.3), for s ′ < s + : As we shall see, this property of Ω ξ in J µν [ξ|s] will make our task in evaluating Feynman integrals much easier. In terms of the hatted variables, the partition function Z appears now as: 5 Since we shall be working exclusively with these hatted variables from now on, we shall henceforth drop the "hat" in our notation, assuming it now to be understood. We shall also suppress the arguments of the field variables unless this should lead to ambiguities. We shall try now to evaluate the integral (2.9) to order 0 in g starting with the integral in F . To this order, A is quadratic in F so that the integral in F is Gaussian and can be evaluated just by completing squares. This brings about a term of the form δ α L µα δ ρ L µρ in the exponent of the resulting integrand which is thus again quadratic in the variable L µν . To evaluate next the integral in L, we encounter a problem in completing the square for L µν , due to the noninvertibility of the projection operator involved in the quadratic term. This is a reflection of the fact that in L there is a gauge redundancy. Although L µν , like the Polyakov variable F µ , is by construction gauge invariant (apart from an unimportant x-independent gauge rotation at the reference point P 0 ) under the original Yang-Mills gauge transformation, there is another gauge symmetry of the theory [7] under which L µν transforms like an antisymmetric tensor potential of the Freedman-Townsend type [17]. Thus, in order to complete the square for L µν and integrate this field out, we need to impose a gauge-fixing condition on L µν by taking advantage of this new U-symmetry of the theory.
We propose then to impose on L µν the following gauge condition: [14,20] C µ (L) = ǫ µνρσ δ ν (s)L ρσ [ξ|s] = 0. (2.10) In this gauge, which can be shown to be always possible [14], the transverse degrees of freedom of L µν do not propagate and only the longitudinal ones are physical. Following the standard procedures, we introduce then the suppression factor C 2 and the Faddeev-Popov determinant ∆ 1 as follows:
5 There can in principle be a Jacobian of transformation in the integral depending on which variable one chooses originally to quantize in, whether F µ [ξ|s], F µ [ξ|s] or A µ (x). However, working to order g 0 as we do here we need not bother, the Jacobian between any pair of these variables being then just a constant factor. To higher orders, it will matter. In fact our inability as yet to handle the Jacobian is one main reason preventing us from going to higher orders in g at present. and: where η andη are two independent vector-valued Grassmann variables depending on the later part of the loop, and C ′ µ (L Λ ) is obtained by applying to (2.10) a U -transformation with gauge parameter Λ β [ξ|s] [7,14], thus: for ∆L ρσ = ǫ ρσαβ δ α Λ β . The path-integral in (2.9) then becomes: where ✷ ξ denotes: Here, α(s) = 2a ξ (s) is chosen such that the gauge-fixing term for L µν cancels the term δ α L µα δ ρ L µρ brought about by completing the square for F µ . In (2.14), we see again the appearance of a non-invertible operator g µν ✷ ξ −δ µ δ ν . In order to integrate out the fields η andη and eliminate the off-diagonal term η µ δ µ δ ν η ν , we must fix the gauge for a second time, by finding a second gaugesymmetry of the action. This is accomplished by writing the last term in (2.14) as: This symmetry allows us to "fix the gauge" for η by choosing λ such that: which is always possible, given initial conditions for λ. Including the suppression factor: as well as the Faddeev-Popov determinant: 20) in (2.14) yields: where φ i [ξ|s] andφ i [ξ|s] for i = 1, 2, are four independent commuting fields depending on the later part of the loop ξ [18,19], and we have chosen β = −1/2, in order for the second gauge-fixing condition to cancel the off-diagonal term in η µ δ µ δ ν η ν . The ghosts η,η, φ i andφ i can easily be integrated out and decouple from the theory. The effective action after integrating out the ghost fields is: The decoupling of the ghosts can be explained as follows: although the theory itself is nonabelian in character, the U -transformation on L µν does not involve a term coupling Λ µ to L µν [7], in contrast to the usual Yang-Mills "U"-transformation on the gauge potential A µ (x). In the following, we shall assume that the ghost fields have all been integrated out, and drop henceforth the subscript ef f to A from our notation.
the free-field generating functional Z (2) [J ] which depends on the external current J only. The propagator for any field H say, will then be given by the expression: where J here denotes the external current which corresponds to H. Next, collecting the higher order "interaction" terms of the action, say, A I [H], one can write the full generating functional formally as: Any term in the perturbation expansion can then be obtained by taking the appropriate derivative of Z[J ] with respect to J . In our problem here formulated in loop space, the terms of the action A in (1.14) which are second order or lower in the fields ψ, F µ and L µν do not correspond to just the free action A 0 of (1.1), and the concept of interaction has been replaced by that of a constraint imposed through the Wu-Yang criterion. Nevertheless, the method can still be applied. Writing then the action (1.14) to second order in the fields, including external current terms, we have: The generating functional factors into a gauge term and a matter term where the matter term is the same as in ordinary local formulations. For the gauge term written in loop space: Z after completing the squares for F µ and L µν and integrating them out, we obtain, up to a multiplicative factor: Differentiating functionally with respect to the currents yields for the propagators: δ 4 (ξ(s) − ξ ′ (s)), (3.6) and: (3.7) Note that for the functional differentiation above, we have used: To order g 0 , the commutator term in the loop space curvature G µν can be dropped so that there remains only one term in the action A of (1.14) which is of higher than second order, namely the term coming from the current J µν in the constraint: This can be substituted as A I in (3.2) to construct the generating functional Z[J ].
Since the "interaction" only involves the fields ψ and L µν and not F µ , we can put J i µ , the external current for F µ , equal to zero in (3.5), keeping only the remaining relevant terms. If we denote the propagator of ψ by S F (x − y) and the propagator (3.7) for L µν by ∆ µν,µ ′ ν ′ [ξ, ξ ′ |s, s ′ ], the free field generating functional up to factors can then be written as: Applying the operation as indicated in (3.2) to this will give us the full generating functional we want. For example, suppose we are interested in the "interaction vertex", we expand (3.2) to first order in g, obtaining after a straightforward calculation, up to numerical factors: (2) [J ], (3.11) where we have dropped the vacuum term S F (0) and, to avoid confusion, we have written out explicitly the internal symmetry indices. Differentiating with respect to the appropriate currents then yields: (3.12) where the vertex is obtained by eliminating the propagators from the external lines.
The dual potential Ã µ (x)
Before we proceed to work out explicitly the loop space formulae for the interaction vertex and generating functional, we shall first introduce a quantityà µ (x) found in an earlier paper [7] which will considerably simplify our task: In the classical theory, it is now known [8] that this quantityà µ (x) plays an exactly dual role to the ordinary Yang-Mills potential A µ (x), acting as the parallel phase transport for the monopole wave function and giving a complete description of the dual field. This last statement by itself does not necessarily imply thatà µ (x) will play the same role also in the quantum field theory but, as we shall see, it turns out to do so, at least to order g 0 . We note first that the gauge fixing condition that we have imposed on the loop variable L µν is in fact equivalent to the standard Lorentz condition on the local quantityà µ (x). This can be seen as follows. Differentiatingà µ (x) in (4.1) with respect to x and then integrating by parts with respect to ξ with the help of δ 4 (x − ξ(s)), we obtain: (4.2) Recall now the fact that we are working with what we called "hatted" variables so that by (2.8) the derivative δ µ (s) above commutes with Ω ξ (s, 0) and acts only on L ρσ [ξ|s]. We see then that the gauge-fixing condition (2.10) that we have imposed on L µν [ξ|s] is indeed equivalent to the Lorentz condition: Secondly, we note that to order g 0 and in the absence of its interactions with ψ, A µ (x) satisfies the free field equation: which allows for its expansion into the usual plane wave creation/annihilation operators. The loop-space curvature G µν = 0 in the absence of the interaction term. To zeroth order in g, the curvature is given by G µν = δ µ F ν − δ ν F µ . The equation F µ = a −1 ξ δ ν L µν also holds, since to zeroth order in g, the covariant loop space derivative is the same as the ordinary loop space derivative. For the same reason, the gauge fixing condition implies ǫ µνρσ δ ν L ρσ = 0. Inserting F µ and using this expression we obtain: On the other hand: For the same reasons as before, the derivatives could be taken inside the bracket to act on L ρσ only. Using (4.5) we then obtain (4.4) as required.
Finally, we show that (3.2) can in fact be expressed in terms of the dual potential A µ (x) and a corresponding local current j µ (x) instead of the L-field and its corresponding current in loop space. We note first that the "interaction" A I in (3.2) or, in other words, A (3) of (3.9), can be rewritten as: for A µ (x) as given in (4.1), so that it is a function of L µν only in that particular combination. We need therefore introduce a current really only for this combination of L µν , namely a local current j µ (x) corresponding to A µ (x), by incorporating in the action a term of the form: This can be rewritten in the original form given in (3.3): provided that J µν i [ξ|s] is of the special form: Substituting this J µν i [ξ|s] into Z (2) [J ] of (3.10) and Z[J ] of (3.11), one easily obtains that up to numerical factors: 11) and that: with: The formulae (4.11) and (4.12) are formally the same as those in standard Yang-Mills theory. Indeed, if the quantity can be identified with the standard propagator of the gauge potential in Yang-Mills theory, then, apart from the gauge-boson self-interaction which has been dropped in working only to order g 0 , one would obtain exactly the same perturbation series ing here as one does in g in ordinary Yang-Mills theory. At present, however, is still given in (4.13) as a complicated integral in loop space. That this integral is in fact the same as the standard propagator of the Yang-Mills potential is the subject of the next section.
Loop Space Fourier Transform and the Propagator for Ã µ
Inserting the expression for the L-field propagator from (3.7) into (4.13), we have: where we have performed the s ′ integration already. This simplifies to: where we have used the abbreviation: Using the bra-ket notation of Dirac, we now define: δ 4 (ξ(s) − ξ ′ (s)), (5.5) and write: with: Our aim now is to transform this propagator to momentum space so as to compare with the standard propagator of the Yang-Mills potential. Although this propagator is itself an ordinary space-time quantity for which the Fourier transform is well-defined, it is expressed in terms of loop quantities the Fourier transformation of which will require some care. First, if in analogy to x|p = exp (−ipx) in ordinary space-time, we define in loop space: ξ|π = dt exp i ξ(t)π(t).
(5.8) then we can write: and: However, if we proceed now to evaluate these quantities, we shall find δ-functional ambiguities connected with the definition of the loop derivative δ µ (s) = δ/δξ µ (s) and the tangent to the loopξ µ (s) both of which occur in the formulae above.
In other words, some regularization procedure is required in order to give these quantities an unambiguous meaning. Our procedure, which we have followed throughout our program [5,7,8], is to replace first the δ-function δ(s−s ′ ) inherent in the definition of the loop derivative by a bump function β ǫ (s − s ′ ) of width ǫ and the tangentξ µ (s) to the loop at s by: for s ± = s ± ǫ/2, and then take the limit ǫ → 0 after the required operations have been performed. We propose to follow the same procedure here.
With these provisos we return to the evaluation of (5.11). The box-operator there acts on the δ-function, but by integrating by parts, it can be made to act on the exponential function and the above expression becomes: Simplifying further we obtain: , (5.14) Substituting into (5.7) and performing the π ′ -integration we obtain: Integrating then over π(s) fors > s + ands < s − yields up to factors: We recall next that the Ω ξ -matrices in the above formula are actually what we called the "hatted" quantities and they depend on the loop only fors ≥ s + and due to the δ 4 (ξ(s) − ξ ′ (s)) factor, we have ξ = ξ ′ in this range, so that we can replace the factor involving these Ω ξ -matrices with: where the last equality follows from the fact that T j T j is a Casimir operator of the Lie algebra and therefore invariant under rotations in the Lie algebra. The result is a factor independent of Ω ξ and of ξ being thus constant in the remaining integration, which is in fact the main reason why we changed right in the beginning to these so-called "hatted variables". Hence, since the exponentials in (5.16) depend only on ξ(s) and ξ ′ (s) fors ∈ (s − , s + ) andξ µ (s), according to (5.12), only on ξ(s + ) and ξ(s − ), we can perform both the ξ-and the ξ ′ -integration for 0 ≤s ≤ s − , s + ≤s ≤ 2π by using the relation: δξξ ν (s)ξ ν ′ (s) ∝ 1 4 g νν ′ξ 2 (s). (5.18) We now write: which can be simplified further to obtain: Our result for the A-propagator in momentum space therefore is: which is exactly what we wanted to prove.
Remarks
Although the results we have obtained so far in attempting to extend the discussion of monopole dynamics in Yang-Mills fields to the quantum theory are quite limited, they have, we believe, given us some insight on several points. Firstly, the Wu-Yang criterion [1] which has been applied in all previous work in the literature only to monopoles in the classical field theory, has now been shown to be extendable to the quantum field level to define their dynamics and to generate Feynman diagrams. In the nonabelian theory, the result cannot be checked, because the dynamics of monopoles is otherwise unknown. However, the same calculation applies of course also to the abelian theory which is expected to be dual symmetric, so that the dynamics of monopoles there should be the same as that of ordinary charges. Further, in the abelian theory, both the gauge boson self interaction term and the Jacobian can be ignored so that our g 0 calculation given above is exact. Hence, the fact that we obtained the same "perturbation series" ing above as the normal expansion in g in ordinary electrodynamics is a check not only on the Wu-Yang criterion but also of the loop space method employed.
Secondly, if the result recently obtained in the classical theory that Yang-Mills theory is dual symmetric [8] is extendable to the quantum theory, one would expect that the dynamics of monopoles dealt with here, in spite of its original loop space formulation, should eventually be expressible in terms only of the local dual potential Ã µ (x). It was seen that at least at the g 0 level at which we were working, this was indeed the case. Whether it may persist at higher orders in g, and in such a way as to restore the dual symmetry, however, is at present unknown.
Thirdly, we have demonstrated that one can indeed do perturbation theory using the loop space techniques already developed. The calculation is a little clumsy but perfectly tractable. Though starting with loop variables which are invariant under the original U-transformation, it turns out that in order to remove the intrinsic redundancy of loop variables, one encounters in the Lagrange multiplier of the constraint a new field L µν [ξ|s] which depends on the dual (magnetic) Ũ-gauge, so that we had again to gauge-fix. However, it is possible that by imposing the constraint in a different (global) manner [5], one may have a chance of obtaining explicitly gauge invariant results.
For these reasons, in spite of the limited scope of the result obtained so far, we hope that it will serve as a basis for further explorations.
Mesenteric hydatid cyst mimicking an ovarian cyst - A case report
Highlights
• A mesenteric hydatid cyst is very uncommon.
• A mesenteric hydatid cyst can mimic any peritoneal cystic mass.
• A cystic lesion may present with only vague symptoms.
• Surgery relieves the mass effect and avoids spillage of cyst fluid, which can cause anaphylaxis.
Introduction and importance
Echinococcal disease is commonly caused by infection with the metacestode stage of the tapeworm Echinococcus granulosus. Hydatid cysts in humans are most frequently found in the liver (55-70 %), followed by the lungs (18-35 %) [14]. A ruptured liver or splenic hydatid cyst can present with intraperitoneal implantation. But the primary mesenteric hydatid cyst is a rare entity, even in the endemic zone. The incidence of hydatid disease involving the spleen, kidney, peritoneal cavity, skin, and muscles is about 2 % each, and the incidence of heart, brain, vertebral column, ovaries, pancreas, gallbladder, thyroid gland, breast, and bones involvement is about 1 % each [15].
This study aims to highlight the fact that this disease should be suspected in cystic lesions involving any organ in the body, especially in endemic areas like Nepal.
Case presentation
A 39-year-old woman with no significant past surgical, medical, or family history presented to our center complaining of persistent, dull, aching pain in the right lower abdomen for two months. She did not report vomiting, fever, or anorexia. Abdominal examination showed a normally shaped abdomen with striae over the right iliac fossa, left iliac fossa, and hypogastrium. The umbilicus was central and inverted, and there was no distension, tenderness, or palpable mass.
On per vaginal examination, a 4 × 4 cm freely mobile cystic mass was felt in the right adnexa, which was non-adherent to the underlying structure.
Diagnostic work up
Her laboratory investigations were unremarkable: haemoglobin 12.3 g/dL, total leukocyte count 7,500, and a differential count of 50 % neutrophils, 40 % lymphocytes, 5 % monocytes, 5 % eosinophils, and 0 % basophils. Liver and kidney function tests were within normal limits. Echinococcus IgG antibody was positive (5.60 COI).
Ultrasound (USG) showed ill-defined cystic lesions ( Fig. 1) measuring approximately 78 × 49 mm in the lower abdomen anterior to the urinary bladder. There were multiple septations noted inside the cyst. A CT abdomen and pelvis scan reported a well-defined cystic lesion ( Fig. 2) measuring 92 × 86 mm at the right iliac region and extending into the para vesical region with multiple internal septations inside. Mild free fluid was found in the abdomen and pelvis.
We prepared the patient for laparotomy. On exploration, a cystic mass measuring approximately 12 × 10 cm arising from the mesentery was observed (Fig. 3), suggestive of a hydatid cyst. The cyst was adherent to the right lateral abdominal wall; however, the uterus, both ovaries, the fallopian tubes, and the remaining organs were normal. The cyst was removed completely without spillage of fluid. Grossly, the specimen was a cystic structure (Figs. 4-5) measuring 8 × 8 × 0.3 cm with attached fibrofatty tissue measuring 6.5 × 5.0 × 1.0 cm. The outer surface of the cyst was smooth and glistening white, and it contained 3 daughter cysts, the smallest measuring 2 × 2 cm and the largest 3 × 2 cm, consistent with a hydatid cyst.
Histopathological examination revealed an acellular, eosinophilic, laminated membrane-like structure. The adjacent area showed a pericyst composed of necrotic and palisading histiocytes, predominantly eosinophils with a few lymphocytes, and focal areas of lymphoid aggregates (Fig. 6). No brood capsules or scolices were noted. Thus, the diagnosis of hydatid cyst was confirmed by histopathological examination.
Her postoperative period was uneventful, and she was discharged on the 4th postoperative day. The patient was advised to take albendazole twice daily for one month. During multiple visits for 3 months, the patient didn't complain of any symptoms.
Clinical discussion
Hydatid disease is an infestation by the larvae of the tapeworm Echinococcus, and humans are mostly infected by the species Echinococcus granulosus. Sheep- and cattle-farming regions such as the Mediterranean, Asia, North and East Africa, South America, Australia, and the Middle East account for most cases of this parasite [1].
Two hosts are inhabited by parasites during their lifecycle: a definitive host (usually a dog) and an intermediate host such as humans [1,2].
There are two ways for humans to acquire this parasite: through direct contact with an infected dog or ingestion of infected food. Embryos reach the liver in 60 %-70 % of cases when they migrate through the intestinal mucosa and travel through intestinal lymphatics and venules. The embryos can be carried by the bloodstream to any organ if they bypass the liver [3]. These embryos mostly reside in the liver (59-75 %), followed by the lungs (27 %), the kidneys (3 %), the bones (1-4 %), and the brain (1-2 %). Other parts of the body that are rarely infected are the heart, spleen, pancreas, omentum, ovaries, parametrium, pelvis, thyroid, orbit or retroperitoneum, and muscles [1,3].
Almost all peritoneal hydatid cysts are secondary to hepatic cysts, although a few primary cases have been identified. The peritoneum accounts for 13 % of abdominal hydatid cysts. Intraperitoneal hydatid cysts develop because of rupture of primary hepatic or splenic cysts, which can be spontaneous, traumatic, or iatrogenic. Only 2 % of all abdominal hydatid cyst cases are primary intraperitoneal cysts, which are therefore extremely uncommon. The surgical management of 27 extrahepatic cysts between 1982 and 1999 was retrospectively reviewed by Balik et al. Nineteen of them had extrahepatic cysts alone (70.4 %), eight had coexisting hepatic cysts (29.6 %), and the cyst sites included the spleen (three), pancreas (two), adrenal glands (four), mesocolon (five), small intestine mesentery (one), ovaries (one), retroperitoneum (four), and omentum (two). A solitary primary mesenteric cyst develops when the hydatid embryo enters the mesentery via blood or lymph; there are no additional cysts in this instance. Mesenteric cysts are more common in Caucasians and have a slight female predominance [4]; however, there is a male predominance (62.5 %) in the pediatric age group [5].
A mesenteric hydatid cyst may be asymptomatic for many years before it manifests as a slowly expanding abdominal mass with a mass effect on nearby organs [6]. The complications of mesenteric hydatid cyst include mass effect, peritoneal seeding, cyst rupture, cyst infection, and transdiaphragmatic involvement of the lungs, mediastinum, and heart [7]. A hydatid cyst can be suspected from clinical findings and serological tests, while radio-imaging studies (abdominal ultrasonography and computed tomography) can confirm the hepatic as well as extrahepatic localization in the preoperative diagnosis [7,8]. Ultrasonography (USG) and CT scan of the abdomen can detect the location and size of the lesion, septations, debris, fluid levels, and the thickness of the wall [4]. Magnetic resonance imaging (MRI) is more specific and the most accurate investigation; it is therefore the modality of choice to accurately define the relationship between the mass and its surrounding structures [9]. The timing of treatment and the surgical approach should be based on the location of the cyst.
Correct preoperative diagnosis is challenging due to the rarity of cases, the lack of specific symptoms, the varied imaging appearance, and the striking resemblance to malignancies of the involved organ. All abdominal cystic lesions, including mesenteric cysts, pancreatic cysts, gastrointestinal duplication cysts, ovarian cysts, and lymphangiomas, should be taken into account in the differential diagnosis [10].
The best course of treatment is a meticulous and thorough surgical excision, but we can occasionally perform a subtotal or partial cystectomy to prevent harm to other organs. Hypertonic saline or hydrogen peroxide can be applied prior to opening the cavities to stop further spread, anaphylaxis, and to eliminate the daughter cyst. To prevent the recurrence, mebendazole and albendazole can be used as adjuvant therapies. Routine chemotherapy use is preferred in cases of disease recurrence or involvement of multiple sites [11,12].
Fatalities from anaphylactic shock have also frequently been reported, brought on by spillage during surgery or by biopsy of a cyst incorrectly identified as an intraperitoneal tumor [3]. Because of the variety of possible presentations, hydatid cysts should always be considered, especially in endemic areas. Radiography, ultrasonography, computed tomography, MRI, and immunological tests have significant value in diagnosing hydatid cysts [8]. Mesenteric cysts have a very low recurrence rate after surgical removal (0-13.6 %) and patients have an excellent prognosis [13].
Conclusion
Hydatid disease is caused by Echinococcus granulosus and can produce cysts in various organs. Peritoneal, omental, and mesenteric locations are uncommon but challenging for diagnosis and treatment.
Hydatid disease must therefore be distinguished from other abdominal cystic lesions, especially in areas where it is endemic. The preferred course of treatment is surgery, but it must be carried out carefully to prevent dissemination of the daughter cysts throughout the peritoneum and anaphylactic shock. Hydatid cysts should always be included in the differential diagnosis of any peritoneal cystic masses.
Surgical excision with adjuvant therapy using albendazole and scolicidal agents has shown the best treatment outcomes with a low recurrence rate.
Key clinical message
In the case of any cystic lesion of the abdomen, hydatid disease must be ruled out.
Mesenteric hydatid disease is very rare and can be misdiagnosed as a cyst or a mass in other nearby organs.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images.
Method
The work has been reported in line with the SCARE criteria.
Provenance and peer review
Not commissioned, externally peer reviewed.
Ethical approval
Consent from patient was taken for the case report.
Funding
No funding was required for the work.
Guarantor
Ritika Ranjan is the guarantor.
CRediT authorship contribution statement
Ritika Ranjan drafted the manuscript. Milan Kc, and Khusbu Kumari were involved in editing and revising the manuscript. Bishal Khaniya and Laligen Awale were involved in the surgical therapy of the patient and in the final editing of the manuscript. All the authors read and approved the final version of the manuscript.
Structural design and effect analysis on a new type of hydraulic oscillator driven with double valve groups
A new hydraulic oscillator was designed that adjusts pressure fluctuations through two sets of dynamic and fixed valves. This oscillator can meet the frequency and axial force requirements of drilling at lower drilling fluid flow rates than general hydraulic oscillators. The oscillator structure is described in detail and the flow area between the two sets of dynamic and fixed valves is calculated. The influence of different drilling fluid flow rates on the oscillator pressure drop was analyzed, the governing trend was determined with the finite element software Fluent, and the effect was verified by numerical simulation. This research is of great significance for the selection of dynamic and fixed valves and provides a theoretical basis for optimizing the structural parameters of double-valve hydraulic oscillators.
Structural advantages.
(1) The turbine drive section consisted of all-metal components, which gave it advantages over screw-driven hydraulic oscillators such as high temperature resistance, strong erosion resistance, long service life and high efficiency. (2) The double valve groups with two flow paths replace the traditional single channel split by a splitter lip: the pressure wave is modulated by the two sets of valve groups, and the pressure fluctuation is changed by adjusting the angle and distance between the fixed valves. Different pressure amplitudes and appropriate oscillation forces could thus be obtained.
Analysis of the key structural parameters of oscillators
Calculation of the flow area of the valve assembly. Only one valve group of the hydraulic oscillator is analysed, since the structures of the two valve groups are the same; the structures of the dynamic and fixed valves are shown in Fig. 4. The flow channels of the valves are centrally symmetric, with flow channel outer radius r₂, flow channel inner radius r₁, angle of the standard annular region θ₀, and radius of the circles on both sides r₃ = (r₂ − r₁)/2. The whole rotation process of the dynamic valve was divided into five stages based on the change of the flow channels in one cycle (a rotation of π rad), as shown in Fig. 5. The rotation process was illustrated under the following assumptions: the left channel of the fixed valve is the initial position of the calculation, the dynamic and fixed valves coincide at the initial position, and the dynamic valve rotates at a constant speed during operation. In the first stage, the front of the left channel of the dynamic valve does not intersect the right channel of the fixed valve. In the second stage it intersects the right channel of the fixed valve, with intersection area s₁ greater than πr₃² and intersection area s₂ less than πr₃². In the third stage both intersection areas s₁ and s₂ are greater than πr₃². In the fourth stage, intersection area s₁ is smaller than πr₃² and intersection area s₂ is greater than πr₃². In the fifth stage, the end of the left channel of the dynamic valve no longer intersects the left channel of the fixed valve, and the intersection area is s. At the end of one cycle, when the dynamic valve has rotated through π rad, the left channel of the dynamic valve coincides completely with the right channel of the fixed valve, and the right channel of the dynamic valve coincides completely with the left channel of the fixed valve.
The angular velocity of the dynamic valve was assumed to be ω and the rotation time t. The flow channel area between the valves was calculated as follows, where θ₁ is the angle between the tangent line from the origin to the side circle of the valve and the line from the origin to the center of that side circle, θ₁ = arcsin(r₃/r₀), with r₀ the radius of the flow channel pitch circle, r₀ = (r₂ + r₁)/2, and β is the angle between the two radii drawn from the center of the side circle to the intersection points of the dynamic and fixed valves. β in the second and fourth stages could be calculated as follows.
To meet field application requirements, the flow channel area has to change continuously so that the pressure drop between the valve plates changes continuously and the disc spring produces periodic motion. Therefore, the shorter the third stage, the better. Finally, the tool was designed with θ₀ = π/2 so that the duration of the third stage is zero, and the flow channel areas could be simplified as follows.
Influence of inlet flow rate on pressure drop in valve group. The axial forces and pressure fluctuations of double-valve driving hydraulic oscillators were mainly produced by the change of flow channel between dynamic and fixed valves. Since both valve groups produced pressure drops, pressure fluctuations in oscillator segments generated superposition effects enhancing their performance. One valve group was adopted for analysis and instantaneous pressure drops between dynamic and fixed valves followed thin hole theory.
where C_d is the flow coefficient, in the range of 0.6 to 0.8, ρ is the drilling fluid density (kg/m³), Q is the drilling fluid flow rate (m³/s), A is the flow channel area (m²), and p is the pressure drop of the valve group (Pa).
Equation (5) shows that the flow area of the valve group, and hence the pressure drop, can be controlled by changing the valve parameters in the design. Due to the limits of the valve material, the upper working pressure drop, the working environment, etc., the maximum pressure fluctuation cannot be too high. To meet the actual pressure fluctuation requirements, the maximum pressure drop p_max corresponds to the minimum flow area A_min and the minimum pressure drop p_min corresponds to the maximum flow area A_max; A_max and A_min could be calculated as follows.
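Since Eq. (5) itself is not reproduced in this extract, the following short Python sketch assumes the standard sharp-edged orifice relation Δp = ρQ²/(2 C_d² A²), which is the usual form of thin-orifice theory, and inverts it for the flow area. The numerical values (flow rate, drilling fluid density, flow coefficient) are illustrative assumptions only, not the paper's design data.

```python
import math

def orifice_pressure_drop(q, area, rho=1200.0, cd=0.7):
    """Pressure drop (Pa) across a sharp-edged orifice: dp = rho*q^2 / (2*cd^2*A^2)."""
    return rho * q**2 / (2.0 * cd**2 * area**2)

def area_for_pressure_drop(q, dp, rho=1200.0, cd=0.7):
    """Flow area (m^2) that gives a prescribed pressure drop dp at flow rate q."""
    return (q / cd) * math.sqrt(rho / (2.0 * dp))

if __name__ == "__main__":
    q = 0.028          # assumed drilling fluid flow rate, m^3/s (28 L/s)
    dp_max = 3.2e6     # maximum pressure drop per valve group, Pa (value quoted in the text)
    rho = 1200.0       # assumed drilling fluid density, kg/m^3
    a_min = area_for_pressure_drop(q, dp_max, rho=rho)   # smallest opening of the valve pair
    print(f"A_min ~ {a_min * 1e4:.2f} cm^2 at dp_max = {dp_max / 1e6:.1f} MPa")
    print(f"check: dp = {orifice_pressure_drop(q, a_min, rho=rho) / 1e6:.2f} MPa")
```

The same inversion with p_min in place of p_max would give A_max, so the pair (A_min, A_max) follows directly from the allowed pressure-drop range.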
Field applications showed that the maximum hydraulic pressure consumption of hydraulic oscillator was supposed to be not greater than 4 MPa due to the restriction of underground space, relatively compact downhole tool structure, structure size limitation of rotary valve, etc. 25,26 . Therefore, the maximum pressure drop of one valve group was taken as 3.20 MPa considering turbine joint and local pressure loss. Combined with the external
Numerical simulation of hydraulic oscillator
In order to further study the performance of the design of double valve driving hydraulic oscillators, the work process of the fabricated hydraulic oscillator was simulated by the finite element software Fluent combined with the structure of hydraulic oscillator. Selection of turbulence model. Since the flow channel of double-valve driving hydraulic oscillator had multi-stage cross-section changes, fluid movement could be take the shape of rotary motion. Compared with standard k-turbulence model, RNG model takes into account rotation and swirl flows and it could better deal with high strain rate and flow of streamline 27 . Therefore, k-RNG turbulence model was selected for the numerical simulation of hydraulic oscillators.
Establishment of finite element model. The 3D model of oscillator was built in SolidWorks software.
The following basic assumptions were made to simplify the analysis: (1) the numerical simulation mainly studied the fluid pressure change in the oscillating section; hence, the internal flow channel of the oscillating section was kept fixed in the model and the spring effect was ignored; (2) to simplify the internal flow channel of the drive section, the turbine effect, righting bearing, thrust bearing and other components were not taken into account.
(3) Outlet of the lower part of the model of drive section was simplified. In addition, local refinement of flow channel was carried out and the final 3D model of flow channel is shown in Fig. 9.
The finished model was saved as an "xt" file and imported to Fluent of Workbench to divide the grid with node number of 504,777 and grid cell number of 2,505,046, as shown in Fig. 10.
After the completion of the mesh, the flow rate of drilling fluid (simulated with water) had to be translated into inlet speed to set the inlet boundary conditions combined with the actual operating conditions of the hydraulic oscillator. 20 cycles were considered and 25 data points were taken in each cycle. Based on theoretical design, dynamic valve speed was 8 r/s, outlet was set to free export, and the remaining boundary conditions were set in turn according to the actual parameters to complete the simulation.
Analysis results. The pressures at the outlet, the oscillation front end, the front end of the first valve group, and the back end of the second valve group were obtained. The oscillator pressure drop was defined as the difference between the pressures at the front end of the first valve group and the back end of the second valve group, as shown in Fig. 11. Theoretically, this pressure drop should be twice that calculated for a single valve group. The simulation showed maximum and average pressure drops of 7.30 MPa and 1.81 MPa over 20 cycles, respectively. The highest pressure drop in a single cycle, 6.10 MPa, was twice the theoretical single-valve value, and the simulated behaviour was essentially consistent with the theoretical calculation.
Summary and conclusion
(1) A new type of turbo-driven hydraulic oscillator was designed in which the pressure fluctuations are generated by double valve groups, so that the required oscillation force can be produced at lower flow rates. While the flow rate increased from 20 to 32 L/s, the maximum pressure drop was approximately linear in the flow rate, the minimum pressure drop changed little, and at a flow rate of 28 L/s the design requirements were met. (4) The finite element software Fluent was applied to simulate the hydraulic oscillator by establishing a 3D flow channel. Based on the principle of pressure wave superposition, the maximum pressure drop of the new turbo-driven hydraulic oscillator was 7.30 MPa in 20 cycles of simulation, and the results showed that the pressure drop across the double valve groups was twice that of a single valve group, which indicated the feasibility of turbo-driven hydraulic oscillators.
Data availability
The datasets used and/or analysed during the current study available from the corresponding author on reasonable request.
Natural Gas Decompression Behavior in High Pressure Pipelines
This paper gives the analysis in the natural gas decompression behavior in pipelines as one of the important items for predicting the fracture safety of latest high-pressure natural gas transmission. By combining "British Gas Theoretical Model of Rich Gas Decompression" and "BWRS Equation of State", authors successfully developed the computational program, which can calculate dual-phase decompression curves of the natural gases. In the calculated results, the phenomenon of the "plateau" in the dual-phase decompression curve has been confirmed. Authors also numerically simulated the natural gas decompression behavior in pipelines and analyzed the fracture initiation process. It was shown that the initiation period is too short to influence the gas decompression curves.
Introduction
In recent years, the pressure in pipelines for natural gas transmission has been getting higher and higher. High-pressure pipelines are the modern trend and are expected to offer a great cost advantage. However, as the pressure increases, the stored energy of the inner gas increases, and once a crack initiates it may run in the manner of a ductile fracture, i.e. a propagating shear fracture. This running fracture may lead to a catastrophic failure of the pipeline. So, it can be said that the fracture safety of such high-pressure pipelines is now one of the urgent problems to be solved.
The propagating shear fracture is controlled by the conflict between the crack velocity of the running fracture and the decompression velocity of the inner gas during the fracture. So, this propagating shear fracture may be arrested within a short length when the gas decompression velocity is much greater than the crack velocity, but it may not be arrested and could be a disaster when the crack velocity is greater than the gas decompression velocity. Therefore, in order to predict this phenomenon for the fracture safety of natural gas transmission, precise prediction of both the crack velocity and the gas decompression velocity is needed.
This paper offers the analysis in the natural gas decompression behavior in pipelines as one of the important items for predicting the fracture safety of the latest high-pressure natural gas transmission.
Natural Gas Decompression Behavior in Pipelines
The natural gas decompression behavior in pipelines is shown schematically in Fig. 1. When a pipeline fails, gas escapes from the full cross-sectional area of the pipe in a process that is essentially isentropic. During the isentropic decompression of any gas, the temperature falls. In the case of natural gas containing alkanes heavier than methane, the so-called "rich gas", this may cause the pressure and the temperature to fall into the dual-phase (gas-liquid) region.
The pressure remains constant at a higher level when the isentrope crosses from the gaseous region into the two-phase region. Because the pressure remains at this higher level, the risk of a propagating fracture in dual-phase gases is greater than in single-phase gases. 1) Because the ideal gas theory cannot predict the dual-phase gas decompression behavior, a more complete equation of state is needed to predict the decompression behavior of such rich gases.
Theoretical Model for Dual-phase Gas Decompression
Under the condition of some assumptions, dual-phase gas decompression behavior can be solved theoretically. In this section, the natural gas decompression behavior in pipelines is predicted by "British Gas Theoretical Model of Rich Gas Decompression." 2) From four assumptions described in the following, this theoretical model can be defined as a one-dimensional isentropic homogeneous equilibrium approach.
Assumptions are: 1) The flow is one-dimensional through a "full-bore" opening. In the case of running shear fractures, this is usually the case, except a short period of the initiation process where the full-bore opening is not yet formed.
2) The flow is adiabatic. This implies that the heat addition and the frictional effects are negligible. These assumptions have been used in the theory for dry gas and have been found to be a good approximation. 3) Thermodynamic equilibrium exists everywhere in the flow. Thus, liquid is assumed to condense immediately on reaching the dew line with the amount increasing as the isentrope is followed into the dual-phase gas region. This implies that there is no supersaturation of the gas. 4) Gas and liquid flow together out of the pipe at equal velocity. This assumption is justified if any liquid is carried along in the gas in the form of a mist. There is obviously a limit to the amount of liquid, which can be effectively carried in this way. But usually, the amount of liquid is small enough where the homogeneous assumption holds.
The three continuity equations of mass, momentum and energy, as applied to horizontal one-dimensional flow without heat transfer or friction, can be written as Eqs. (1) to (3), where ρ is the density, u the gas velocity, h the enthalpy and p the pressure.
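A standard conservation-law form of these one-dimensional equations is given below for reference; the notation, and in particular whether the energy balance is written in terms of total energy or directly in terms of the enthalpy h, may differ from the original Eqs. (1) to (3):

\[
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} = 0, \qquad
\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^{2} + p)}{\partial x} = 0, \qquad
\frac{\partial}{\partial t}\!\left[\rho\!\left(e + \tfrac{u^{2}}{2}\right)\right]
+ \frac{\partial}{\partial x}\!\left[\rho u\!\left(h + \tfrac{u^{2}}{2}\right)\right] = 0 ,
\]

where e is the specific internal energy and h = e + p/ρ is the specific enthalpy. These are the one-dimensional compressible Euler equations referred to again in the numerical simulation section.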
The equilibrium velocity of sound in the mixture "a" is defined as Eq. (4). These can be solved to give Eq. (5), and Equations (4) and (5) are used to calculate the gas velocity "u" as a function of the pressure "p". The boundary condition used is u = 0, a = a₀ at p = p₀. Then, if p_n, ρ_n and p_{n+1}, ρ_{n+1} (p_n > p_{n+1}) are two successive points on a line of constant entropy passing through the initial conditions, the following finite difference forms, Eqs. (6) and (7), are used for the numerical calculation.
When a pipeline ruptures, gas can escape from the full cross-sectional area, i.e. the full-bore opening. The decompression disturbance travels back into the pipe, with each pressure level "p" propagating at a fixed velocity "w" given by the difference between the local velocity of sound "a" and the corresponding exit velocity of the gas "u", i.e. w = a − u (Eq. (8)). In the BWRS equation of state, Eq. (9), R is the gas constant, T the absolute temperature and A₀, B₀, C₀, D₀, E₀, a, b, c, d, α and γ the equation parameters. The prediction method of the equation parameters from gas compositions can be found in Ref. 3).
By combining the above-mentioned two methods, "British Gas Theoretical Model of Rich Gas Decompression" and "BWRS Equation of State", the authors successfully developed the computational program which can calculate dual-phase decompression curves of the natural gases.
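To make the finite-difference scheme of Eqs. (6) and (7) concrete, the following Python sketch integrates du = −dp/(ρa) along a tabulated isentrope and forms the decompression-wave speed w = a − u. The isentrope table (p, ρ, a) is assumed to be supplied by an equation of state such as BWRS; the placeholder values in the example are purely illustrative and are not the C2 gas data used in the paper.

```python
import numpy as np

def decompression_curve(p, rho, a):
    """Given arrays p (Pa), rho (kg/m^3), a (m/s) tabulated along an isentrope
    from the initial state p[0] downwards (p decreasing), return the escape
    velocity u(p) and the decompression-wave speed w(p) = a(p) - u(p)."""
    p = np.asarray(p, dtype=float)
    rho = np.asarray(rho, dtype=float)
    a = np.asarray(a, dtype=float)
    u = np.zeros_like(p)                      # boundary condition u = 0 at p = p0
    for n in range(len(p) - 1):
        dp = p[n] - p[n + 1]                  # positive pressure step between successive points
        rho_m = 0.5 * (rho[n] + rho[n + 1])   # mid-point density
        a_m = 0.5 * (a[n] + a[n + 1])         # mid-point sound speed
        u[n + 1] = u[n] + dp / (rho_m * a_m)  # finite-difference form of du = -dp/(rho*a)
    w = a - u                                 # decompression-wave speed, Eq. (8)
    return u, w

if __name__ == "__main__":
    # Illustrative placeholder isentrope (NOT the C2 gas data): smooth single-phase trend.
    p = np.linspace(15e6, 1.5e6, 50)
    rho = 120.0 * (p / 15e6) ** 0.75
    a = 380.0 * (p / 15e6) ** 0.15
    u, w = decompression_curve(p, rho, a)
    print(f"exit velocity at lowest pressure: {u[-1]:.1f} m/s, wave speed: {w[-1]:.1f} m/s")
```

With a real dual-phase isentrope, the jump in the sound speed at the dew line produces the "plateau" in w(p) discussed below.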
The calculation was carried out on a natural gas called "C2 gas," which was used in one of the full-scale burst tests carried out by HLP (High Strength Line Pipe) Committee. 5) The compositions of "C2 gas" are shown in Table 1.
The procedure for the calculation of the dual-phase decompression curve is shown in Fig. 2. Figure 2(a) shows the phase envelope of the gas and the isentropic line through the fracture initiation point. In the case of Fig. 2, the fracture initiation point is assumed to be 20 MPa in pressure and −10°C in temperature. Figures 2(b), 2(c), and 2(d) show the change of the density, the change of the sonic velocity and the change of the escaping gas velocity during the fracture, respectively. The velocity of the decompression wave is given by the difference between the local sonic velocity and the local escaping gas velocity, as shown in Fig. 2(e), and this curve is called the "decompression curve".
The fluid outflow velocity is continuous, but because of the discontinuity of the acoustic velocity of the fluid at the two-phase boundary, there is a "plateau" in the dual-phase decompression curve, as shown in Fig. 2(e); this plateau is the characteristic feature of dual-phase decompression.
Examples of the calculated decompression curves are shown in Fig. 3 for varied initial pressures. As the initial pressure increases, the plateau grows wider. The sonic velocity at the initial point, shown in the figures, increases with the initial pressure, and the ratio of the plateau pressure to the initial pressure decreases, which shifts the decompression curve toward higher velocity.
Numerical Simulation for Natural Gas Decompression Behavior in Full-bore Opened Pipe
In this section, the natural gas decompression behavior in a pipeline is numerically simulated. Although the dual-phase decompression curves were successfully calculated with the theoretical model in the previous section, the merit of the numerical simulation is that it makes it possible to change the assumptions or the boundary conditions in future work.
In this study, the gas decompression behavior in the full-bore opened pipe is numerically simulated under the following assumptions.
(1) The pipe is straight and of constant diameter.
(2) There are no branches or junctions.
(3) The pipe is laid horizontally, and the gas flows in the horizontal direction.
(4) The pipe heat capacity is very small, so the heat exchange between the natural gas and the pipe wall is negligible.
(5) The pipe wall is smooth enough that wall friction is negligible.
(6) The pipe is full-bore opened, as shown schematically in Fig. 4.
(7) The flow of natural gas is uniform in the radial direction; the flow is one-dimensional in the axial direction, and the gas and the mist flow together.
The governing equations are the one-dimensional compressible Euler equations, the same as Eqs. (1) to (3), and they are solved by the FDS (Flux Difference Splitting) scheme. 6) To close the system, the BWRS equation of state, shown as Eq. (9), is used for the natural gas. With this equation, the condensation process is assumed to be in an equilibrium state.
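For reference, a minimal sketch of the pressure evaluation of Eq. (9) is given below; the eleven mixture parameters are left as placeholders to be supplied from the mixing rules of Ref. 3), since no parameter values are quoted in this paper.

```python
import math

def bwrs_pressure(rho, T, prm):
    """Pressure from the BWRS equation of state, Eq. (9).

    rho : density, T : absolute temperature.
    prm : dict with the gas constant R and the eleven mixture parameters
          A0, B0, C0, D0, E0, a, b, c, d, alpha, gamma, whose values must
          come from the mixing rules of Ref. 3) (none are given here).
    """
    R, A0, B0, C0 = prm["R"], prm["A0"], prm["B0"], prm["C0"]
    D0, E0, a, b = prm["D0"], prm["E0"], prm["a"], prm["b"]
    c, d, alpha, gamma = prm["c"], prm["d"], prm["alpha"], prm["gamma"]
    return (rho * R * T
            + (B0 * R * T - A0 - C0 / T**2 + D0 / T**3 - E0 / T**4) * rho**2
            + (b * R * T - a - d / T) * rho**3
            + alpha * (a + d / T) * rho**6
            + (c * rho**3 / T**2) * (1.0 + gamma * rho**2) * math.exp(-gamma * rho**2))
```

The equilibrium sonic velocity needed by the decompression calculation, $a^2 = (\partial p/\partial \rho)_s$, can then be obtained by differentiating this expression numerically along the isentrope.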
The numerically simulated results of the C2 gas decompression behavior are shown in Fig. 5, where the gas decompression is shown at various times after the formation of the full-bore opening. In the case of Fig. 5, the initial gas pressure is assumed to be 15 MPa, the initial temperature is 0°C, and the environmental pressure is kept at 1.5 MPa, a value chosen instead of atmospheric pressure for convergence of the calculation. In the decompression process, the natural gas flows along the pipeline toward the full-bore opening and the pressure gradually decreases. The plateau is observed where the natural gas changes from the single gas phase to the dual phase of gas and mist.
The pressure distributions of C2 gas and pure methane at the same elapsed time are compared in Fig. 6. As shown in Fig. 6, the appearance of the plateau in the dual-phase decompression curve results in a higher pressure during the decompression process, which can make the shear fracture propagate a long distance. The points on the plateau correspond to the dual-phase boundary: the natural gas is in the single phase upstream of the plateau and in the dual phase downstream of the plateau.
When the natural gas changes from the single phase to the dual phase, the speed of the expansion wave, the so-called decompression velocity, decreases because of the drop in the acoustic velocity, so a jump in the decompression velocity is observed. This difference in acoustic velocity between the single phase and the dual phase induces the plateau.
As shown in Fig. 7, reasonable agreement is obtained between the theoretical model of the previous section and the numerical simulation of this section, except for a difference in the low-pressure region caused by the difference in the environmental pressure.
Numerical Simulation for Natural Gas Decompression Behavior in Partially Opened Pipe
In the previous section, the pipe was assumed to be full-bore opened, and the numerical simulation method was shown to give results as reliable as the theoretical model for the present problem. In this section, the natural gas decompression behavior in a partially opened pipe is numerically simulated. The partially opened pipe is modeled as a pipe with an orifice-shaped end of a given cross-section ratio, as shown schematically in Fig. 8.
In this case, the governing equations are the quasi-one-dimensional Euler equations,

$$\frac{\partial (\rho A)}{\partial t} + \frac{\partial (\rho u A)}{\partial x} = 0, \qquad \frac{\partial (\rho u A)}{\partial t} + \frac{\partial \left[(\rho u^2 + p) A\right]}{\partial x} = p\frac{\partial A}{\partial x}, \qquad \frac{\partial (\rho E A)}{\partial t} + \frac{\partial \left[(\rho E + p) u A\right]}{\partial x} = 0,$$

where $A/A_0$ is the ratio of the local cross-section area to the main cross-section area. The resulting pressure distributions are shown in Fig. 9. The flow is blocked by the orifice, which prevents the gas from flowing freely out of the pipe. For this reason, a smaller open area ratio results in a higher pressure. Every 25 % decrease in the open area ratio brings about a 25 to 30 % increase in the pressure at the pipe end.
Two-dimensional Fluid-Structure Interaction Simulation
Studies of the decompression behavior in pipelines are usually done with the assumption of a full-bore opening. Strictly speaking, the full-bore opening corresponds to the case where the fractured pipe wall has opened fully, and it is apparently not a good approximation during the fracture initiation process. The numerical simulations of the gas decompression behavior in the partially opened pipe in the previous section indicate that the orifice strongly influences the gas decompression behavior if the open area ratio at the pipe end is kept at a constant low value. However, experimental observations suggest that this initiation effect is negligible, because the propagating shear fracture is a very rapid phenomenon, the pipe is ruptured very quickly, and the time span of the initiation process is reported to be only about 20 to 30 ms.
In this section, in order to confirm the propriety of the assumption of full-bore opening, a two-dimensional fluid-structure interaction simulation of the pipe fracture and inner gas decompression is carried out. The decompression of methane gas is simulated using the two-dimensional compressible Euler equations. In this study, methane is treated as an ideal gas, and the BWRS equation of state is not used here. The equations are solved by the Harten-Yee upwind TVD scheme. 6) The deformation of the pipe wall is simulated by a two-dimensional dynamic elasto-plastic finite element method using the von Mises yield criterion, its associated flow rule and the Prager hardening rule. 7) The Young's modulus is assumed to be 20 000 kgf/mm², the yield stress 60 kgf/mm², and the stress at 10 % strain 70 kgf/mm². These values correspond to a high-grade pipe. In this two-dimensional model, the through-thickness crack is formed instantaneously and the natural gas flows out from the inside, as shown schematically in Fig. 10. The fluid analysis and the structure analysis give each other's boundary conditions: the fluid analysis gives the pressure at the pipe wall to the structure analysis, and the structure analysis gives the velocity and position of the pipe wall to the fluid analysis. The numerically simulated results of the pressure distribution and the pipe deformation are shown in Figs. 11 and 12. In the case of Fig. 11 the initial inner gas pressure is 15 MPa and the initial outer pressure is 5 MPa; in the case of Fig. 12 the initial inner gas pressure is 10 MPa and the initial outer pressure is 5 MPa.
The pipe wall moves outward, the inner gas flows out, and the inner pressure decreases. The pipe wall continues to deform even after the inner pressure becomes equal to the outer one. The inner gas pressure becomes equal to the outer pressure in about 5 ms, but the deformation ends at 19 ms in both cases. This time span of about 20 ms for the fracture initiation process is very short and agrees well with past experimental observations. Judging from these results, a detailed consideration of the fracture process is not necessary in predicting the natural gas decompression behavior of pipelines, and the full-bore opening assumption in the one-dimensional simulation can be regarded as reasonable.
During the deformation, the inner pressure becomes lower than the outer one because the deformation is very fast and the expansion of the inner space is sufficiently rapid. All the results suggest that the pipe wall is deformed mainly by the initial inertia force of the wall itself, and not by the inner gas pressure distribution during the fracture. In a high-pressure pipeline fracture, the initial pressure ratio is a more important factor than the instantaneous pressure distribution, because the inertia force of the pipe wall is imparted by the highly pressurized inner gas at the beginning of the fracture. The effect of the inertia force is observed by comparing the final deformed shapes at 19 ms in Figs. 11 and 12. In the case of the 15 MPa initial inner gas pressure in Fig. 11, the pipe wall has a larger inertia force and deforms more than in the case of the 10 MPa initial inner gas pressure in Fig. 12.
Conclusions
In this paper, the natural gas decompression behavior in high-pressure pipelines is analyzed by a one-dimensional theoretical model, a one-dimensional numerical simulation and a two-dimensional fluid-structure interaction simulation. The conclusions are as follows.
1) By combining two methods, the "British Gas Theoretical Model of Rich Gas Decompression" and the "BWRS Equation of State", the authors successfully developed a computational program which can calculate dual-phase decompression curves for any kind of natural gas. In this calculation with the one-dimensional theoretical model, the "plateau" in the dual-phase decompression curve has been confirmed. This plateau is caused by the discontinuity in the acoustic velocity of the fluid at the two-phase boundary.
2) The authors also numerically simulated the natural gas decompression behavior in pipelines. In the case of the full-bore opening approximation, reasonable agreement between the theoretical calculation and the numerical simulation is obtained.
3) The numerical simulation of the decompression behavior through an orifice was carried out. The numerical results of the orifice model show that a small open area ratio results in a high pressure, as expected. Every 25 % decrease in the open area ratio brings about a 25 to 30 % increase in the pressure at the pipe end.
4) A two-dimensional fluid-structure interaction simulation of the pipe fracture and inner gas decompression was carried out. In this simulation, the pipe wall continues to deform even after the inner pressure becomes equal to the outer one. This indicates that the pipe wall is deformed mainly by the initial inertia force of the pipe wall itself, which is imparted by the high-pressure inner gas at the beginning of the fracture, and not by the gas pressure distribution during the fracture process.
5) The two-dimensional fluid-structure interaction simulation shows that the time span of the initiation process before the flaps have opened fully is only about 20 ms, and this time span agrees with past experimental observations. This indicates that the effect of the fracture initiation process is negligible in predicting the natural gas decompression behavior of pipelines, and the full-bore opening assumption in the one-dimensional simulation can be regarded as reasonable.
Prevalence of enamel defects and associated risk factors in both dentitions in preterm and full term born children
Objectives The aim of this study was to evaluate the prevalence of enamel defects and their risk factors in the primary and permanent dentitions of prematurely born children and full-term born children born at the Regional Hospital of Asa Sul, Brasília, DF, Brazil. Material and Methods Eighty 5-10-year-old children of both genders were examined: 40 born prematurely (G1) and 40 born full term (G2). The demographic variables, medical history and oral health behaviors were retrieved using a questionnaire, and data obtained from the clinical examination were recorded. The teeth were examined, and the presence of enamel defects was diagnosed according to the DDE Index and registered in odontograms. Subsequently, the defects were categorized into four groups according to one of the criteria proposed in 1992 by the FDI Commission on Oral Health, Research and Epidemiology. Kruskal-Wallis, Chi-square, Kappa, Mann-Whitney tests and logistic regression were used for statistical analysis. Results 75% of the total sample had enamel defects. There was a higher prevalence of enamel hypoplasia in G1 (p<0.001). There was a significant relationship between low birth weight and the presence of enamel defects in G1 in the primary dentition. The logistic regression model showed that the other risk factors, such as monthly per capita family income, educational level, dietary and hygiene habits, fluoride exposure, trauma, and diseases, were not associated with enamel defects and caries. Conclusions Pre-term birth can be a predisposing factor for enamel hypoplasia in the primary dentition.
INTRODUCTION
According to the World Health Organization (WHO) 21 , a newborn of less than 37 weeks gestation or born within fewer than 259 days after the last menstrual period is considered premature or preterm. Prematurity can be classified as mild, when the baby is born between the 35 th and 36 th weeks of gestation; moderate, if the birth occurs between the 31 st and 34 th weeks; or extreme, if the gestational age is less than or equal to 30 weeks.
The birth of pre-term and/or low birth weight newborns represents a public health problem with increased economic, social, family, and individual costs 10 . Preventive and health promoting measures become necessary in order to improve the quality of life of these children. As such, knowing the risk factors to which they are subject is of fundamental importance for the adoption of such measures.
Low gestational age and low birth weight are the main factors that determine the incidence of neonatal complications. Among the most frequent complications are neonatal rickets, hypocalcemia, perinatal anoxia, anemia, infections, and metabolic, renal, respiratory, cardiovascular and hematological diseases. In these circumstances, various drug therapies and, frequently, orotracheal or laryngoscopic intubation to overcome respiratory difficulties are necessary. These pathologies, whether or not associated with ventilatory support, may cause anomalies in the oral structures of these babies. In a previous study, low birth weight preterm infants presented a higher prevalence of hypoplasia than normal birth weight controls, and the primary teeth most affected by hypoplasia were the maxillary incisors 7 .
Among the most prevalent oral alterations in these children are hypoplasias and opacities of the dental enamel 6,8 . The formation of these defects is related to disorders during amelogenesis 6 . The period during which the insult to the ameloblasts occurs is very important for the location and the appearance of enamel defects. The final enamel shows a record of injuries received during the development of the teeth, which may appear as hypoplasia of the enamel, diffuse or marked opacity, and hypomineralization of the enamel 20 . Enamel hypoplasia is the most common of the changes in human tooth development, with relevant clinical implications due to esthetic reasons, the symptoms involved, susceptibility to caries, and also the difficulty of treatment in many cases. This condition occurs as a direct result of disorders of the metabolism of the ameloblast layers of the enamel organ 19 .
Hypoplastic areas are reported as being highly susceptible to dental caries 3,9,15 because ultrastructural analysis has shown that they have enamel with less mineralization, more porosity and irregular surfaces, allowing the accumulation of bacteria and favoring the development of colonies of Streptococcus mutans, resulting in carious lesions 3,11,13 .
Knowing that children born prematurely present alterations in the dental structures, and that these can be related to the development of carious lesions, we must focus our attention on certain risk groups within which they fall. Given this evidence, this study aimed at evaluating the prevalence of enamel defects in the dentition of children born prematurely compared with those born full-term, and the main risk factors associated with those defects.
Sample
This was a cross-sectional study involving children born in the Regional Hospital of Asa Sul -HRAS-DF, a public hospital from the Brazilian Unified National Health System (SUS), considered to be a reference for high-risk births.
The sample for this study comprised 80 children of both genders, born in this hospital between 2000 and 2004, between 5 and 10 years of age, with birth weights that ranged from 605 g to 4300 g. The children were taken from a case-control study done in 2004, aimed at comparing the oral alterations present in 96 premature babies and 96 full-term babies born in this maternity hospital, making a total of 192 children. In calculating the initial sample (1st study), a confidence interval of 95%, with a margin of error of plus or minus 10% 8 , was considered. Of the children analyzed in the previous study, 98 were selected because their charts had been updated with address and telephone number. Of these, 18 chose not to participate in the research. To calculate this sample, we took into account a 10 to 15% annual loss, which can be expected in longitudinal epidemiological studies. This study was completed with 80 children.
Children born before the 37th gestational week were classified as premature, according to the WHO criteria 21 . In this sample, 40 children were premature and 40 children were full-term. The children were categorized according to birth weight as: very low weight (less than 1500 g, less than 30 weeks); low weight (between 1500 and 2500 g, between the 31st and 34th weeks); and normal weight (above 2500 g and >38 weeks) 17 .
Information about the birth conditions, such as number of weeks and birth weight as well as medical history of the mother and child, were taken from the registration forms of the first study, whose data were obtained from the hospital medical charts.
Prior to the beginning of this study, the mothers received all the explanations concerning the monitoring and dental evaluations of the children, and those who wished to participate signed an informed consent form and filled out an individual questionnaire, which had been tested in the pilot study.
Use of the questionnaire
The questionnaire was applied as an interview, incorporating both open and closed questions, by a previously trained interviewer. This technique was chosen as it has been shown to be an accurate research instrument for the investigation of health habits and conditions; in these cases, the results obtained have been shown to be more reliable than those obtained through self-completed forms 16 .
The questionnaire contained three parts:
General information
This included name, age, gender, date of birth, family income, place of residence, and parents' occupations.
Medical history
Information regarding probable risk factors for opacity and hypoplasia was collected, such as systemic infectious diseases and rashes occurring in the first 3 years of life (pneumonia, tonsillitis, ear infections, chickenpox, rubella, measles), history of injury, and low birth weight.
In relation to the categorization of family socioeconomic status, the standard variables defined by the National Research of Domestic Samples (PNAD), which is carried out annually and portrays the socioeconomic situation of the Brazilian population, were considered. The variables defined by the PNAD include: family (group of persons connected by parentage, domestic dependency or norms of familiarity, who reside in the same household); literate persons (those who are capable of reading and writing at least a simple message in the familiar language); level or number of years of formal education (school grade and level -elementary, secondary, or college -attended, considering the last grade completed successfully); and monthly per capita family income (monthly family income divided by the number of people in the family). For the determination of income, the value of the Brazilian national minimum wage in force during the months referenced in this study was considered -R$465.00. Starting with this last indicator, families were initially classified into three groups: below poverty level (per capita household income less than ¼ of the minimum wage), at poverty level (per capita household income between ¼ and ½ minimum wage), and above ½ minimum wage per capita. Subsequently, these variables were categorized into two groups: up to ½ minimum wage and above ½ minimum wage, because only 6 and 5 children of G1 and G2, respectively, had income less than ¼ minimum wage.
Clinical exam
This procedure was done at the Dental Clinic of the Catholic University of Brasilia, under artificial light, using dental mirror number 3 and properly sterilized gauze for cleaning. The patient was seated in the dental chair and submitted to prophylaxis with a Robson electric toothbrush and prophylactic paste manufactured by Vigodent. Afterward, the dental surfaces were dried with streams of air.
The dental surfaces were examined to verify the presence of structural defects in the enamel. The tooth was considered present when any part of the clinical crown was exposed in the oral cavity.
When there was doubt about the presence of a defect, the tooth was considered normal. Defects measuring less than one millimeter in diameter were excluded 9,12,14 .
All evaluations were performed by the same examiner, who was trained in the pilot study. To establish the degree of intra-examiner agreement, the Kappa index was calculated for 10% of the sample, as suggested by the WHO 21 . Re-examination was done 1 week after the initial examination (Kappa=0.90 for opacities, hypoplasias and white spots).
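As an illustration of the agreement statistic used here, the snippet below computes Cohen's kappa for a re-examined subsample; the tooth-level codes are invented for the example and are not data from this study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical defect codes recorded for the same teeth at the initial
# examination and at the re-examination one week later
# (0 = normal, 1 = opacity, 2 = hypoplasia); values are illustrative only.
first_exam  = [0, 1, 1, 0, 2, 0, 1, 0, 2, 1, 0, 0, 1, 2, 0]
second_exam = [0, 1, 1, 0, 2, 0, 1, 0, 2, 1, 0, 1, 1, 2, 0]

kappa = cohen_kappa_score(first_exam, second_exam)
print(f"intra-examiner kappa = {kappa:.2f}")
```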
Codes for the diagnosis of enamel defects were used according to the criteria of the Index of Developmental Defects of Dental Enamel (DDE Index) 6 .
Teeth were examined by quadrants for hypoplasia and opacities of the enamel in the following order: right maxillary, left maxillary, right mandibular, and left mandibular. The number of teeth with defects was registered, as well as the total number of primary and erupted permanent teeth.
Hypoplasia of the enamel was defined as a break in the continuity of the enamel with a reduction in the layers, creating grooves or depressions. Opacity was defined as a change in the translucence of the enamel, or a qualitative defect that could vary in color from white to yellow and brown. The extent, type and color of the defect were registered. When hypoplasia and opacity were found in the same tooth, the defect was registered as hypoplasia 8 .
These data were presented in relation to the number of children who showed defects in the enamel, and in relation to the number of affected teeth per child. Subsequently, the defects were categorized into four groups: 1-marked opacity (codes 1 and 2), 2-diffuse opacity (codes 3 to 6), 3-hypoplasia (codes 7 and 8), and 4-other defects (code 9), according to one of the criteria proposed in 1992 by the FDI Commission on Oral Health, Research and Epidemiology.
Analysis of data
The Mann-Whitney non-parametric test was used to compare the results of the scores of opacity and hypoplasia. A significance level of 5% was used for analytical purposes.
A multivariate logistic regression model was fitted in order to examine possible risk factors associated with the occurrence of opacity and hypoplasia. Occurrence of opacity (Yes=1, No=0) and occurrence of hypoplasia (Yes=1, No=0) were used as the dependent variables. Type of birth (Premature=1, Full-term=0) was considered as the main risk factor, and the following variables were considered as potential confounding factors: mother's education level (<8 years of school=1, ≥8 years of school=0), monthly per capita family income (<0.5 minimum wage=1, ≥0.5 minimum wage=0), dental trauma (had trauma=1, did not have trauma=0), infectious diseases (Yes=1, No=0), and rashes (Yes=1, No=0).
Initially, a bivariate analysis was carried out between the dependent variable and all the independent (confounding) variables, with the aim of identifying those which could be related to the dependent variable. In this phase, significance was considered when the p-value was less than 0.25. Subsequently, a multivariate analysis was carried out only with the variables selected in the bivariate phase, plus type of birth as the main independent risk factor. In this phase a significance level of 5% was used.
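A minimal sketch of this two-stage strategy, using statsmodels on a synthetic data frame, is given below; the variable names mirror the coding described above, but the data and any resulting estimates are entirely artificial.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80

# Synthetic data coded as in the text (1 = exposed / present, 0 = otherwise).
df = pd.DataFrame({
    "premature":  rng.integers(0, 2, n),
    "low_educ":   rng.integers(0, 2, n),
    "low_income": rng.integers(0, 2, n),
    "trauma":     rng.integers(0, 2, n),
    "infection":  rng.integers(0, 2, n),
    "rash":       rng.integers(0, 2, n),
})
# Artificial outcome loosely driven by prematurity, for illustration only.
logit_p = -2.0 + 1.8 * df["premature"]
df["hypoplasia"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

confounders = ["low_educ", "low_income", "trauma", "infection", "rash"]

# Stage 1: bivariate screening, keeping confounders with p < 0.25.
kept = []
for var in confounders:
    fit = sm.Logit(df["hypoplasia"], sm.add_constant(df[[var]])).fit(disp=0)
    if fit.pvalues[var] < 0.25:
        kept.append(var)

# Stage 2: multivariate model with the screened confounders plus type of birth.
X = sm.add_constant(df[["premature"] + kept])
final = sm.Logit(df["hypoplasia"], X).fit(disp=0)
print(final.summary2())
print("odds ratios:\n", np.exp(final.params))
```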
RESULTS
The ages of the children ranged from 5 to 10 years, with a mean of 6.3 years for the premature group and 7.6 years for the full-term group, with a significant difference between the groups (p=0.0003). The mean birth weight was 1255 g for premature children and 3398 g for full-term children.
Considering the 80 children examined, approximately 72.5% of the sample presented at least one tooth with an enamel defect. According to the number of teeth affected, the groups presented the following mean scores for the two types of defects: opacity, G1 - 2.3 and G2 - 2.8 teeth (p=0.8624); and hypoplasia, G1 - 1.1 and G2 - 0.1 teeth (p=0.0011).
No significant difference was found in the prevalence of enamel defects when children from G1 and G2 were divided by gender, with values of p=0.5712 for opacity and p=1.000 for hypoplasia.
The percentage of children in G1 and G2 who presented opacity and hypoplasia is presented in Table 1. The percentage of primary and permanent teeth with opacity and hypoplasia in G1 and G2 is detailed in Tables 2 and 3, respectively.
There was a statistically significant difference between the groups for marked opacity (p=0.0174) and hypoplasia (p<0.001) in primary teeth (Figure 1). However, there was no significant difference for any type of defect in permanent teeth (Figure 2). Controlling for the effects of the variables that could be considered probable risk factors for the occurrence of enamel defects, namely mother's level of education (p=0.5777), per capita income (p=0.2442), trauma (p=0.9784), infectious diseases (p=0.4906), and rashes (p=0.9571), the type of birth was not a risk factor for the occurrence of opacity (p=0.8161), with an odds ratio of 1.11 (0.45 to 2.77). However, the type of birth was a risk factor for the occurrence of hypoplasia (p=0.0034), with an odds ratio of 7.4 (1.94 to 28.25).
DISCUSSION
In Brazil, developmental defects of enamel are not studied enough, although they result in esthetic problems and dental sensitivity and are predisposing factors for dental caries. Prematurity has been described as one of the causes of the appearance of enamel defects 6,8 . The etiological factors for these problems are multiple, and they range from the conception of the baby through the first years of life, making it difficult to demonstrate the association between the variables. The scarcity of studies that evaluate the permanent dentition reflects the difficulty in following premature children until the change of dentition.
Comparisons of the results of the present study, characterized as a cross-sectional study of specific samples, with other studies available in the literature and discussed below, must be done with caution owing to the differences in sample delineation, environmental influences, and different methodologies.
This study evaluated information ranging from medical records of the pregnancy to behavioral and social factors of the children and their families which could impact their oral health. The maternal disorders registered as the principal causal factors for prematurity were: hypertension (42%), preeclampsia (14.5%), premature labor (12.5%), anemia (8%), placental abruption (5%), gestational diabetes (5%), and cardiopathy (4%). In addition to these complications, 5 births were twins and 1 birth was a pregnancy with triplets 8 .
In premature children, the most frequent neonatal complications were respiratory distress, hyaline membrane disease, jaundice, pneumonia, osteopenia of prematurity, anemia, and non-specific infections. Various drug therapies were necessary, among them antibiotic therapy, ventilatory assistance, parenteral nutrition, and prescription vitamin supplements containing iron and calcium. The need for these procedures was directly related to the seriousness of the general state of health of the child.
The systemic diseases in the first 3 years of the children's life, as reported by the parents or guardians, were pneumonia, tonsillitis, ear infections, chickenpox, rubella, and measles. This period coincides with the time of mineralization of the permanent teeth. In this study, there was no significant association of infectious diseases (OR=1.45, CI=0.50-4.22) or rashes (OR=0.96, CI=0.34-2.79) with the presence of structural defects in the enamel. However, we must not disregard the possibility of biased information. These results disagree with Chaves, et al. 2 (2007), who found an association between post-natal infectious episodes and the prevalence of defects in the enamel (OR=2.89, CI=0.84-10.03). The children in group 1 (premature) and group 2 (full-term) presented means of 5 and 10 erupted permanent teeth, respectively, which should take into account the difference in mean age between the groups. These findings support the study by Seow 18 (1996), in which differences between dental age and chronological age were found when children from 6 to 8 years of age, born with low birth weight, were compared with children of normal birth weight (p<0.001). However, when children at 9 years of age were analyzed, there was no longer a difference between the groups.
In the present study, approximately 72.5% of the sample presented at least one tooth with an enamel defect. These findings agree with the study by Chaves, et al. 2 (2007), who found a prevalence of enamel defects in 78.9% of the population studied. Oliveira, et al. 14 (2006) found a prevalence of enamel defects in approximately 50% of the sample.
Lunardelli and Peres 12 (2005) obtained a result of 24.4% of children with opacity and hypoplasia. However, there may have been an underestimation in the diagnoses of these patients, due to the absence of prior prophylaxis and drying of the examined surfaces.
No significant difference was found in the prevalence of enamel defects when children from G1 and G2 were divided by gender. These results coincide with those cited by Chaves, et al. 2 (2007) and Oliveira, et al. 14 (2006) who also found no difference.
In relation to the number of children from groups G1 and G2 who presented structural enamel defects in both dentitions, the difference was significant only for hypoplasia (Table 1). These results are supported by previous works 1,4,5,8,11 , which also found significant results for hypoplasia alone. From these findings, prematurity associated with low birth weight can be considered a risk factor for disorders in enamel mineralization.
In this study, the distribution of enamel defects according to dental groups revealed that they appeared most frequently in molars (9.8%), followed by canines (9.7%) and incisors (8.7%). These results coincide with the study by Lunardelli and Peres 12 (2005). For the primary dentition, the groups of teeth most affected by hypoplasia were, in descending order: incisors (8.0%), canines (3.6%) and molars (2.3%). These findings coincide with the first study, done in 2004 with the same sample 8 . In the present study, all primary incisors that presented hypoplasia belonged to premature children (Table 2). The trauma from intubation and the calcium deficiency associated with the period of mineralization of these teeth may have triggered the development of these defects 4,8,18 .
When opacity was the defect analyzed in the primary teeth, an inversion of prevalence in relation to hypoplasia occurred: molars (7.5%), canines (6.1%) and incisors (0.7%) (Table 2). In the first study, the teeth most affected were, in descending order: canines, molars and incisors. The inversion in the percentage of affected canines and molars may have been due to the greater number of teeth present in the current examination, because all the children examined had the 8 primary molars exposed, in contrast to the previous study 8 , in which they presented an average of 2.7 molars. The greater prevalence of this defect in the molars and canines may be explained by the mineralization chronology of these teeth, which occurs at approximately the 9th month of pregnancy. In children born prematurely who presented teeth with defects, the process of amelogenesis may have been interrupted or temporarily impaired. As a consequence, an enamel defect dependent upon the stage of amelogenesis occurring in the tooth at that moment would appear.
After categorizing the variables into the three types of defects, marked opacity appeared to be the most prevalent defect, occurring in 43% of the total sample. This was followed by hypoplasia, which occurred in 22.5% of the sample. The children in G1 presented 43 primary teeth with marked opacity (7.0%) and 42 with hypoplasia (6.0%). Those in G2 presented 19 (3.7%) and 4 (0.8%) teeth with those defects, respectively (Figure 1). These findings disagree with those of Chaves, et al. 2 (2007) and Oliveira, et al. 14 (2006), who found diffuse opacity to be the most frequent defect, followed by marked opacity and hypoplasia. Lunardelli and Peres 12 (2005), analyzing a sample of 431 public preschool children, found a prevalence of 17.9% of diffuse opacity, 11.1% of hypoplasia and 6.1% of marked opacity. Many factors must be considered together in order to explain the occurrence of these defects, without discarding the possibility that diffuse opacity is due to inappropriate fluoride intake, causing fluorosis in the examined teeth. In relation to the permanent teeth, the dental groups most affected by enamel defects were, in descending order: molars (25.5%), incisors (21.4%) and canines (13.6%). These findings support the study by Seow 18 (1996), in which the most affected teeth were the molars, followed by the incisors. As the mineralization of the permanent teeth begins at birth, or some months after a premature delivery, it may be hypothesized that persistent systemic problems lead to malformations in the enamel following birth.
Analyzing the permanent teeth affected by structural defects between the two groups, the premature children (G1) presented 43 teeth with opacity (20%) and 2 teeth with hypoplasia (0.9%), while in G2 there were 86 teeth with opacity (22.7%) and 2 teeth with hypoplasia (0.5%). The number of permanent teeth with opacity and hypoplasia between the two groups was not statistically different ( Figure 2). These results disagreed with those of Seow 18 (1996), who found 17% opacity and 3% hypoplasia in the group of very low birth weight, and 8% opacity and 3% hypoplasia in the group of normal birth weight, there being a statistically significant difference in the total prevalence of defects between the groups (p<0.02).
Only 3 permanent incisors presented hypoplasia in the total sample, and all had a history of trauma in the homologous primary teeth.
It was observed, through multivariate analysis, that the type of birth was a risk factor for the occurrence of hypoplasia (p=0.0034) in the primary dentition. Premature children had 7.4 times greater odds of presenting hypoplasia than full-term children, with a 95% confidence interval ranging from 1.93 to 28.25.
It is very difficult to distinguish the etiological factors for enamel alterations, because pre-, neo-, and post-natal complications may occur together and interact with one another. Enamel defects result from severe cumulative events associated with external factors such as social conditions, nutritional state, and lifestyle, which must be considered because they have a strong impact on health and repercussions on the quality of life of the individual 3 .
CONCLUSIONS
According to the methodology and conditions of this study, it can be concluded that there was a high prevalence of enamel defects in the total sample. The children born prematurely presented a greater prevalence of hypoplasia in the primary dentition when compared with the children born full-term. In the permanent dentition, there was no significant difference between the groups. The variables of income, level of formal education, trauma and infectious diseases did not correlate with the enamel defects in either group. Low birth weight can be considered a risk factor for marked opacity and hypoplasia of the enamel only in the primary dentition.
A discrete Morse perspective on knot projections and a generalised clock theorem
We obtain a simple and complete characterisation of which matchings on the Tait graph of a knot diagram induce a discrete Morse function (dMf) on $S^2$, extending a construction due to Cohen. We show these dMfs are in bijection with certain rooted spanning forests in the Tait graph. We use this to count the number of such dMfs with a closed formula involving the graph Laplacian. We then simultaneously generalise Kauffman's Clock Theorem and Kenyon-Propp-Wilson's correspondence in two different directions; we first prove that the image of the correspondence induces a bijection on perfect dMfs, then we show that all perfect matchings, subject to an admissibility condition, are related by a finite sequence of click and clock moves. Finally, we study and compare the matching and discrete Morse complexes associated to the Tait graph, in terms of partial Kauffman states, and provide some computations.
Introduction
Given a graph G embedded in the 2-sphere, denote by G * its plane dual, and by Γ(G) the plane graph obtained by overlaying the two graphs. The vertices of Γ(G) are divided into those coming from the vertices of the two original graphs and those arising as the intersection of dual edges. Fix a pair of vertices v and f , one in G and one in G * . A celebrated result by Kenyon, Propp and Wilson [18,Theorem 1], known as the KPW correspondence, provides a map between spanning trees in G and perfect matchings on the graph obtained from Γ(G) by removing v and f . This map is shown to be a bijection if v and f are "adjacent", meaning that they are opposite vertices in a square face of Γ(G). Otherwise, the map is only injective.
With a somewhat different point of view, in his Formal Knot Theory [16] Kauffman introduced what are now known as Kauffman states; these are a special kind of bijections between the set of crossings of a knot diagram and its regions, after two adjacent "forbidden" regions are excluded. In his famous Clock Theorem, he then proved that all Kauffman states are related by a finite sequence of simple swaps, known as clock moves. Every knot diagram D can be chequerboard coloured, thus producing two dual graphs, having as vertices the white (respectively black) regions, and crossings as edges.
Loosely speaking, Kauffman states are in bijection with spanning trees in the black graph, as well as with perfect matchings in the (Tait) graph Γ(D) given by overlaying the two coloured graphs (see e.g. [10]). In particular, the KPW correspondence reduces to Kauffman's bijection for the black graphs with the forbidden regions as adjacent ones.
More recently, Cohen [9] showed how to associate a discrete Morse function -as defined by Forman [12]-to a Kauffman state on a given knot diagram. More precisely, rather than an actual discrete Morse function, his result yields an equivalence class of such objects, which can be thought of as being perfect matchings on the balanced Tait graph (see Section 2 for the definition) of the diagram. One of the aims of this paper is to extend Cohen's work bridging these graph theoretic properties and ideas stemming from knot theory. We tried to keep the exposition as self contained as possible, as well as accessible to people with graph or knot theoretic backgrounds.
Our starting point is to completely characterise the set of possible dMfs arising from a generalised version of Cohen's construction (Theorem 4.3). This will allow us (Theorem 4.6) to prove the existence of a bijection between dMfs arising this way and rooted spanning orthogonal forests in the black and white graphs of the diagram, induced by a generalised version of Kauffman states, called partial Kauffman states.
We use this bijection to first count perfect dMfs in Proposition 5.1, then -using a symbolic version of the graph Laplacian for the two coloured graphs-we give a formula (Proposition 5.2) to count all dMfs, and compute them for a simple infinite family of diagrams in terms of Fibonacci numbers.
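To illustrate the Laplacian count in the simplest special case, the snippet below applies Kirchhoff's matrix-tree theorem to the cycle graph C_5, which is the black graph of the standard diagram of the (2,5) torus knot from the family discussed later in the paper; this is only a sanity check of the classical spanning-tree count, not an implementation of the formula of Proposition 5.2.

```python
import networkx as nx
import numpy as np

def count_spanning_trees(G):
    """Kirchhoff's matrix-tree theorem: any cofactor of the graph Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    minor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
    return round(np.linalg.det(minor))

# Black graph of the standard (2,5) torus knot diagram: a cycle of length 5.
G_black = nx.cycle_graph(5)
print(count_spanning_trees(G_black))   # 5, the number of Kauffman states of that diagram
```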
We then turn to perfect admissible matchings on Γ(D), where admissible just means that exactly one vertex of each colour is unmatched.
We introduce a set of two moves, called click path and click loop moves; we first prove (Theorem 6.3) that click path moves induce bijections between the sets of perfect dMfs on S 2 with different critical points. This implies immediately that the image of the KPW correspondence only consists of perfect dMfs, regardless of the adjacency condition; furthermore (Corollary 6.4) every perfect dMf arises uniquely as the image of precisely one choice of spanning tree and one vertex of each colour.
We then prove a topological characterisation (Theorem 6.7) of the subgraphs induced by perfect admissible matchings on the black and white graph, as well as a combinatorial one (Theorem 6.8) in terms of Jordan resolutions.
With these results at hand we can state our main result, which is a simultaneous further generalisation of the Clock theorem and KPW's correspondence: Theorem 1.1 (Click-Clock). If the knot diagram D is reduced, any two perfect admissible matchings on Γ(D) are related by a finite sequence of click path, click loop and clock moves.
One easy consequence of this result is that two perfect dMfs induced by Cohen's construction can be transformed into one another by clock and click path moves -or only clock moves if they share the same critical points (this last part is the original Clock Theorem).
The proof of the Click-Clock theorem is surprisingly not straightforward, and relies on several seemingly unrelated graph-theoretic constructions. It is also worth pointing out that this generalisation of the Clock Theorem is independent of others that have appeared in the literature, such as Hine and Kálmán's [14] version for triangulated surfaces, and Zibrowius' extension to tangles [24].
Finally we introduce two simplicial complexes associated to the set of partial Kauffman states, the matching and discrete Morse complexes. We study some of the properties of these mysterious complexes, and provide some sample computer aided computations of their homologies.
It remains an open question how to extract more information from these complexes associated to Γ(D) and whether this information can be related to interesting features of the knot.
Acknowledgements. DC is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 674978). NY is supported by The Alan
Turing Institute under the EPSRC grant EP/N510129/1. The authors would like to thank Vidit Nanda and Agnese Barbensi for their useful feedback and support, and the anonymous referee for their helpful comments.
Knot diagrams and Kauffman states
Consider a knot K ⊂ S 3 and a diagram D for K; as this will not create confusion we will also denote by D the projection of the diagram, i.e. the 4-valent graph 1 on S 2 obtained by disregarding the over/under information at all of the crossings. Choose an arc a on D -that is, an edge in the projection-and call both a and the two regions of S 2 \ D that have a as an edge forbidden. The pair (D, a) is usually called a marked diagram for K.
Definition 2.1 ( [16]). A Kauffman state on (D, a) is a bijection between the crossings of D and the non-forbidden regions of S 2 \ D, such that each crossing c is assigned to one of the (at most 4) non-forbidden regions that are incident to c.
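As a concrete example, the short script below enumerates the Kauffman states of the standard trefoil diagram by brute force; the crossing-to-region incidences are our own encoding of that diagram (an outer region, a central region and three "petals", with the outer region and one petal taken as the forbidden pair), not notation used elsewhere in this paper.

```python
from itertools import product

# Standard 3-crossing trefoil diagram: regions are 'outer', 'central' and three
# petals p1, p2, p3; crossing ci is incident to outer, central, p_i and p_{i+1}.
incident = {
    "c1": {"outer", "central", "p1", "p2"},
    "c2": {"outer", "central", "p2", "p3"},
    "c3": {"outer", "central", "p3", "p1"},
}
forbidden = {"outer", "p1"}     # two adjacent regions sharing the marked arc a

crossings = sorted(incident)
choices = [sorted(incident[c] - forbidden) for c in crossings]

# A Kauffman state assigns to each crossing one of its incident non-forbidden
# regions, injectively (hence bijectively, since the counts match).
states = [dict(zip(crossings, pick))
          for pick in product(*choices)
          if len(set(pick)) == len(crossings)]

print(len(states))      # 3, the determinant of the trefoil
for s in states:
    print(s)
```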
A common way of representing a Kauffman state on (D, a) is to mark the edge a, and to specify the region assigned to a given crossing by placing a dot near it. It was proved by Kauffman in [16] that every pair of Kauffman states on (D, a) is related by a finite sequence of clock moves, shown in Figure 2. So, given a marked diagram (D, a), one can construct the clock graph whose vertices are the Kauffman states X(D, a), and whose edges are given by clock moves (see Figure 4). Clock graphs have been studied previously in [1], [11] and [10], for example. The next objects we introduce have also been studied previously, in [25] and [10], for example.

Definition 2.2. Let D be a knot projection with a black and white chequerboard colouring 2 . The black graph G b (D) has black regions as vertices and crossings as edges (the white graph G w (D) is defined analogously) 3 . The overlaid Tait graph Γ(D) is the superimposition of G b and G w , using their natural embeddings in S 2 .
Note that the graphs G b and G w are plane duals, Γ(D) divides the sphere into square regions, and edges in Γ(D) are the half edges of the white and black graphs, connecting one of their vertices to a crossing (see e.g. [16] for a more detailed exposition). If one discards the forbidden vertices (one from G w and one from G b ), we obtain the balanced overlaid Tait graph Γ(D, a). The vertices of this tri-partite graph can be split into those coming from black or white regions, and those corresponding to crossings.
In the figures, we will draw black/white regions as black/white dots respectively, and vertices coming from crossings as little crosses, as in Figure 4.

Definition 2.3 ([16]). The Jordan trail associated to a Kauffman state x ∈ X(D, a) is the curve formed from x by resolving all crossings as indicated in Figure 5.
We will see later on that Jordan trails associated to Kauffman states are always connected (see also Figure 7). We will expand this definition in Section 4 to include the cases where not all crossings are paired with a region (as in Figure 10). We recall here some graph-theoretic terminology that will be widely used throughout the rest of the paper.

Definition 2.4. A matching on a graph G is a subset of edges in G sharing no common vertices. A matching is perfect if every vertex in G is incident to an edge in the matching, and it is maximal if it is not a proper subset of another matching. A maximum matching is a matching containing the largest number of edges.
We write T ree(G) to denote the set of all spanning trees in G. Let H be a subgraph of G, with E(G) = {e 1 , . . . , e m } and E(H) = {e i } i∈I for some I ⊆ {1, . . . , m}. A graph H ⊥ in the dual graph G * orthogonal to H is any subgraph of G * whose edges form a subset of {e * i } i∉I , the duals of the edges not in H. It is a well-known result that the dual H ⊥ of a spanning tree H is itself a spanning tree in the dual graph, and that H ⊥ is completely determined by H. A forest is just a disjoint union of trees, and we allow isolated vertices to be connected components of a forest.
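Continuing the trefoil example from the sketch after Definition 2.1, the balanced overlaid Tait graph Γ(D, a) can be assembled from the same incidence data and handed to a standard matching routine; the labels are again our own encoding rather than notation from this paper.

```python
import networkx as nx

# Balanced overlaid Tait graph of the trefoil example above: one vertex per
# crossing and per non-forbidden region, and one edge for each half-edge
# joining a crossing to one of its incident non-forbidden regions.
edges = [("c1", "central"), ("c1", "p2"),
         ("c2", "central"), ("c2", "p2"), ("c2", "p3"),
         ("c3", "central"), ("c3", "p3")]
gamma = nx.Graph(edges)

matching = nx.max_weight_matching(gamma, maxcardinality=True)
is_perfect = 2 * len(matching) == gamma.number_of_nodes()
print(matching, "perfect:", is_perfect)   # a perfect matching, i.e. a Kauffman state
```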
By definition, the set of perfect matchings of Γ(D, a) is in bijective correspondence with X(D, a) (see e.g. Figure 6), hence we will use these notions interchangeably throughout. There is also a bijection between Kauffman states on (D, a) and rooted spanning trees in G b (or G w ), after choosing a common root for the trees and their duals (see [16,Sec. 2]). The choice of these roots is dictated by the arc a: given a spanning tree T ⊂ G b , take the unique vertex corresponding to a forbidden region as the root of the tree and orient all edges away from the root (see Figure 7). Let T * ⊂ G w be the tree dual to T , and, again, orient the edges to flow away from the uniquely determined root. Each crossing lies at the centre of an oriented edge u → v, so associate to each crossing the target v of the edge to which it belongs. This will always be a nonforbidden region because the two forbidden regions are, by construction, only ever the source of an edge. There is a bijection between crossings and non-forbidden regions because T and T * are dual and orthogonal, and they span all regions. In a similar way, one can construct a spanning tree from a Kauffman state. We can now state a version of Kauffman's Clock Theorem linking Kauffman states, appropriately rooted trees in the black or white graphs and Jordan trails, suited for our purposes. Here by adjacent we mean that the two corresponding regions share an edge in the projection; this is also equivalent to requiring that v b and v w are opposite vertices in a square face of the Tait graph Γ(D).
Kenyon, Propp and Wilson's main result -recalled below- known as the KPW correspondence (or construction), is equivalent to the first part of Kauffman's Clock theorem, as stated in Section 2, in the case of adjacent unmatched vertices. If instead the unmatched vertices are not at opposite sides 4 of a square in Γ(D), we only get an injection from the rooted spanning trees of the black graph (with the extra choice of a vertex of the white graph) to perfect matchings in the corresponding balanced overlaid Tait graph.
Theorem ([18, Theorem 1]). Let v b ∈ V (G b ) and v w ∈ V (G w ); then if these are adjacent, there is a bijective map from T ree(G b ) to perfect matchings on Γ(D, v b , v w ). If instead the two vertices are not adjacent, the map is an injection.
The constructions and correspondences outlined here will be generalised in several different directions in the following Sections.
We will first extend the correspondence between perfect dMfs and rooted dual trees to non-maximal dMfs and orthogonal rooted forests, then we will see in detail what happens when considering non-adjacent roots in the perfect setting. We will then prove that the image of the KPW correspondence consists only of dMfs, and moreover, the elements in its image are in bijection with the set of perfect dMfs on a cell structure on S 2 induced by the given knot diagram.
What we have covered so far does not seem to be connected to clock moves, and hence to the second part of Kauffman's Clock theorem. We will remedy this in Section 6. There we prove a direct generalisation of his theorem, extended to perfect admissible matchings on Γ(D), after introducing two further moves in addition to the usual clock ones.
Discrete Morse functions
Discrete Morse functions, first introduced in [12] by Forman, are functions assigning a real number to each cell in a regular cell complex, under certain combinatorial conditions. These discrete Morse functions, as the name suggests, can be thought of as being a discrete analogue of the "classical" smooth Morse functions.
Rather than dealing with functions themselves, following [7] we will use suitable equivalence classes, where two such functions are deemed equivalent if they induce the same acyclic partial matching, which we define below. Following the previous analogy, these acyclic matchings should be considered as a discrete analogue of the gradient vector field of a Morse function.
Let X denote a regular cell complex. A partial matching on X is a decomposition of the cells of X into three disjoint sets R, U and M , along with a bijection µ : R → U such that dim(x) = dim (µ(x)) − 1 and x lies in the image of the attaching map of the cell µ(x), for all x ∈ R. Write x < y if both aforementioned conditions hold for two cells x and y. In the simplicial case, this just means that x is a codimension 1 face of µ(x) for all x ∈ R.
The bijection µ is acyclic if the transitive closure 5 of the relation x ≺ x′, defined by x < µ(x′), generates a partial order on R. The cells in M are called critical. A discrete Morse function is perfect if M is minimal. The following result is fundamental in discrete Morse theory.
Theorem 3.1. [12,Cor. 3.5] If µ : R → U is an acyclic partial matching on a regular CW complex X with critical cells M , then X is homotopy-equivalent to a CW complex whose n-dimensional cells correspond bijectively with the n-dimensional cells in M .
In particular, any perfect discrete Morse function on a cell complex on S 2 contains exactly two critical cells: one in dimension 0 and one in dimension 2.
One can visualise discrete Morse functions on a poset graph H(X) (Figure 8), a representation of the poset of cells as a graph, oriented by the incidence relation < defined above.
For cells x < y, there is a downward arrow from y to x. Hence we can associate to a discrete Morse function the partial matching on H(X) composed of the edges (x, µ(x)) for x ∈ R.
Remark 3.1. If X is a cell complex and Σ = {(x, µ(x))} x∈R an acyclic partial matching on X, then for any pair Y = (y, µ(y)) ∈ Σ, the matching Σ \ Y is acyclic. This follows from the fact that since Σ is acyclic, there is a partial order on R, defined by x ≺ x′ if and only if x < µ(x′), and this is still a partial order on R \ Y .

Figure 8. A simplicial complex with an acyclic partial matching given by the red arrows. In the centre, its poset graph and on the right its amended poset graph, with arrows between matched cells reversed. We can see the matching is acyclic as there are no directed cycles in the amended poset graph diagram.
We will see in the next section that the overlaid Tait graph of a knot projection (with a suitable orientation) coincides with the poset graph of a certain cellular structure induced on S 2 . We will sometimes abbreviate 'equivalence class of a discrete Morse function' to 'dMf' when referring to the equivalence class represented by a given acyclic matching.
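The acyclicity condition can be checked mechanically: reverse the arrows between matched cells in the poset graph and look for directed cycles, exactly as in the amended poset graph of Figure 8. The sketch below does this with networkx for an arbitrary poset digraph and matching; the toy complex used at the end (a single filled triangle) is our own example, not the complex of Figure 8.

```python
import networkx as nx

def is_acyclic_matching(poset_digraph, matching):
    """Return True if the partial matching defines a discrete Morse function.

    poset_digraph : nx.DiGraph with an arc y -> x whenever x < y.
    matching      : iterable of pairs (x, mu(x)) with x < mu(x).
    """
    amended = poset_digraph.copy()
    for x, y in matching:              # reverse the arrow between matched cells
        amended.remove_edge(y, x)
        amended.add_edge(x, y)
    return nx.is_directed_acyclic_graph(amended)

# Toy example: a filled triangle with vertices a, b, c.
H = nx.DiGraph([("ab", "a"), ("ab", "b"), ("bc", "b"), ("bc", "c"),
                ("ca", "c"), ("ca", "a"),
                ("abc", "ab"), ("abc", "bc"), ("abc", "ca")])

print(is_acyclic_matching(H, [("a", "ab"), ("b", "bc"), ("ca", "abc")]))  # True
print(is_acyclic_matching(H, [("a", "ab"), ("b", "bc"), ("c", "ca")]))    # False: the reversed
# arrows and the remaining downward ones form a directed cycle around the triangle.
```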
From knots to discrete Morse functions
In this section we outline how to obtain matchings from a knot diagram and present the necessary conditions for such a matching to be acyclic. This is a generalisation of the construction presented by Cohen in [9]. We provide a proof for Cohen's assertion that a matching obtained from a Kauffman state via this construction is indeed a discrete Morse function in Proposition 4.4.
Cohen outlines a correspondence in [9] and in [11, Sec. 8.2] between pairs ((D, a), x) with x ∈ X(D, a) and discrete Morse functions on the 2-sphere with a cell structure determined by Γ(D, a). More explicitly, let us define the map Ξ, sending a knot projection to a cell structure on S 2 , such that Ξ(D) is the cellular structure on S 2 whose 2-cells are given by the vertices of G w , 1-cells by crossings in D, and 0-cells by the vertices of G b .
The unbalanced Tait graph associated to (D, a) is a plane realisation of the abstract poset graph for this cellular structure; more precisely, we also need to orient the edges of Γ(D) from white vertices to crossings, and from crossings to black vertices.
As an example (for one choice of the colouring), the minimal diagrams of the alternating torus knots T 2n+1,2 (see Figure 9) give the splitting of S 2 into two (2n + 1)-gons, attached along their boundary. It follows from the definition that G b (D) provides the 1-skeleton of Ξ(D).
Cohen's main result from [9] is that if we "extend" Ξ to pairs ((D, a), x) with x ∈ X(D, a), then we get a discrete Morse function µ on S 2 with the cellular decomposition Ξ(D). That is, the matchings on D induced by Kauffman states are mapped to acyclic matchings on the cellular structure Ξ(D) on S 2 . A Kauffman state x ∈ X(D, a) induces a matching in H(D) as it is a bijection between crossings and non-forbidden regions (corresponding to crossings and black or white vertices in Γ(D)). It is not immediately obvious that this induced matching is acyclic.
Cohen's construction induces maximum matchings; it was noted in Section 3 that any perfect discrete Morse function on S² has exactly two critical (unmatched) cells. Using discrete Morse functions induced by Kauffman states gives rise to a restricted case of perfect discrete Morse functions, as the only two critical cells are incident to each other.

Figure 9. The minimal projection D_{2n+1} for the alternating torus knots T_{2,2n+1} (for n = 2), together with its black and white graphs. The black graph, a cycle of length 2n + 1, provides the 1-skeleton of the cell structure on S² (which can be thought of as the equator). The two white vertices correspond to the two hemispheres, attached along their boundaries.
We define partial Kauffman states to encompass non-maximum matchings, as well as maximum matchings with non-adjacent critical cells.

Definition 4.1. Let D be a diagram with crossings c_1, . . . , c_n. A partial Kauffman state on the crossings indexed by a subset I ⊂ {1, . . . , n} is an injective assignment of regions of D to the crossings {c_i}_{i∈I}, such that each crossing is paired with exactly one of its four adjacent regions. The set of all partial Kauffman states on D is denoted X(D).
A partial Kauffman state (pKs) on the crossings indexed by I will often be represented by placing one dot, called a component of the pKs, near each crossing in {c_i}_{i∈I}, in one of the four regions incident to it. In the same way, if v_b ∈ V(G_b) and v_w ∈ V(G_w), we can define X(D, v_b, v_w) as the set of pKs that do not have any component in the regions v_b and v_w. If, moreover, these two regions are adjacent, we will shorten the notation to X(D, a), where a is the unique arc separating v_b and v_w.
A partial Kauffman state x is admissible if there is at least one unpaired region of each colour. x is maximal if it injects from the entire set of crossings of the diagram D.
Definition 4.2. The Jordan resolution J(x) of a partial Kauffman state x ∈ X(D) is the curve formed by resolving all dotted crossings as shown in Figure 5. It is the analogue of the Jordan trail of a Kauffman state; one example is displayed in Figure 10. Write |J(x)| to denote the number of connected components of the Jordan resolution, which we will call Jordan cycles.
Since x is not necessarily maximal, J(x) contains double points at all crossings not paired in x. Any connected, complete (i.e. all crossings are resolved) Jordan resolution is just a Jordan trail.
We point out en passant that, for a projection with n crossings, there are a priori 2^n possible complete resolutions. However only a small number of these are Jordan resolutions; Figure 11 shows a complete resolution which is not induced by any perfect matching on Γ(D).

Figure 10. Two partial Kauffman states and their Jordan resolutions; on the left, a connected partial resolution and on the right, a (disconnected) Jordan resolution. The pKs on the right is perfect but has non-adjacent unmatched regions; further it is not admissible.

Figure 11. A complete resolution not induced by a perfect matching. In particular, there is no pKs inducing this resolution. Note that the circles are not concentric on S² (cf. Proposition 6.8).
We say a simple loop γ in Γ(D) is monochromatic if its vertices consist only of crossings and black regions, or only of crossings and white regions. We say that a matching x ∈ X(D) supports the monochromatic loop γ if the edges of γ are alternately edges of x. One example is shown in Figure 12. The next result is the main tool for checking whether a matching induced by a partial Kauffman state is a discrete Morse function or not.

Theorem 4.3. The matching induced by a partial Kauffman state x ∈ X(D) is acyclic if and only if x does not support any monochromatic loop.

Proof. It is easy to see that if x supports a monochromatic loop, then its components on the loop give rise to an oriented cycle in the poset/Tait graph Γ(D), as in Figure 12. Conversely, if x does not support any monochromatic loop, then all possible oriented loops in Γ(D) must contain vertices of both colours. Hence, in each directed cycle in Γ(D) we can find a portion of Γ(D) that looks like one of the two configurations in Figure 13. The top configuration can occur if and only if both regions have been matched with the crossing between them, and similarly for the bottom one; in either case some crossing would be paired with more than one region, contradicting the definition of a partial Kauffman state. Hence no directed cycle can exist.

The next result is an extended version of the construction given by Cohen in [9].

Proposition 4.4. For any Kauffman state x ∈ X(D, a), the induced matching on Ξ(D) is acyclic, i.e. it defines a perfect discrete Morse function.

Proof. By Theorem 4.3, it is sufficient to show that a maximal partial Kauffman state x with adjacent unpaired vertices does not admit a monochromatic loop on Γ(D, a), where a is the unique arc common to the two unpaired regions. Assume, for a contradiction, that x contains a monochromatic loop γ. The loop divides S², and consequently the vertices of Γ(D, a), into three parts: those belonging to γ, and two other connected components R and R′. We can assume without loss of generality that the two forbidden regions are contained in R, as they are adjacent and so cannot be separated by γ. The thesis will then follow from the Lemma below.

Lemma 4.5. Let x be a maximal partial Kauffman state supporting a monochromatic loop γ. Then each of the two discs into which γ divides S² contains at least one unpaired vertex of Γ(D).

Proof. Call the two regions R and R′, and assume wlog that γ is black and has length 2n, meaning that there are n black regions and n crossings as vertices. Consider the cellular structure on R induced by the restriction of the black graph G_b to R. Since R is a disc, its Euler characteristic gives b• + w• − c• = 1, where b•, c• and w• are the number of black vertices, crossings and white vertices, respectively, in the interior of R. Then 1 + c• = b• + w•, so there are strictly more interior regions than interior crossings, and hence at least one unpaired white or black vertex in the interior of R; the same argument applies to R′. Since we are assuming that x is maximal, there are exactly two unpaired vertices in total, and hence at most one unpaired vertex in each of R and R′.
We can now conclude the proof of Proposition 4.4 by noting that, by assumption, R contains two unpaired vertices, while Lemma 4.5 provides at least one more unpaired vertex in R′; this contradicts the fact that a maximal pKs leaves exactly two regions unpaired. We conclude this section with a result relating discrete Morse functions to spanning forests, which should be thought of as an analogue of [7, Prop 3.1] and [24, Thm 1.17], and as a generalisation of the already mentioned (see also [18] and [16]) bijection between Kauffman states and orthogonal pairs of rooted spanning trees in G_b and G_w.

Theorem 4.6. There is a bijection between discrete Morse functions on Ξ(D) induced by admissible partial Kauffman states and rooted orthogonal spanning forests in G_b and G_w. In particular, when the admissible partial Kauffman states are maximal, this reduces to a bijection between perfect discrete Morse functions and rooted orthogonal spanning trees.
The latter part of the statement is just a part of Kauffman's Clock Theorem for not-necessarily adjacent forbidden regions. Before we can prove the theorem, we need to introduce a version of the trees induced by a Kauffman state, extended to admissible partial states. Given a pKs x, let H^b_x ⊆ G_b denote the spanning subgraph of G_b containing the edges whose corresponding crossing is matched by x with a black region, and define H^w_x ⊆ G_w analogously using the white regions.

Proof of Theorem 4.6. Suppose, for a contradiction, that there is an edge e in H^b_x intersecting an edge e* in H^w_x. But e and e* intersect at a unique crossing of D, and for both edges to be included in their respective graphs, the crossing must be matched with one black and one white region, contradicting the injectivity condition of a partial Kauffman state. Hence, H^b_x and H^w_x are orthogonal. In this setting, critical points of x are given by unmatched vertices and crossings; each connected component (which is necessarily a tree, possibly consisting of a single vertex) contains exactly one unmatched region. The root of each connected component in the forests is the unique unmatched vertex in each component.
Conversely, any such pair of spanning orthogonal forests produces a unique equivalence class of dMfs by applying the process in reverse; there are no induced directed cycles in Γ(D) as the matching is induced by forests, hence it gives an equivalence class of dMfs. The roots are uniquely determined in a similar way to the description in Figure 7.
When we restrict to maximum pKs, there is exactly one unmatched vertex of each colour, corresponding to exactly one root for each forest, hence the forests contain exactly one connected component each.
We conclude this section with the following result, relating Jordan resolutions to dMfs.

Corollary 4.8. A perfect admissible matching x ∈ X(D) induces a discrete Morse function if and only if its Jordan resolution J(x) is connected.

Proof. The result follows easily from the characterisation of matchings induced by dMfs just provided, together with the fact that monochromatic loops always disconnect J(x).
5. Counting discrete Morse functions
Theorem 4.6 provides us with enough structure to count the number of discrete Morse functions on Ξ(D), for a given knot diagram D. Figuring out the number of (perfect) dMfs for a given class of simplicial complexes is in general a challenging problem, and the precise number is known, for example, for the complete graph K n , but unknown for the n-simplex for n > 3 (see respectively Sections 3 and 5 of [6]).
The Laplacian L(G) of a graph G on n vertices is the n × n matrix L(G) = D(G) − A(G), where D(G) is the diagonal matrix with the vertex degrees on the diagonal and A(G) is the adjacency matrix of G. It is well known that for a connected graph the Laplacian has exactly one 0 eigenvalue, and that the non-zero eigenvalues λ_2 ≤ · · · ≤ λ_n are strictly positive. Denote the characteristic polynomial of L(G) by p_G(t) = t^n + c_1 t^{n−1} + · · · + c_{n−1} t + c_n.
Since we are dealing only with connected graphs, we can assume that c_n = 0. To stress the dependence of the coefficients on the graph, we will sometimes write c_i(G) instead of just c_i. The coefficients count rooted spanning forests:

c_e = (−1)^e f_e(G),    (1)

where f_e(G) denotes the number of rooted spanning forests of G with exactly e edges. In other words, the e-th coefficient of p_G is (up to alternating sign) the number of rooted forests in G with e edges. In what follows, |det(K)| denotes the knot determinant, which is defined as the evaluation at −1 of the Alexander polynomial of K [21].
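As a quick sanity check of the coefficient/forest relation (1), the following snippet, in plain Python with SymPy rather than the Sage setup used for the paper's computations, compares the Laplacian characteristic polynomial of a small graph (here the 4-cycle, chosen only as an example) with a brute-force count of its rooted spanning forests.

```python
# Compare the characteristic polynomial coefficients of the graph Laplacian
# with a brute-force count of rooted spanning forests, sorted by edge count.
from itertools import combinations
import sympy as sp

def laplacian(n, edges):
    """Laplacian L = D - A of a graph on vertices 0..n-1."""
    L = sp.zeros(n, n)
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

def rooted_forests_by_size(n, edges):
    """counts[e] = number of rooted spanning forests with exactly e edges."""
    counts = {}
    for e in range(len(edges) + 1):
        total = 0
        for subset in combinations(edges, e):
            parent = list(range(n))
            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a
            acyclic = True
            for u, v in subset:
                ru, rv = find(u), find(v)
                if ru == rv:              # adding this edge would close a cycle
                    acyclic = False
                    break
                parent[ru] = rv
            if not acyclic:
                continue
            sizes = {}
            for v in range(n):
                sizes[find(v)] = sizes.get(find(v), 0) + 1
            rootings = 1                  # rootings = product of component sizes
            for s in sizes.values():
                rootings *= s
            total += rootings
        counts[e] = total
    return counts

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle C_4
t = sp.symbols('t')
p = laplacian(n, edges).charpoly(t)
coeffs = p.all_coeffs()                           # [1, c_1, ..., c_n]
forests = rooted_forests_by_size(n, edges)
for e in range(1, n + 1):
    print(e, coeffs[e], forests.get(e, 0))        # expect c_e == (-1)^e * f_e
print("|p(-1)| =", abs(p.as_expr().subs(t, -1)),
      " total rooted forests =", sum(forests.values()))
```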
Given a plane graph G with edge set E(G) = {e_1, . . . , e_m}, its symbolic Laplacian is the Laplacian in which each edge is weighted by a formal variable, also written e_i (so the off-diagonal (u, v) entry is minus the sum of the variables of the edges joining u and v, and the diagonal entries make each row sum to zero); we write p^symb_G(t, E(G)) for its characteristic polynomial. For the black graph G_b, whose edges we denote e_1, . . . , e_m, the edges of the dual graph G_b* are denoted e_1*, . . . , e_m*, with e_i* dual to e_i.

Proposition 5.2. The number of dMfs on Ξ(D) is obtained by computing the product

p^symb_{G_b}(−1, E(G_b)) · p^symb_{G_b*}(−1, E(G_b*))    (2)

in the quotient of Z[E(G_b) ∪ E(G_b*)] by the relations e_i · e_i* = 0, and evaluating all variables e_i and e_i* at 1, for i = 1, . . . , m, for a minimal representative.

Note that the evaluation of the polynomial in Equation (2) is just the characteristic polynomial of the symbolic Laplacian (up to a multiplication by t, which only affects the result up to a factor of −1) for the disjoint union of the graph G_b and its dual; here by disjoint union we mean that the two are to be thought of as being disjointly embedded in the plane. An alternative to evaluating in this quotient ring is to consider the product p^symb_{G_b} · p^symb_{G_b*} in the same m variables, discard all non-square-free monomials, and then set all the variables equal to 1.
Proof of Proposition 5.2. Let us start by noting that the coefficients of the characteristic polynomial of the Laplacian of a graph alternate in sign, by Equation (1), so evaluating at t = −1 just gives the sum of the absolute values of the coefficients; moreover, by Theorem 4.6 the number of dMfs on Ξ(D) coincides with the number of rooted orthogonal spanning forests in G_b and its dual. Using the symbolic Laplacian we see that for a plane graph G, the evaluation p^symb_G(−1, E(G)) ∈ Z[E(G)] is a sum of monomials, each of which corresponds to a rooted spanning forest in G, and whose coefficient is the number of possible rootings. In particular, p^symb_G(−1, 1, . . . , 1) is the number of rooted spanning forests in G. Now, the product in Equation (2) gives a sum of monomials in the variables E(G_b) and E(G_b*). The condition e_i · e_i* = 0 is equivalent to the orthogonality of the forests.
Example 5.3. Using Propositions 5.1 and 5.2, we can count the total number of dMfs for a simple infinite family of knot projections, obtained from the minimal diagrams D 2n+1 for the alternating torus knots T 2,2n+1 , as in Figure 9.
We can use the symbolic Laplacian as in Proposition 5.2 to generalise from perfect to general discrete Morse functions. Recall from Figure 9 that G_w consists of two vertices joined by the 2n+1 edges e_1, . . . , e_{2n+1}, while G_b is the cycle of length 2n+1 on the dual edges e_1*, . . . , e_{2n+1}*; the symbolic Laplacian matrices for the white and black graphs are readily written down from this description, and the first symbolic characteristic polynomial is easily determined to be p^symb_{G_w}(t) = t² − 2t(e_1 + · · · + e_{2n+1}). Recall that all the products in Equation (3) are computed in the quotient polynomial ring of Equation (2). Using Theorems 1 and 2 from [8] (which provide expressions for the number of rooted spanning forests in path and cycle graphs in terms of Fibonacci numbers) we can see that p^symb_{G_b}(−1, 1, . . . , 1) = Φ_{4n+1} + Φ_{4n+3} − 2 and p^symb_{G_b}(−1, 1, . . . , 1, 0, 1, . . . , 1) = Φ_{4n+2}, where Φ_i is the i-th Fibonacci number. The second expression gives us the first term in Equation (3). To evaluate the second term in Equation (3), first recall that we are counting orthogonal forests in G_b and G_w. Hence, for each e_i in the sum e_1 + · · · + e_{2n+1}, we can consider the path graph obtained from G_b by deleting the edge e_i*. Thus, we can use the second expression above, which counts rooted forests in path graphs. That is, the latter evaluation coincides with the evaluation at (1, . . . , 1) of p^symb_{G_b}(−1, e_1*, . . . , e_{2n+1}*) · e_i in the quotient ring of Equation (2), for some edge e_i ∈ E(G_w) (as the polynomial is symmetric, we get the same result for any edge e_i). Putting it all together, we obtain a closed formula for the number of dMfs on Ξ(D_{2n+1}) in terms of Fibonacci numbers.
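The Fibonacci expression quoted above for the black graph can be checked numerically for small n; the snippet below (again plain SymPy, not the authors' Sage code) compares det(I + L(C_{2n+1})), which equals |p_{C_{2n+1}}(−1)| and hence the number of rooted spanning forests of the cycle, with Φ_{4n+1} + Φ_{4n+3} − 2.

```python
# Numerical check of the Fibonacci count of rooted spanning forests of the
# cycle graph C_{2n+1}, i.e. the black graph of the torus-knot diagram D_{2n+1}.
import sympy as sp

def cycle_laplacian(m):
    L = sp.zeros(m, m)
    for i in range(m):
        j = (i + 1) % m
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

def rooted_forest_count(L):
    # |p_L(-1)| = det(I + L) = number of rooted spanning forests.
    m = L.shape[0]
    return (sp.eye(m) + L).det()

for n in range(1, 5):
    m = 2 * n + 1
    lhs = rooted_forest_count(cycle_laplacian(m))
    rhs = sp.fibonacci(4 * n + 1) + sp.fibonacci(4 * n + 3) - 2
    print(n, lhs, rhs, lhs == rhs)
```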
6. The Click-Clock theorem
In this section we generalise Kauffman's Clock Theorem to perfect admissible matchings, by introducing two new moves between partial Kauffman states. We also characterise the image of the KPW correspondence in terms of dMfs in Proposition 6.3.
From this point onwards we need an extra assumption on the projection of the knot diagrams considered, namely that they are reduced. This just means that the situation in Figure 15 is not allowed; in knot theory, crossings like those shown in Figure 15 are referred to as nugatory crossings. This implies at once that both the black and white graphs are 2-connected, meaning that they cannot be disconnected by removing a single vertex.

Definition 6.1. Consider two partial Kauffman states x, x′ ∈ X(D). A click loop move from x to x′ consists of altering x on a single monochromatic loop on which x and x′ disagree, so that the modified x agrees with x′ on the loop. We say that x and x′ are click loop equivalent if they both support the same monochromatic loops, and they coincide everywhere except on at least one of these loops (where they induce opposite orientations).
So, two click loop equivalent pKs are related by a finite number of click loop moves.

Definition 6.2. Consider two maximal partial Kauffman states x ∈ X(D, v_w, v_b) and x′ ∈ X(D, v_w, v_b′); we say that x and x′ differ by a click path move ρ ⊂ E(G_b) if x and x′ induce the same unrooted tree on G_b, ρ is the unique black path in the branch of the tree determined by connecting the root v_b to v_b′, and the two partial states coincide everywhere except on ρ (see Figure 16). The same can be done if the two critical points differ only in the white graph.
The result below characterises the image of the KPW map as the space of perfect acyclic matchings. In other words, the injection guaranteed by KPW's correspondence becomes a bijection if we restrict the codomain to perfect dMfs.

Proposition 6.3. The image of the KPW correspondence is exactly the set of perfect acyclic matchings on Γ(D).

Proof. First of all, let us note that a click path move does not change the maximality of the matching. It then suffices to prove the existence of a bijection between X(D, v_w, v_b) and X(D, a), where a is an arc in D separating two adjacent regions v_w′ and v_b′. Fix x ∈ X(D, v_w, v_b), and denote by T_x the black spanning tree it supports. Call ρ the unique path in T_x connecting the root v_b to the black region v_b′ adjacent to a. A click path move along ρ transforms x into a matching x′ ∈ X(D, a) (again supporting T_x as a spanning tree). We can then conclude by observing that a click path move does not introduce or remove any monochromatic loops supported by either x or x′ (see Figure 17); the only way this would be possible is if a supported loop could be adjacent to a path along which we can apply a click path move. However, we cannot obtain such a configuration without introducing an edge in the matching incident to two crossings. Hence, these moves preserve the acyclicity of maximal pKs by Theorem 4.3. The general statement then follows easily by noting that a single click path move along a black/white path suffices to change the two unmatched vertices of any perfect dMf on Ξ(D).

Corollary 6.4. Perfect discrete Morse functions on Ξ(D) are in bijective correspondence with triples (T, v_b, v_w), where T ∈ Tree(G_b), v_b ∈ V(G_b) and v_w ∈ V(G_w).

Proof. Each perfect matching on Γ(D) which is in the image of the KPW correspondence is uniquely determined by the triple (T, v_b, v_w), with T ∈ Tree(G_b). By Theorem 4.6, each such triple is in bijective correspondence with a perfect discrete Morse function on Ξ(D).
Neither click loop moves nor click path moves change the Jordan trails; clock moves and click loop moves do not change the position of the critical points. In fact, when perfect discrete Morse functions differ only by click path moves, we have the following result (cf. also Lemma 6.10).

Lemma 6.5. Two perfect dMfs x and x′ on Ξ(D) have the same Jordan trail if and only if they induce the same spanning tree on G_b.

Proof. Let x and x′ be two perfect dMfs such that J(x) = J(x′). Then at each crossing we must have one of the two possibilities shown in Figure 18, where the dots are, without loss of generality, in the black regions; in both cases we immediately see that the black edge between the two regions in the figure must belong to the subgraph of the black graph supported by the dMfs; by Theorem 4.3 this subgraph cannot contain cycles, thus it is a tree.

Conversely, if an edge of the black graph belongs to the common spanning tree, then at the corresponding crossing we are in the situation of Figure 18, and hence the two dMfs have the same Jordan trail.

Lemma 6.6. A single clock move changes the number of Jordan cycles |J(x)| by 0 or ±2.

Proof. This follows easily by trying out all possible cases, shown in Figure 19, which shows (up to symmetries) how clock moves change the Jordan resolution. The number in the middle indicates how |J(x)| changes under the clock move, while the dashed line indicates how the local picture eventually joins up. We divide the moves into three possible types depending on their action on the resolution. Type I is the only one needed in the Clock theorem, Type II can only occur for matchings that are not dMfs (as necessarily |J(x)| > 1), and Type III moves are the only ones that can merge/split Jordan cycles.

Proposition 6.7. If x ∈ X(D) is a perfect and admissible matching, then each connected component of the black and white orthogonal subgraphs H^b_x ⊂ G_b and H^w_x ⊂ G_w it induces is either a tree or has the homotopy type of a circle. Furthermore, there is exactly one tree of each colour.
Proof. Call H such a connected component, and assume without loss of generality that H is a subgraph of the black graph. Then H is homotopy equivalent to a wedge of m circles, for some m ≥ 0. Now, the Euler characteristic tells us that the difference between the number of black vertices and crossings is 1 − m.
Hence we can distinguish three cases, shown in Figure 20: if m = 0, then H is a tree, and there is always a perfect matching on (the first barycentric subdivision of) H leaving out exactly one black vertex. If m = 1, then H is homotopically a circle, so it is a simple cycle with some trees "attached" to it. In this case the number of black vertices coincides with the number of crossings, and there is (using the assumption that the matching is perfect) a perfect matching on H (in fact it is possible to prove that there are exactly two) which necessarily contains a monochromatic loop. On the other hand, if m > 1 there are more crossings than black dots, so there is no hope for a portion of a perfect admissible matching to be supported by H. As trees are the only components that can support an unmatched vertex, the admissibility of the matching implies that there is exactly one of each colour.
The following result characterises admissibility in terms of the parity of the Jordan resolution and is used in the proof of Theorem 6.9; as a consequence of the proof, we will also see that the subgraphs induced by admissible and perfect matchings are concentric.

Proposition 6.8. A perfect matching x ∈ X(D) is admissible if and only if |J(x)| is odd.

Proof. It follows from the proof of Lemma 4.5 that each monochromatic loop supported by x has exactly one unpaired vertex in both discs into which it divides S². Furthermore, by Proposition 6.7 each connected component of the black and white subgraphs H^b_x and H^w_x can contain at most one monochromatic loop. Then, by Lemma 4.5, each connected component of J(x) must have an unmatched vertex on both sides; here we are using the fact that monochromatic loops create connected components in the Jordan resolution. Since x is perfect, there are only two unmatched vertices in Γ(D); this forces all Jordan cycles in J(x) to be concentric on the 2-sphere (see Figure 21).

Connect the two unpaired vertices with a path in S² that avoids the crossings of D. Each time the path crosses an arc in the projection it changes colour. Then, note that the parity of the number of intersections coincides with the parity of the number of circles in any possible Jordan resolution having those two vertices as the unmatched ones.

Figure 21. A schematic view of a generic admissible matching up to isotopy on S²; the white and black graphs here are the subgraphs of G_w and G_b, respectively, induced by a maximal pKs. In the central disc there is the isolated black rooted tree (roots are circled), while the white one is on the outside. The other components are the pseudotrees; external trees are dotted while internal ones are dashed. The only unmatched vertices are the roots of the two trees. The red circles represent the 3 Jordan cycles. The grey shaded region contains the innermost black cycle, its internal trees and the unique isolated black tree.
As a consequence of this last result, we see that for a given perfect admissible matching x, |J(x)| − 1 coincides with the number of monochromatic loops in x.
It follows from the last two results that the subgraph H b x of the black graph induced by a perfect admissible matching is a special case of a spanning pseudoforest. By this we mean that each connected component is either a tree or has the homotopy type of a circle (it is a pseudotree). Up to isotopy of S 2 , the unique black tree can be thought of as being in the centre of all the concentric pseudotrees. We call this tree isolated ; all other components of the black pseudoforest can be decomposed into three parts: a cycle (i.e. a single simple closed loop in G b induced by a monochromatic loop), the internal trees and the external trees, each of which is attached to the cycle at exactly one vertex. We can distinguish between internal and external trees, according to whether they are respectively in the disc bounded by the cycle that contains the isolated tree of the same colour or not. The schematic picture is summarised in Figure 21.
We recall here the statement of the simultaneous generalisation of the Clock Theorem by Kauffman [16,Thm. 2.5] to perfect matchings and of the main result in [18,Thm. 1]. In particular, Theorem 6.9 gives a way to organise all possible discrete Morse functions on a given cellular structure on S 2 in the image of Ξ, where each pair of discrete Morse functions is related by a finite sequence of moves. Theorem 6.9 (Click-Clock). If the diagram D is reduced, any two perfect admissible matchings on Γ(D) are related by a finite sequence of click path, click loop and clock moves.
As an easy consequence of the proof, two perfect dMfs can be transformed into one another by clock and click path moves (or only clock moves of Type I if they share the same critical points; this last part is Kauffman's original Clock Theorem); furthermore, if these two dMfs share the same Jordan trail, then they differ by at most two click path moves. Theorem 6.9 will follow from the next two lemmas; the proof of the first one is a straightforward generalisation of Lemma 6.5 to perfect admissible matchings. The proof of the second one is instead rather involved, and will occupy most of the remaining section.

Lemma 6.10. Two perfect admissible matchings x, x′ ∈ X(D) satisfy J(x) = J(x′) if and only if they are related by a finite sequence of click loop and click path moves.

Proof. The "if" direction is trivial, as both kinds of moves preserve the Jordan resolution.
For the other implication, let x and x′ denote two perfect admissible matchings satisfying J(x) = J(x′). Observe that J(x) = J(x′) if and only if all of the crossings in the corresponding diagrams resolve in the same way. This gives two possibilities, illustrated in Figure 18; the two matchings can either coincide at a crossing or be on opposite sides (on the two incident regions with the same colour).

If x = x′ then the statement follows immediately. If not, then there must be at least one crossing c on which the second possibility occurs. Without loss of generality, suppose we start at such a crossing, where the components of the two states are in the black regions. Draw an edge in G_b if the crossing through which the edge passes is as in the right of Figure 18. Starting from c, draw as many such edges as possible in both directions. If this path forms a loop then the two states are related by a click loop move. If not, then there is a (unique) path between the two unmatched black regions, one from x and one from x′, in which case the two states are related by a click path move. To see that this is actually a path, observe that by construction we never have a vertex of degree three or more, because this would imply the existence of at least two components in a single region from the same matching, which violates the definition of a partial Kauffman state. Since the matchings are admissible, they each have exactly one unmatched black region, hence this path must be between the two unmatched regions in the different matchings with the same colour.

Lemma 6.11. Up to clock moves and click loop and path moves, the Jordan trail of a perfect admissible matching can be made connected.
Since the proof of this last lemma is going to take a bit of work, we first show that, together with Lemma 6.10, it does in fact imply the Click-Clock theorem: Proof of Theorem 6.9. It suffices to show that for a given perfect and admissible matching x on Γ(D), we can convert it to a perfect dMf of Γ(D) with prescribed adjacent critical points; the thesis would then follow by applying the "classical" Clock Theorem.
Using Lemma 6.11 we can apply a finite sequence of clock and click loop moves on x that make its Jordan trail connected. Call this new matching x′; by Corollary 4.8 we know that x′ is in fact a dMf on Γ(D), and up to click path moves we can transform it into yet another dMf with prescribed and adjacent critical points, by Proposition 6.3.
Before we start the proof of Lemma 6.11, we need to introduce several new objects and study some of their properties. In particular we need the following leaf spin operation, first considered in [13]. A leaf is an edge with at least one incident vertex of degree one.

Definition 6.12. Let G be a plane graph, and H ⊂ G a subgraph. If ℓ is a leaf in H which is not a leaf in G, define the clockwise (resp. counterclockwise) spin of H along ℓ as the subgraph H′ of G obtained by removing ℓ from H, and adding the first edge encountered by spinning ℓ around its isolated endpoint in a clockwise (resp. counterclockwise) fashion (see Figure 22).

Figure 22. On the left, a plane graph with a spanning tree (highlighted in red); on the right, the effect of two leaf spins on the tree, one clockwise and one counterclockwise.
If H is a spanning forest (or pseudoforest) then leaf spins preserve this structure as pivoting on a leaf does not change the fact that it is a leaf. Hence, the graph H obtained from H by a leaf spin is also a spanning forest (or pseudoforest). Furthermore, a leaf spin always leaves the cycles of a pseudoforest unchanged. We need to take a bit more care when dealing with rooted pseudoforests, as this will be the case we need. If a component of the pseudoforest contains the root as the terminal vertex of the leaf we want to spin, we first need to move the root along the unique edge to which it is incident. In other words we never spin a leaf around the root; this way the root does not switch between connected components. As a special case, if the rooted connected component consists of only one edge, spinning it will leave the root as an isolated vertex.
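The leaf-spin operation is easy to model combinatorially if one assumes the plane embedding is given as a rotation system (the cyclic order of edges around each vertex). The sketch below, with a toy graph and helper names of our own choosing, is only meant to illustrate Definition 6.12; it is not taken from [13] or from the paper.

```python
# Leaf spin on a plane graph encoded by a rotation system: remove the leaf and
# add the next edge of G met while rotating around the leaf's isolated endpoint.
def leaf_spin(H, leaf, pivot, rotation, clockwise=True):
    """H: set of edges of the subgraph; leaf: an edge of H whose endpoint
    `pivot` has degree one in H; rotation[v]: clockwise list of G-edges at v."""
    edges = rotation[pivot]
    step = 1 if clockwise else -1
    i = edges.index(leaf)
    for k in range(1, len(edges)):
        candidate = edges[(i + step * k) % len(edges)]
        if candidate not in H:
            return (H - {leaf}) | {candidate}
    return H                                  # nothing to spin onto

# Toy example: a square with one diagonal; edges are frozensets of endpoints.
e = lambda u, v: frozenset((u, v))
rotation = {
    0: [e(0, 1), e(0, 2), e(3, 0)],
    1: [e(0, 1), e(1, 2)],
    2: [e(1, 2), e(0, 2), e(2, 3)],
    3: [e(2, 3), e(3, 0)],
}
T = {e(0, 1), e(1, 2), e(2, 3)}               # a spanning tree; e(0,1) is a leaf at 0
print(leaf_spin(T, e(0, 1), 0, rotation, clockwise=True))
```

In the toy example the spin exchanges the leaf 0–1 for the diagonal 0–2, and the result is again a spanning tree, in line with the remark above that leaf spins preserve the forest/pseudoforest structure.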
We are about to see how leaf spinning will yield the connection between concentric pseudoforests induced by admissible perfect matchings and certain clock moves on them, ultimately allowing us to conclude the proof of Lemma 6.11 and thus Theorem 6.9. More precisely, consider a pair of spanning pseudoforests induced by admissible perfect matchings, coinciding everywhere outside the first black cycle. We will prove that these are related by leaf spins (and hence by clock and click path moves). Lemma 6.13. If the spanning concentric pseudoforest H b x induced by the perfect admissible matching x ∈ X(D) contains a leaf, then all of the pseudoforests obtained by spinning this leaf are induced by matchings that are clock and click path equivalent to x.
Proof. Call R ∈ V (G b ) the region corresponding to the terminal black vertex. Let us start by noting that a leaf in H b x is necessarily "surrounded" by white edges, as shown in Figure 23. Consider the two cases shown in Figure 23: in the left one we can spin the terminal black leaf in a clockwise fashion, and this corresponds to a sequence of clock moves. After one spin, this local configuration is just the one we started with, only tilted by the angle of the spin. Hence, we can reach all other possible edges dual to the white ones through clock moves. In the second case we can start clocking/spinning the leaf in both directions, and also reach all other edges in G b sharing the terminal vertex with the leaf.
It remains to show that these are all of the possible configurations containing leaves; a priori there are two possible things that could go wrong. Firstly, there could be no component of the pKs in nearby white regions that allows for a clock move in either direction (and hence the spin could not correspond to a clock move); this can be excluded by looking at the left part of Figure 24.
Lastly, applying multiple leaf spins might result in a configuration where more than one edge of R does not contain any component of the pKs on the crossings it has in common with R. This configuration contradicts the maximality of the pKs and the fact that there is a leaf, as illustrated in the right part of Figure 24.
Note that if any of these leaf spin moves involve the root, then we need to apply a suitable click path move to shift it away.
Remark 6.1. It follows immediately that spinning a leaf always corresponds to clock moves of Types I and II (see Figure 19); in particular, a leaf spin can never merge different components of J(x). Furthermore, a leaf spin cannot alter the cycle in a pseudoforest, but might change the cycle of the dual pseudoforest.
To complete the proof of Lemma 6.11 by proving the existence of a sequence of moves decreasing the number of Jordan cycles, we must take a detour to introduce some technical tools. Definition 6.14. An almost-tree is the spanning forest obtained by removing exactly one edge from a spanning tree (see Figure 25). If the edge removed is a leaf, then its isolated vertex will be considered as a connected component. Proposition 6.15. If G is a 2-connected plane graph, denote by G aT the graph whose vertices are almost-trees in G and edges are leaf spins. Then G aT is connected.
Proof. This is an easy consequence of [13, Thm. 1]. The authors of [13] prove that the related graph G_T, whose vertices are spanning trees and edges are leaf spins, is connected. We can conclude by noting that in G_aT there are more possible moves that can be performed, as any almost-tree can arise after the removal of an edge in several trees in G.

Figure 23. In the top-right configuration we can spin and apply clock moves in both directions; note that we cannot clock clockwise (resp. counterclockwise) after we reach the two edges at the sides of the unique white region (the bold grey arc in the top-right) not matched with a crossing adjacent to the region R. In the bottom-right configuration we can see the result of applying a clockwise leaf spin to the top-right configuration.
Recall that Theorem 4.6 and Proposition 6.7 imply that for an admissible perfect matching x ∈ X(D) we get two induced pseudoforest subgraphs H^w_x ⊂ G_w and H^b_x ⊂ G_b, composed of concentric pseudotrees alternating in colour.

Consider the subgraph G_b′ of G_b composed of the edges in the innermost black cycle (that is, the cycle of the unique black pseudotree that surrounds the unique black tree) and all of the vertices and edges within.

In other words, G_b′ is the subgraph of G_b bounded by the innermost black cycle, in the disc containing the internal black trees. Let H^b_x′ denote the intersection of G_b′ with the pseudoforest H^b_x. Note that H^b_x′ is spanning in G_b′, and has exactly two connected components: one is a pseudotree (containing only internal trees), and the other is the isolated black tree.
A finite 2-connected graph G embedded in the plane divides R² into regions; we call the edges of G in the boundary of the external infinite region its boundary cycle. With the assumptions above, this is in fact a simple cycle in G whose removal from the plane splits it into two discs.

Proof of Lemma 6.11. Let us denote by x a perfect admissible matching with |J(x)| = 2k + 1, since the number of components in the Jordan resolution is necessarily odd by Proposition 6.8. If k = 0 there is only one connected component, and we are done.
We want to show the existence of a sequence of clock moves and click path/loop moves that take x to a new perfect admissible matching x with |J(x )| = 1. We do this by induction on k. The strategy is to reduce to a configuration akin to the left side of the third case in Figure 19, so that we can apply one final clock move in order to reduce the number of Jordan cycles by 2. The result then follows by the inductive step.
This can be achieved by suitably changing the subgraph G_b′ of the black graph, defined above.
The idea is the following: if we can make the two innermost cycles (i.e. the black cycle and the white cycle contained in H w x which separates the black isolated tree from the black cycle) as close as possible (see the left of Figure 26), then -up to possibly a click loop move on either cycle-we can apply a clock move that merges the innermost Jordan cycles. Applying a click loop move ensures that there is at least one square in Γ(D) with opposite edges of a square being matched, which makes applying a clock move possible.
We start by modifying the isolated black tree, by adding to it all the edges belonging to the internal trees of the first pseudoforest H^b_x. We can do this using clock moves and possibly click path moves, thanks to Proposition 6.15. That is, we apply a sequence of leaf spins to move the internal trees of H^b_x to the isolated black tree. Leaf spins correspond to sequences of clock moves, and changing the roots corresponds to click path moves.
Before we can conclude, we need to eliminate some local configurations that would prevent our strategy from succeeding; in particular, after spinning all internal black trees onto the unique isolated black tree as described in the previous paragraph, there remain configurations that prevent the application of a single clock move to reduce the number of Jordan cycles by 2, as illustrated in Figure 19. After applying the procedure described in the previous paragraph, all external white trees are in fact paths. This is because we "spun away" all black internal trees in H b x . A configuration that would prevent applying the desired final Type III clock move is the presence of an external white tree surrounded by the unique black cycle, as illustrated in the centre of Figure 26. The problem here is that these external linear white trees separate the innermost white cycle from the innermost black cycle, preventing reaching the desired configuration (see the left part of Figure 26) where we can apply the final clock move. In particular, this occurs if there exists an edge in G b \ H b x with both vertices on the innermost black cycle (see the grey dashed edge in the central part of Figure 26).
We can overcome this problem by applying a sequence of leaf spins on these external linear white trees, which can be seen in the central and right parts of Figure 26. Note that these leaf spins have the effect of "shrinking" the innermost black cycle.

Figure 26. On the left: two monochromatic loops (dotted) sharing two opposite edges of a square (up to a click loop move on either one) lead to a component-reducing clock move of Type III. This is the desired end-state after applying all of the required leaf spins. In the centre: the local configuration that prevents reaching the desired configuration on the left. In particular, the presence of the dashed grey edge causes the problem. We can solve this problem by applying leaf spins in the white graph; the leaves that have to be spun away in the white graph are dotted. The right part shows the effect on the black cycle of spinning away a white leaf.
Finally, the first black and white cycles are adjacent (as in the left part of Figure 26), and hence we can perform a single clock move to reduce k by one, thus completing the inductive step.
Remark 6.2. It follows from the proof of Lemma 6.11 that if the perfect admissible matching x has |J(x)| = 2k + 1, then we can transform it into a dMf by using at most k clock moves of Type III, k click loop moves, and an unspecified number of clock moves of Type I/II. It is unclear if we can always avoid the use of click path moves.
7. Complexes
In this section we introduce the matching and Morse complexes, the objects encapsulating all matchings and acyclic matchings, respectively, associated to a given diagram (including both admissible and non-admissible matchings). The matching complex of Γ(D) (resp. Γ(D, a)) is the simplicial complex whose vertices are the edges of Γ(D) and whose n-simplices are spanned by n + 1 vertices whose corresponding edges form a matching. Another simplicial complex that can be associated to a diagram in this way is the Morse complex of Γ(D), first introduced by Chari and Joswig in [7] for general simplicial complexes: it is the subcomplex M(D) (resp. M(D, a)) whose n-simplices are spanned by n + 1 vertices such that the corresponding edges form an acyclic matching in Γ(D). As an example, if D is the diagram in Figure 4, then M(D) is 1-connected. The connectivity of the matching complex is also related to the existence of non-extendable pKs, that is, pKs that are not perfect, yet are not faces of simplices of maximal dimension. It is not hard to exhibit examples of non-extendable pKs of arbitrarily high codimension, as illustrated by Figure 27.
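The following enumeration illustrates the difference between the two complexes on a toy Tait-like graph (two crossings, each adjacent to two black and two white regions; the graph and all names are our own): simplices of the matching complex are the matchings, and the Morse complex keeps only those with no directed cycle after orienting white → crossing → black and reversing the matched edges, as in Section 4.

```python
# Enumerate the simplices of the matching complex and of the Morse complex for
# a small crossing/region incidence graph.
from itertools import combinations

colour = {"b1": "black", "b2": "black", "w1": "white", "w2": "white"}
edges = [("c1", "b1"), ("c1", "b2"), ("c1", "w1"), ("c1", "w2"),
         ("c2", "b1"), ("c2", "b2"), ("c2", "w1"), ("c2", "w2")]

def is_matching(es):
    used = [v for ed in es for v in ed]
    return len(used) == len(set(used))

def is_acyclic(es):
    matched = set(es)
    succ = {}
    for c, r in edges:
        u, v = ((r, c) if colour[r] == "white" else (c, r))   # white->crossing->black
        if (c, r) in matched:
            u, v = v, u                                        # reverse matched edges
        succ.setdefault(u, set()).add(v)
        succ.setdefault(v, set())
    seen, stack = set(), set()
    def dfs(u):
        stack.add(u); seen.add(u)
        ok = all(n not in stack and (n in seen or dfs(n)) for n in succ[u])
        stack.discard(u)
        return ok
    return all(v in seen or dfs(v) for v in succ)

matchings = [es for k in range(1, len(edges) + 1)
             for es in combinations(edges, k) if is_matching(es)]
acyclic = [es for es in matchings if is_acyclic(es)]
print(len(matchings), "simplices in the matching complex,",
      len(acyclic), "in the Morse complex")
```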
Let X_pure(D) denote the set of maximal pKs on the projection D, and use the same subscript for the sub-simplicial complexes of the matching and Morse complexes generated by the top-dimensional simplices.
The pure matching complex of a graph is the union of the pure parts of the matching complexes of all its spanning trees [2,Thm. 8]. We can prove an analogous result, generalising from graphs to the cellular structures on S 2 obtained from Cohen's construction. Denote by (T, v b ) a spanning tree T ⊆ G b rooted at v b . Recall that a rooted spanning tree in G b along with a root in G w uniquely determines a Kauffman state, and hence a perfect acyclic matching in Γ(D).
Let ∆(T, v_b, v_w) denote the simplex in M(D) spanned by the edges in Γ(D) determined by T, v_b and v_w. Of course, ∆(T*, v_w, v_b) = ∆(T, v_b, v_w), since T uniquely determines T* and vice-versa.

Lemma 7.3. Let D be a diagram with n + 1 crossings. The simplicial complex M_pure(D) of perfect discrete Morse functions on Ξ(D) is generated by the set of maximal-dimensional simplices {∆(T, v_b, v_w) : T a spanning tree of G_b, v_b ∈ V(G_b), v_w ∈ V(G_w)}, where each simplex is of dimension n.
Proof. Since each triple (T, v_b, v_w) uniquely identifies a perfect dMf on Ξ(D) (see also e.g. Cor. 6.4), we get a bijection between the pure part of the Morse complex and the set of simplices above. Theorem 6.9 gives us insight into the Morse and matching complexes; for example, two perfect dMfs with the same critical cells can differ only by clock moves. At best, they can differ by exactly one clock move, in which case their corresponding simplices in M(D) are attached along an (n − 2)-dimensional face, where n is the dimension of the simplices. If two perfect dMfs differ by one click path move along a click path of length 2 (see Figure 28), then the corresponding simplices in M(D) are attached along an (n − 1)-dimensional face. This is unlike the case where we are studying the Morse and matching complexes of graphs (i.e. 1-dimensional simplicial complexes), where maximal-dimensional simplices are connected along unions of (n − 2)-dimensional faces [2].
Whilst it remains a hard problem to fully understand these complexes (even in just the pure case), the three moves between matchings do reveal some structure. We do point out, however, that complete knowledge of these moves in the matching or Morse complex is not sufficient to determine their homotopy type(s), indicating that the complexes themselves carry more information than just the set of moves. To see this more concretely, consider the clock graph, which captures all top-dimensional simplices and moves between them for a fixed pair of adjacent critical cells. This graph is not even enough to determine the homotopy type of the pure Morse complex for adjacent regions, despite this being one of the nicest cases dealt with so far, as can be appreciated with the example in Figure 29.
Before providing some sample computations of the homology of the matching and Morse complexes for small diagrams, we make a few remarks; first we want to highlight a striking difference between dMfs obtained through Cohen's construction, general dMfs and "regular" Morse functions; for concreteness let us restrict to the case of a triangulated smooth closed manifold X.

Figure 29. The diagram of Figure 4, with the top-left (red) arc a chosen as the forbidden one. The clock graph shown below in grey is clearly linear, but it is easy to compute that M_pure(D, a) is a 3-dimensional simplicial complex homotopic to a circle.
In this case, the matching complex of (the poset graph of) X can be thought of as a discrete analogue of the space of equivalence classes of gradient vector fields of smooth functions on X, and M(H(X)) as the subset of these gradient fields coming from functions that are Morse [3]. It is well known that smooth Morse functions are dense in the set of smooth functions on X, and that "small" isotopies are enough to transform any smooth function on X into a Morse one. In the discrete setting, this is no longer true; given a matching, it is generally very hard (see e.g. [19], the discussion in [15], and the end of [7, Sec. 2]) to find its minimal "distance" to a perfect dMf.
Of course, given any matching one can always obtain a dMf from it by simply removing one edge from each of its monochromatic loops. However, if we restrict to a perfect and admissible matching x, it is possible to use the Click-Clock theorem to show that the minimal distance between x and a perfect dMf is bounded below by the minimal number of clock and click path/loop moves needed to transform x into a dMf, and the proof of the theorem provides us with an almost explicit algorithm that allows us to perform such a transformation.
Finally, it would be interesting to tie some of the homological properties of M(D) and M(D) to other known graph and knot theoretic invariants and quantities. There is much structure in these simplicial complexes that can be exploited in several ways.
We list here the (reduced) homology of the matching and discrete Morse complexes for all minimal knot diagrams with up to 7 crossings. The computations are performed using a Sage [23] program, available upon request.

Table 1. The homology of the various simplicial complexes associated to minimal knot projections, up to 7 crossings. The notation Z^a(b) denotes a generators of the homology in degree b. The diagrams are generated using SnapPy [17].
Optimal Utility-Energy tradeoff in Delay Constrained Random Access Networks
Rate, energy and delay are three main parameters of interest in ad-hoc networks. In this paper, we discuss the problem of maximizing network utility and minimizing energy consumption while satisfying a given transmission delay constraint for each packet. We formulate this problem in the standard convex optimization form and subsequently discuss the tradeoff between utility, energy and delay in such framework. Also, in order to adapt for the distributed nature of the network, a distributed algorithm where nodes decide on choosing transmission rates and probabilities based on their local information is introduced.
INTRODUCTION
In many wireless ad-hoc networks, due to the lack of a central station, nodes compete for the channel and decide on channel access in a random manner. In random access networks, where channel access is not predetermined and depends on the network traffic, it is possible that two nodes simultaneously decide to send data to the same node, resulting in a collision. Collisions waste energy, increase transmission delay and reduce throughput. The aim of random access protocols is to control collisions in the network in order to achieve the desired network performance.
In an earlier paper [1], we solved the problem of energy-utility optimization in random access networks with no delay constraints. In this paper, the notion of delay is added to the analysis of random access networks using queuing theory. The delay constraint has a non-convex form, and in order to convert it into a convex constraint a complete problem reformulation is proposed. An optimal random access protocol which satisfies the delay constraints of the links is subsequently presented.
The importance of energy efficiency in ad-hoc networks stems from the multi-hop nature of the network. If nodes of an ad-hoc network run out of energy, some routes may become disconnected [2]. Therefore, the available energy of the nodes should be consumed cautiously to transmit as much information as possible. Another criterion for the network performance is the utility, which is a monotonically increasing function of the rate allocated to each link. Network Utility Maximization (NUM) has recently received much attention in the literature [5], [6], [7]. It was first proposed by Kelly [5] in order to optimize end-to-end rates of wired networks.
It is also used in optimizing the transport layer of wireless networks [6], [7]. Also, Nandagopal et al. [8] used a similar approach for proportionally fair channel allocation, and [9] developed the idea of optimizing persistence probabilities in random access wireless networks. Transmission delay is another important parameter for network performance, and a delay limit should be considered in practice for the packets in the network. Such a delay constraint depends on the type of traffic and the required quality of service (QoS) level. Real-time applications such as voice or video conferencing require packets to be transmitted within a specific delay limit. However, data transmission is less sensitive to delay, and a more relaxed delay constraint can be adopted for such transmissions. It should be noted that one of the issues in random access networks is the size of the queues; for example, in Aloha it is possible that the average length of the queues in some nodes goes to infinity. Setting a delay limit for packet transmission is therefore equivalent to setting a limit on the average queue length.
Energy minimization and lifetime maximization for wireless ad-hoc networks have been the focal point of many research activities [10], [11]. However, to the best knowledge of the authors, the work presented in this paper is the first which considers energy minimization along with delay constraints in random access networks. For example, although [6] and [9] have formulated and solved NUM for random access, they have considered neither energy consumption nor delay constraints. An optimal utility-lifetime tradeoff was achieved in [12] for non-random access networks; however, no delay constraints were considered in that approach. Delay minimization for slotted Aloha was considered in [13], where transmission probabilities were optimized to achieve minimum delay and maximum throughput; however, energy minimization and fairness among nodes were not addressed.
The rest of the paper is organized as follows. First, the network model is presented in the next section, and the link delay is analysed in Section III. Section IV formulates the problem by defining the goal functions and the link delay constraint, and presents the distributed algorithm. Section V investigates the tradeoff between energy, utility and delay and contains the numerical results of the distributed algorithm. Finally, we conclude the paper and review its contributions in the last section.
II. NETWORK MODEL
Consider an ad-hoc network which contains N nodes that transmit their packets through their neighbors using the set of links L. Each node selects one of its links and transmits on it with probability p_ij, where i is the transmitter index and j is the receiver index. P_i is the transmission probability of node i, which is equal to the sum of the transmission probabilities of its output links. We assume that nodes transmit during time slots whose duration equals the packet transmission time. A collision happens if two neighbors transmit packets in the same slot. Nodes are supposed to have infinite buffers, thus there is no packet drop. We also assume that the distribution of packet arrivals at each node is Poisson and independent of the other nodes.
The set of neighbors of node i is denoted by N_i, the set of nodes to which i transmits by O_i, and the set of nodes that transmit to i by I_i. We define the connectivity factor as the ratio of the communication range to the network dimension. Thus, as the nodes' powers increase, the connectivity factor and the number of neighbors, |N_i|, increase as well. In this paper, we assume that all nodes have equal power, resulting in symmetric neighborhoods. The case where neighbors use unequal powers was considered in [1]. Although such an assumption can be easily incorporated in the current work, it has not been considered in this paper in order to simplify the formulations.
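As a concrete illustration of the notation, the snippet below builds a small topology (our own, purely for illustration) and computes the neighbor sets N_i and the node transmission probabilities P_i from the per-link probabilities p_ij; here neighborhoods are simply read off from the links, whereas in the model they are determined by the communication range.

```python
# Toy network: per-link transmission probabilities p_ij, outgoing links O_i,
# link-based neighbor sets N_i and node transmission probabilities P_i.
links = {(1, 2): 0.2, (2, 3): 0.3, (3, 2): 0.1, (3, 4): 0.2}   # p_ij per link (i, j)

nodes = {u for link in links for u in link}
out_links = {i: [l for l in links if l[0] == i] for i in nodes}          # O_i as links
neighbors = {i: {j for (a, b) in links for j in (a, b) if i in (a, b) and j != i}
             for i in nodes}                                             # N_i (symmetric)
P = {i: sum(links[l] for l in out_links[i]) for i in nodes}              # P_i = sum_j p_ij

print("N_i:", neighbors)
print("P_i:", P)
```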
III. DELAY ANALYSIS
In order to calculate the average delay in random access networks, we assume packet arrivals at each node to be modeled by a Poisson process. It is also assumed that in case of collisions, the packet is retransmitted until it is successfully received at the other end. Thus, when a packet collides it does not return to the queue, but waits until it is served. The service time of each link depends on the transmission probability of that link and on the collision probability. We can model each link as an M/G/1 queue and use the following Pollaczek-Khinchin formula to estimate the queueing delay [14]:

W = r E[S²] / (2(1 − ρ)),    (1)

where W is the waiting time in the queue, S is the service time with mean S̄, r is the arrival rate, and ρ = r S̄. Thus, the first and second moments of the service time should be computed. In slotted access, the service time is a discrete random variable, and the probability that a packet is successfully transmitted after exactly k time slots is equal to (1 − x)^{k−1} x, where x is the probability of successful transmission in a slot. The mean and second moment of the service time are then given by

S̄ = 1/x,    E[S²] = (2 − x)/x².    (2)

Thus, using (1) and (2), the total link delay (waiting plus service time) can be found as follows:

D = 1/x + r(2 − x) / (2x(x − r)).    (3)
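The delay expression above is easy to evaluate numerically. The sketch below (all names and values are our own) implements the M/G/1 waiting time with the geometric service model and cross-checks the service-time moments with a short Monte Carlo simulation.

```python
# Link delay under the geometric-service M/G/1 model, plus an empirical check
# of the service-time moments.
import random

def link_delay(r, x):
    """Average packet delay (waiting + service, in slots) for arrival rate r
    and per-slot success probability x; requires r < x for stability."""
    s1 = 1.0 / x                      # E[S]
    s2 = (2.0 - x) / x**2             # E[S^2]
    rho = r * s1
    return r * s2 / (2.0 * (1.0 - rho)) + s1

def empirical_service_moments(x, samples=200_000, seed=0):
    rng = random.Random(seed)
    total, total_sq = 0, 0
    for _ in range(samples):
        k = 1
        while rng.random() > x:       # retransmit until a successful slot
            k += 1
        total += k
        total_sq += k * k
    return total / samples, total_sq / samples

print(link_delay(r=0.1, x=0.25))
print(empirical_service_moments(0.25), (1 / 0.25, (2 - 0.25) / 0.25**2))
```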
IV. OPTIMAL MAC WITH DELAY CONSTRAINT
Our goal is to optimize the MAC parameters in order to achieve minimum energy consumption and maximum utility in the network. Solving such a bi-criterion problem is equivalent to finding Pareto optimal points [15]. Pareto optimal points have the property that no other point exists that is better in both energy and utility. Also, an additional delay limit for the links of the network should be considered in this bi-criterion problem. In problems where the goal and constraint functions are convex, it is common to use scalarization in order to find the Pareto optimal points. However, in this case the delay constraint in its original form is non-convex, and the first step in the proposed algorithm converts it into a convex function. Subsequently, scalarization can be used in order to form a convex problem and obtain the Pareto optimal points.
A. Convex Formulation
The utility function, U, is defined as the summation of the link utilities. In order to achieve proportional fairness between links we use the same approach as [8] and [9], and define the utility as a logarithmic function of the link rates:

U = Σ_{(i,j)∈L} log(r_ij).    (4)

Therefore the utility, which is the first goal function, is a concave function. Another goal function that should be formulated is the energy consumed in the network. The energy required to transmit a packet by node i is equal to e_i, so the average energy consumption of node i in one time slot is given by E_i = e_i × P_i. In this paper we assume equal transmission power for the nodes, and thus the total energy consumption of the network is given by the following linear function:

E = Σ_i e_i P_i = Σ_i e_i Σ_{j∈O_i} p_ij.    (5)

The network parameters targeted by the optimization problem are the transmission probabilities and rates. The energy and the negative of the utility are goal functions that should be minimized and were shown to be convex functions of the transmission rates and probabilities. The next step is to show that the constraints are also convex functions of these parameters. The link delay was found in Section III, and the corresponding constraint can be reformulated as follows:

1/x_ij + r_ij(2 − x_ij) / (2 x_ij (x_ij − r_ij)) ≤ D_c,    (6)

where x_ij is equal to the throughput of packets on link (i, j). Using the collision model, the successful reception probability depends only on the transmission probabilities of j's neighbors. Therefore, a packet is received successfully if and only if neither j nor any of the neighbors of j except i have sent a packet at the same time. Thus, the throughput of a link is given by the multiplication of the successful reception probability and the link capacity:

x_ij = c_ij p_ij (1 − P_j) Π_{k∈N_j \ {i}} (1 − P_k).    (7)

Equation (7) shows that x_ij has a product form and is non-convex, so the delay constraint (6) is non-convex. In order to obtain a convex delay constraint as a function of p_ij, we first apply a logarithmic function, which is monotonically increasing and preserves the inequality, to both sides of (6):

log( 1/x_ij + r_ij(2 − x_ij) / (2 x_ij (x_ij − r_ij)) ) ≤ log(D_c).    (8)

It is easy to show that the above constraint is a concave function of the transmission rates r_ij. Therefore, by using a change of variables of the form z_ij = log(r_ij), the delay constraint can be converted into a convex function of z_ij. This also changes the utility into a linear function:

U = Σ_{(i,j)∈L} z_ij.    (9)

We can now use scalarization for the convex goal functions and constraints in order to formulate a convex problem and find the Pareto optimal points:

minimize    λ_1 E − λ_2 U
subject to  the convexified delay constraint (8), with r_ij = e^{z_ij}, for all (i, j) ∈ L,
            p_ij ≥ 0,  Σ_{j∈O_i} p_ij ≤ 1  for every node i,    (10)

with variables p_ij and z_ij. There are many well-known algorithms for solving such convex problems. We use Sequential Quadratic Programming (SQP) [16] to solve (10). In Section V, optimal tradeoff curves for energy and utility with different delay constraints are obtained using SQP.
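For illustration, the following sketch solves a scalarized instance of this kind on a toy two-link network using SciPy's SLSQP routine in place of the SQP solver mentioned above. The topology, parameter values and variable names are our own assumptions, and the constraints are written directly in terms of the delay formula rather than the paper's log-transformed form, so this is only a rough numerical illustration of the tradeoff, not the authors' implementation.

```python
# Toy scalarized energy/utility problem: links (1,2) and (3,2) collide with
# each other; variables are the link probabilities and the log-rates.
import numpy as np
from scipy.optimize import minimize

e, c, Dc, lam1, lam2 = 1.0, 1.0, 40.0, 1.0, 1.0

def unpack(v):
    p12, p32, z12, z32 = v
    x12 = c * p12 * (1 - p32)          # link (1,2) succeeds only if node 3 is silent
    x32 = c * p32 * (1 - p12)
    return p12, p32, np.exp(z12), np.exp(z32), x12, x32

def delay(r, x):
    return 1 / x + r * (2 - x) / (2 * x * (x - r))

def cost(v):
    p12, p32, r12, r32, _, _ = unpack(v)
    return lam1 * e * (p12 + p32) - lam2 * (np.log(r12) + np.log(r32))

cons = [
    {"type": "ineq", "fun": lambda v: Dc - delay(unpack(v)[2], unpack(v)[4])},
    {"type": "ineq", "fun": lambda v: Dc - delay(unpack(v)[3], unpack(v)[5])},
    {"type": "ineq", "fun": lambda v: unpack(v)[4] - unpack(v)[2] - 1e-3},  # x12 > r12
    {"type": "ineq", "fun": lambda v: unpack(v)[5] - unpack(v)[3] - 1e-3},  # x32 > r32
]
bounds = [(0.01, 0.99), (0.01, 0.99), (-6, 0), (-6, 0)]
res = minimize(cost, x0=[0.3, 0.3, -3, -3], bounds=bounds,
               constraints=cons, method="SLSQP")
print(res.x, cost(res.x))
```

Sweeping λ_1 and λ_2 in this sketch traces out an approximate energy-utility tradeoff curve of the kind discussed in the numerical section.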
B. Feasibility of the problem
The convex formulation ensures that the problem has a unique solution in its feasible region. One remaining issue is the feasibility of the problem, which depends on the link delay constraint D_c. It is apparent that using delay constraints smaller than the average service time of any link may turn the problem into an infeasible one. So, we should find the minimum delay constraint (MinD_c) that ensures feasibility of the problem, and only adopt higher delay constraints for the network. The delay constraint formula (6) shows that the maximum link delay occurs for the link with minimum throughput. As a result, if the minimum throughput is maximized over all links, it is possible to obtain the point that can tolerate MinD_c. This is equivalent to the following max-min optimization problem:

maximize_{p}  min_{(i,j)∈L} x_ij.    (11)

This problem can also be formulated in the standard convex optimization form, given in (12). The achieved minimum delay constraint depends only on the network structure. In Section V, (12) is used to calculate MinD_c for different network structures and sizes. Henceforth, we assume that the structure of the network is approximately known and that D_c in (10) is set so that the problem is feasible.
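A brute-force version of this feasibility check for the same toy two-link network is sketched below: we maximize the minimum link throughput over a grid of transmission probabilities and take the corresponding mean service time as an estimate of MinD_c. This grid search merely stands in for problem (12), whose exact convex form is not reproduced here.

```python
# Estimate MinDc for the two-link toy network by maximizing the minimum
# throughput over a grid and converting it into a mean service time 1/x.
import numpy as np

best = (0.0, None)
for p12 in np.linspace(0.01, 0.99, 99):
    for p32 in np.linspace(0.01, 0.99, 99):
        x12, x32 = p12 * (1 - p32), p32 * (1 - p12)
        m = min(x12, x32)
        if m > best[0]:
            best = (m, (p12, p32))

min_throughput, probs = best
print("max-min throughput:", round(min_throughput, 3), "at", probs)
print("approximate MinDc (mean service time):", round(1 / min_throughput, 2))
```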
C. Distributed MAC Optimization
In general, algorithms such as SQP are applied in a centralized manner. In practice, we should use distributed algorithms in the network, so that nodes can decide and select their optimal variables through minimal interaction with other nodes. Since the problem is convex and feasible, the duality gap is zero and we can use the dual problem in order to form separate problems over the nodes. This dual decomposition approach will give update formulas for the link probabilities and rates.
First, we write the Lagrangian of (10) as follows: where µ ij is the dual variable for delay constraint of the link (i,j). Using the derivative of the Lagrangian we can find the rate update formula and the corresponding equation for the link probabilities: Using (15) and computing the summation over j, will give us the following quadratic equation for the node probabilities: New link probabilities can be found using updated node probability and dual variables: where proj[·] function projects transmission probabilities in the feasible region.
Finally, the dual variables are updated with a step-size sequence α_n. Convergence of this dual decomposition algorithm is guaranteed for small values of α_n, or when α_n goes to zero as n grows large [15]. In the numerical analysis we have used a small constant step size, since this choice does not require a synchronous update of the step size across the whole network and also reduces complexity.
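The paper's closed-form rate and probability updates are not reproduced in the text above, so the following Python skeleton only illustrates the generic structure such a dual decomposition takes: a primal minimization of the Lagrangian (which, in the distributed algorithm, decomposes across nodes) alternated with a projected dual step of constant step size. The toy two-link model and all numerical values are hypothetical.

import numpy as np
from scipy.optimize import minimize

lam1, lam2, e, alpha = 5.0, 0.1, 1.0, 0.5
mu = np.zeros(2)                                   # one multiplier per link constraint

def log_x(p_own, p_other):                         # log throughput, c = 1
    return np.log(p_own) + np.log(1.0 - p_other)

def lagrangian(x, mu):
    p1, p2, z1, z2 = x
    cost = lam1 * e * (p1 + p2) - lam2 * (z1 + z2)
    viol = np.array([z1 - log_x(p1, p2), z2 - log_x(p2, p1)])   # should be <= 0
    return cost + mu @ viol

x = np.array([0.1, 0.1, -3.0, -3.0])
bounds = [(1e-3, 1 - 1e-3)] * 2 + [(-10.0, 0.0)] * 2
for it in range(200):
    # primal step: minimize the Lagrangian for fixed multipliers
    x = minimize(lagrangian, x, args=(mu,), method='SLSQP', bounds=bounds).x
    # dual step: projected (sub)gradient ascent on the constraint violation
    viol = np.array([x[2] - log_x(x[0], x[1]), x[3] - log_x(x[1], x[0])])
    mu = np.maximum(0.0, mu + alpha * viol)

print("primal point:", x, "multipliers:", mu)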
V. NUMERICAL ANALYSIS

Three types of networks are considered in our numerical analysis. First, the sample network of Fig. 1, which has 10 nodes and 12 links, is considered. Linear and star networks are also used in order to investigate the problem for different network sizes (Fig. 2 and Fig. 3). For the linear network we assume that nodes are neighbors only of the nodes with which they share a link. In the star network we assume that all nodes are neighbors of node 1 and of their adjacent nodes, so in Fig. 3 node 2 is a neighbor of nodes 1, 3 and n. Also, in the numerical analysis we assume c_ij = 1.
A. Centralized Solution
The minimum delay constraint (MinDc) of the links is a parameter that should be set properly to guarantee the feasibility of the problem. It was shown in section IV.B that this constraint can be found by solving (12). For the sample network of Fig. 1, the MinDc that can be used is equal to 10.47.
For linear and star networks the MinDc may vary with the network size. As shown in Fig. 4, for linear networks this minimum delay changes very slowly with the network size. However, for star networks it linearly increases as the number of nodes increases.
The cost function of problem (10) is a linear combination of energy and utility. The parameters λ_1 and λ_2 can be changed in order to control the tradeoff between energy minimization and utility maximization. Although it is possible to use MinDc as the delay constraint in problem (10), doing so results in only one feasible point. We therefore use delay constraints of about 4×MinDc and more in order to obtain a large enough feasible region; this allows λ_1 and λ_2 to better control the tradeoff between energy and utility. Fig. 5 shows the tradeoff between energy and utility for three different delay constraints in the case of the sample network. As the delay constraint decreases, the optimal points have less energy and more utility. Our numerical analysis shows that increasing the delay from 40 to 100 is more effective than increasing it from 100 to 1000. Also, three regions, from left to right, can be distinguished on each curve. At large values of λ_1, the energy is near its minimum value and changes very slowly, but the utility decreases at a high rate. In the next region, the tradeoff between energy and utility is more evident. The last region is where the utility slowly reaches its maximum value at the cost of doubling the energy consumption from 1.5 to 3.
B. Distributed Algorithm
A simple distributed algorithm was described in section IV.C. We have used this algorithm for the case of (λ_1, λ_2) = (5, 0.1) and a delay constraint equal to 100. The algorithm starts from an initial transmission probability of 0.1 for all links. Results of the distributed algorithm are then compared with the optimum point in Fig. 6, where the percentage of error in the network cost function, the transmission probability of link (1,2), and the link data rate is shown. In these curves, the error percentage of a quantity x is defined as (x(itr) − x_opt)/x_opt, where x(itr) is the value at iteration itr and x_opt is the optimal value. If we use an error of less than 1% as the measure of convergence, it can be verified that the distributed algorithm converges in about 12 iterations for the sample network.
One interesting question to address is how the convergence of this algorithm scales with the network size. In order to investigate this effect, a linear network, in which the minimum delay does not scale with the network size, has been considered. The convergence rate is then compared for different network sizes with D_c = 100 and (λ_1, λ_2) = (5, 0.1). Our numerical analysis shows that for all linear network sizes between 4 and 32, the number of iterations required for convergence is roughly 15. In order to explain this, we note that in gradient-direction methods the computations scale with the dimension of the problem. However, when distributed computation is used, the number of computing nodes also scales with the network size. Consequently, the number of iterations needed by the nodes to find the optimum point does not scale with the network size.
VI. CONCLUSION AND FUTURE WORKS
In this paper, a delay constraint was added to the previous work on energy-utility optimization in random access networks. We have modeled links as M/G/1 queues and used this model to calculate the average delay of the random access protocol. Non-convexity of the delay constraint was the main obstacle for centralized or distributed optimization algorithms. After suitable transformations, the problem is converted into convex form. Subsequently, the bicriterion problem of energy-utility optimization with a delay constraint is formulated as a standard convex optimization problem, and dual decomposition is used to obtain a distributed solution. This convex problem is not only useful for network design but can also be used to find the optimum achievable energy and utility values at different delay constraints. The minimum delay constraint that ensures feasibility of the problem is also considered in the paper. It is shown that a maxmin problem should be solved in order to find MinDc, and a convex equivalent is given for this maxmin problem.
Our numerical analysis shows the tradeoff between energy, utility and delay in the random access network. We showed that in some regions a gain in energy or utility is possible only at the cost of a loss in the other. Also, increasing the delay constraint near MinDc is more effective than increasing it at large delay values. The convergence rate was another parameter considered in the numerical results, and the relationship between network size and convergence rate was addressed for linear networks.
The next step in this line of work is to consider delay in the cross-layer problem of MAC-transport optimization, where end-to-end delays and rates must be taken into account. In this case, the assumption of Poisson arrivals at the nodes should be revised. Our initial work shows that this cross-layer problem is non-convex and non-separable.
Counting within the Subitizing Range: The Effect of Number of Distractors on the Perception of Subset Items
When exploring the mechanisms involved in perceiving numbers we must distinguish between two types of numbers: subset numbers (e.g., perceiving "2" when two plates and one cup are displayed on a table) and the total number of items (e.g., perceiving "3" objects in the previous example). Combining feature perception theories with number perception theories, the current paper explores the mechanisms involved in the perception of small numbers in feature-defined subsets. The paper introduces several theories for how subset items can be represented and examines an important prediction of those theories: Will the number of distractors affect the perception of small subset items? In two experiments, we found that the response time (RT) for counting small target items that differ from their distractors by a single feature was faster when there were few distractors compared to many distractors. This was found for different types of distractors: distractors within and outside the subitizing range. Only when distractors were organized in a specific pattern that allowed distractor grouping did the increase in the number of distractors not affect target counting. The current study suggests that even when performing simple counting of subset targets, the enumeration process can begin only once the locations of the targets have been identified and the targets' shape is bound to these locations. This pre-counting procedure depends on the number of individual locations occupied by the distractors. These findings are further discussed within the context of the object file theory.
Introduction
The ability to perceive quantities exists not only among human adults but also in some form among infants and animals [1][2]. It has been shown that even very young infants can discriminate between quantities. The perception of quantities develops gradually throughout infancy, long before children learn to count verbally or to identify the arithmetic symbols [3][4]. Kaufman, Lord, Reese and Volkmann [5] named the perception of a small number of items "subitizing" (from the Latin word "subitus" meaning "sudden") to reflect the notion of one's sudden perception of small numbers. This type of perception relates to the processing of one to three or four items. Many studies show that typically developed children and adults perceive quantities within the subitizing range differently than quantities beyond that range. For example, numbers within the subitizing range are perceived more rapidly and accurately than quantities larger than four [6]. In addition, in comparison to the subitizing range, steeper response time slopes are observed for items beyond the subitizing range. That is, beyond four items, the response time significantly increases linearly with each item added [7][8][9]. This is thought to suggest that unlike numbers within the subitizing range that are perceived rapidly, numbers larger than four require serial processing.
It has been proposed that a parallel pre-attentive process is required for perception of numbers within the subitizing range, whereas numbers beyond the subitizing range are connected to a serial attentive process. Piazza, Giacomini, Le Bihan and Dehaene [10] in an fMRI study found a sudden increase in the activity of attention related areas in the posterior parietal and frontal cortices, only for displays that contained four or more items. These results indicate that processing numbers beyond the subitizing range requires greater attention resources than those needed for perception of numbers in the subitizing range. On the other hand, other studies suggest that subitizing depends to some extent on attentional resources [11][12][13][14][15].
In an article recently submitted for publication we [16] discussed the role of attention resources in small number processing. We explored the perception of a total number of items in a display in comparison to the perception of the number of subset items within that total. We suggested that perceiving the number of items in a subset (e.g., perceiving "2" when two plates and one cup are displayed on a table) is qualitatively different than perceiving the total number of items (e.g., perceiving "3" in the previous example). This kind of number of subset perception might occur via an attentional path and it might be an effortful process. At an early perception stage, features in our surroundings such as size, color, and shape are perceived in their special feature maps rapidly, simultaneously, and without utilizing attentional resources [17][18][19]. Detecting numbers in a subset adds a perceptual problem to the enumeration procedure -the problem of detecting identical elements. In order to individuate identical items, a representation must be created in which each individuated location is bound to the specific identity. According to this notion, a subset enumeration requires the creation of a mental representation in which the identity of the items, such as their shape, must be perceived and distinguished from one another in order to enable enumeration. Unlike perception of the total number of items, which only requires knowledge about the location of items (and not their other features), counting a subset first requires the identification of each item presented in relation to the relevant feature that is being counted (e.g., a certain shape). In other words, perception of the number of subset items occurs only after the creation of a mental representation in which individual locations and the relevant identity (e.g., shape) of the items in these locations are already known. In line with this suggestion, Goldfarb and Treisman [16] found that binding errors that characterize the binding of different features of items (such as color and shape) are not present when the binding is between the shape of the items within a subset and their number. These results indicate that while the perception of features such as color and shape occurs simultaneously, the perception of the number of a subset is not concurrent with the perception of the shape of the subset items, but rather follows it via an attentional binding procedure.
Exploring the perception of subset numbers is of importance since it has been found to be related to other mathematical abilities in school learning. For example, Halberda, Mazzocco and Feigenson [20] found a correlation between the acuity of the approximate number system (ANS), measured by performance on a task that requires the perception of subset, and symbolic math performance. Their results show that perception of subset in the ninth grade was correlated with symbolic math performance of individual students from as early as kindergarten. In addition to its relevance for math abilities, the perception of subset numbers is necessary in everyday life, when objects usually appear as part of an overall group of items. In everyday life we rarely have need to simply enumerate the number of "things" (items whose identity is irrelevant). However, we often need to enumerate pre-defined subset items from among a total number of items (e.g., enumerate the number of cups on a table laid with both cups and plates).
The current study
In our previous study, we [16] explored the perception of subset number by addressing the phenomenon of binding errors. In the current paper we explore a new question related to the perception of subset numbers by addressing a different perceptual phenomenon: the interference of the number of distractors. As noted previously, in Goldfarb and Treisman [16] we suggested that perception of the number of subset items occurs only after the creation of a mental representation in which the individual locations and the shape of the items are already bound. An important question arising from this notion is what exactly is the form of this mental representation and, specifically, how do the main characteristics of this mental representation affect the counting of a select subset (i.e., when one must count a certain subset while ignoring other items).
One option for the mental representation by which the system can count subset numbers, suggested in our two recent studies [16,21], is illustrated in Figure 1a. According to this option, each instance (such as the features X or O) is stored in the recognition system only once. When identical objects appear simultaneously (e.g., many Xs among Os) and one has to count target items (e.g., Xs), the system must check the shape of each location in the surroundings and then match and bind the shape of the target items to the relevant location. Only after this stage is performed can the arithmetic system enumerate the number of target subsets. This option is similar to that reflected in the object file theory [22] and it also makes a clear prediction regarding distractor interference when counting subsets: the more distractors there are, the more demanding the perception of the targets' number.
Another option is that the number of a subset could be directly pulled out of the relevant feature map, such as the shape map (see Figure 1b). According to the Boolean map theory [23], at an early perception stage, each instance of a feature (e.g., the instance red and the instance green) is represented in a separate map (i.e., a red map and a green map) and these maps cannot be accessed simultaneously. On the other hand, multiple locations of identical items (i.e., many reds) can be represented in a single map and be accessed simultaneously. One possible theoretical option that can arise from this kind of representation is that the number of a certain instance can be represented independently from the existence of other locations filled with other irrelevant distractors.
This means that in a case where the targets are identical instances that differ from the distractors in only one feature (i.e., reds among greens or Xs among Os), the number of distractors will not interfere with the perception of the targets' number. Similarly, according to the notion of the activation map [24], when feature targets are searched for within a field of distractors, the relevant locations can be easily activated. Hence, theoretically, the arithmetic system can directly enumerate these marked locations, and the number of distractors will not interfere with perception of the targets' number.
A third option (see Figure 1c) can be that at an early stage of perception, each subset number is perceived quickly and independently from among other subset numbers in a display. We noted that at an early perception stage, features in our surroundings such as size, color, and shape are perceived in their special feature maps rapidly, simultaneously, and without utilizing attentional resources [17][18][19]. For example when we see a red cup, the color area quickly processes the red color and is indifferent to the shape, while the specialized shape area quickly processes the cup shape and is indifferent to the color. The binding between the color and the cup occurs only at a later perception stage. The same principle may apply to the processing of subset numbers. When we see 2 plates and 3 cups in a display we might first represent the arithmetical features ''2'' and ''3''. In order to perceive that there are 2 plates and 3 cups in a later perception stage we attach the correct number to the shapes of the cups and the shapes of the plates. In this case, the arithmetic system might be the unit in which the arithmetical features (e.g., ''2'' and ''3'') are represented before any binding to other features takes place.
In Goldfarb and Treisman [16] we rejected this option in the context of binding error. We demonstrated that the number information (''3'' or ''2'') is not subject to the same perceptual illusions as other features, for example, forming illusory conjunctions similar to color, shape, and motion. However if this option was correct and numbers of subsets would independently "pop out" of a display, then this theory also has a clear prediction regarding another perceptual effect: distractor interference. According to this theory, similar to the second option that is based on Boolean maps, an increased number of distractors will not gradually increase reaction time (RT) for the enumeration of targets.
In conclusion, in this study we intend to expand the finding of Goldfarb and Treisman [16] in relation to the process involved in counting subset items (as described in Figure 1a) to another perceptual effect: distractor interference. Although in everyday life we often need to enumerate pre-defined subset items, in a surrounding that contains both targets and non-target items (e.g., enumerate the number of cups on a table laid with both cups and plates), this is the first study that is designed to directly address the role of the number of distractors in a feature subset counting.
Experiment 1
In Experiment 1 participants were asked to count the number of targets within the subitizing range while ignoring distractors. The distractors were either within the subitizing range or outside the subitizing range, and they were either few or many (see Figure 2). If counting a subset depends on the prior binding between each possible location and its shape (as in the first option described in Figure 1a), then the RT for counting subset target items within the subitizing range should be faster in a display with few distractors than in one with many distractors. This pattern should be observed regardless of the nature of the distractors (i.e., whether the distractors are within the subitizing range or outside it). On the other hand, the number of distractors should not interfere with the perception of the targets' number if the number of a certain target can be directly pulled out of the scene, or out of a relevant feature map such as a shape map. The same holds if there is a representation in which the activation of all the locations of the feature target objects pops out (as in the second and third options described in Figures 1b & 1c). In these cases the effect might also be determined by the nature of the distractors (i.e., distractors within the subitizing range, which allow quick perception and are similar in number to the targets, can be quickly pulled out of the display and might lead to a different effect than distractors that cannot).
Method
Participants. Sixteen adults between the ages of 18-26 (M = 20.75, SD = 2.15) participated in this experiment. All participants stated that they had no learning disabilities or attention deficits. The participants received credit points for their participation, as part of their obligations as first year students at Haifa University, or were paid 15 new Israeli shekels. Written informed consent was obtained from all participants. The Ethical Committee of Haifa University approved all the procedures in this study.
Measures. Stimuli were composed of black Xs (targets) and Os (distractors) placed on an invisible 6×6 object grid sized 8.5×6.5 cm. The size of each X and O was approximately 1×0.7 cm. Each stimulus contained either 3 or 4 Xs (targets). The experiment had two types of distractors: subitizing range distractors and beyond the subitizing range distractors. In the subitizing range distractors condition there were either 2 (few) or 4 (many) Os (distractors) (see Figure 2). For each combination of targets and distractors (3 Xs and 2 Os, 3 Xs and 4 Os, 4 Xs and 2 Os, 4 Xs and 4 Os) 8 different versions were created in which each distractor and target could appear in different locations on the grid. In total there were 32 different stimuli in this category. In the beyond the subitizing range distractors condition there were either 12 (few) or 24 (many) Os (distractors) (see Figure 2). For each combination of targets and distractors (3 Xs and 12 Os, 3 Xs and 24 Os, 4 Xs and 12 Os, 4 Xs and 24 Os) 8 different versions were created in which each distractor and target could appear in different locations on the grid. In total there were 32 different stimuli in this category. For both the subitizing range and beyond the subitizing range conditions, for each of the 8 versions the targets' locations on the invisible grid remained the same for all types of distractors (2/4/12/24 distractors).
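For readers who wish to reproduce the displays, the following short Python sketch shows one way such stimuli could be generated: targets and distractors are assigned to random, non-overlapping cells of the invisible 6×6 grid. The function name and representation are hypothetical; rendering and timing were handled by the experiment software (E-Prime) in the actual study.

import random

def make_display(n_targets, n_distractors, grid=6):
    cells = [(row, col) for row in range(grid) for col in range(grid)]
    chosen = random.sample(cells, n_targets + n_distractors)   # non-overlapping cells
    return {**{cell: 'X' for cell in chosen[:n_targets]},
            **{cell: 'O' for cell in chosen[n_targets:]}}

# Example: a "many distractors, beyond the subitizing range" display (4 Xs, 24 Os).
display = make_display(4, 24)
print(sum(symbol == 'X' for symbol in display.values()), "targets placed")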
Procedure
The experiment was programmed on E-Prime 2.0. An HP Compaq computer with an Intel core i7-2600 central processor was used to present stimuli and to collect the data. Stimuli were presented on a 22 inch Samsung monitor, while participants sat at a distance of about 60 cm from the screen. A keyboard on which participants conveyed their answer was placed on a table next to the screen. Each participant was tested individually and the experiment took about 10 minutes in total.
The experiment began with a practice block that contained 16 stimuli presented randomly. Then an experimental block began. This block contained 64 stimuli, each presented three times at random (a total of 192 stimuli). Each trial began with a fixation that appeared for 512 ms, followed by the grid with targets and distractors that remained on the screen until the participant responded. All stimuli appeared in the center of the screen on a white background.
Participants were asked to report the amount of targets (Xs) in each stimulus as quickly and accurately as possible. They were asked to use their left index finger to press the number "3" and their right index finger to press the number "4", keeping both fingers on the keyboard at all times. Response time and accuracy were measured by the computer.
Results and Discussion
Trials in which participants did not answer correctly (2.6%) and trials in which the RT was faster than 250 ms or slower than 2500 ms were not included in the analysis (0.7%).
For the remaining trials, the mean RT for each participant in each relevant condition was calculated. A two-way analysis of variance was applied to the mean RT with type of distractors (subitizing range/beyond the subitizing range) and number of distractors (few/many) as within participant factors.
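A minimal sketch of this analysis pipeline in Python, assuming a hypothetical long-format table "trials" with one row per trial and columns for subject, condition, accuracy and RT: error trials and RT outliers are excluded, RTs are averaged per participant and cell, and the 2×2 repeated-measures ANOVA is run with statsmodels. Column names are illustrative, not those used in the study.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

def analyse(trials: pd.DataFrame):
    # keep correct responses with RT between 250 and 2500 ms
    ok = trials[(trials.correct == 1) & trials.rt.between(250, 2500)]
    # mean RT per participant in each distractor-type x distractor-number cell
    cell_means = (ok.groupby(['subject', 'distractor_type', 'distractor_number'],
                             as_index=False)['rt'].mean())
    model = AnovaRM(cell_means, depvar='rt', subject='subject',
                    within=['distractor_type', 'distractor_number'])
    return model.fit()

# print(analyse(trials))  # the resulting table reports the within-participant effects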
The results revealed a significant main effect of the number of distractors (few/many; see Figure 3). Overall, the results indicate that RT for counting subset target items within the subitizing range became slower as the number of distractors increased. This effect was not modulated by the nature of the distractors (i.e., whether or not the distractors were also within the subitizing range).
Experiment 2
We noted that in everyday life objects usually appear in an environment together with other kinds of objects. Hence we often need to enumerate pre-defined subset items out of a total number of items. Experiment 1 showed that counting subset items depends on the number of distractors, meaning that the subset number is not simply pulled out independently of other distractors. However, under some perceptual conditions, subset items might be enumerated without checking the location of each distractor. For example when one has to count the number of apples on a tree that contains both apples and leaves, the setup of the leaves can affect the counting. The leaves on the tree could be organized in a way that creates a grouped single object with the tag "the crown of the tree". Hence in this setup, adding more leaves in a way that will be grouped with the existing tree crown will not add more distractions to the apple counting. In other words, the target search will not be conducted against more objects that interfere with the search. Hence in order to conclude if the number of distractors interferes with subset counting, we first need to identify what counts as an object that interferes with the counting. An object can be formed in many ways, of which grouping may be one [25][26].
Grouping is a common phenomenon in which elements are grouped together intentionally or automatically to form a single object (for a review see [27]). Several studies conducted with brain damaged patients suggest that grouping may prevent the item individuating process [28][29]. We have also recently suggested that in normal populations, the grouping of elements may form a single object even before the specific elements that construct it have been individuated [30]. In addition, the tag given to an object file based on grouped elements (i.e., the tag: crown of the tree) can replace the specific object tags that are given to the individuated items (i.e., each single leaf tag) [31]. Similarly, Treisman [32] found that in a search paradigm when the distractor items can be grouped, participants do not serially check the location of each distractor individually but rather refer to them as a group of distractors.
Hence, the described effect of RT increase for subset counting as a result of the increase in the number of distractors might depend on the distractors' organization. This effect might be reduced or even eliminated in an organized display of distractors that enables the additional distractors to be grouped together with the other distractors.
The following experiment is meant to test this assumption as well as to replicate the results of Experiment 1. Experiment 2 includes two types of distractor arrangement: (a) a jumbled distractors' display as in Experiment 1 (that will require a serial attentive location check of the distractors) and (b) an organized distractors' display in which the distractors are arranged as a quadrilateral background. This quadrilateral arrangement is adapted from [32] where it has been shown that similar quadratic arrangement facilitated the grouping of distractors. Unlike the jumbled display, this arrangement facilitates the grouping of identical distractors since it follows several laws of grouping such as the law of proximity, the law of continuity, the law of good gestalt, and the law of symmetry and parallelism (see [33] for a review). Hence we predict that in the jumbled distractors' display, the same pattern of results as in Experiment 1 will be found. That is, RT for counting subset target items within the subitizing range will be faster with few distractors than with many distractors, for both distractors within the subitizing range and beyond the subitizing range. However, when distractors are organized in a specific pattern, allowing grouping, there will be no difference in RT for counting subset target items between few distractors and many distractors.
Method
The method in Experiment 2 was similar to that of Experiment 1 aside from the following changes. Sixteen participants took part in the experiment, with an age range of 20-34 (M = 26.12, SD = 3.54). Xs and Os sized approximately 0.8×0.5 cm were placed on an invisible 7×7 object grid of a size similar to the 6×6 grid of Experiment 1 (8.5×6.5 cm). As noted, this experiment included a jumbled display condition that was similar to the displays in Experiment 1. In addition, this experiment also included an organized display condition that allowed distractor grouping. In this condition the distractors were arranged as a quadrilateral background using two types of distractors: distractors below 30 (small group) and distractors above 30 (large group). The distractors below 30 condition was similar in the amount of distractors to the beyond subitizing range condition in Experiment 1. In this condition each stimulus was constructed as an invisible grid filled with Os (distractors) and either 3 or 4 Xs (targets). The invisible grid was either 4×4 (few) or 4×7 (many) (see Figure 4). For each of those options (3 Xs in a 4×4 grid filled with Os, 3 Xs in a 4×7 grid filled with Os, 4 Xs in a 4×4 grid filled with Os, 4 Xs in a 4×7 grid filled with Os) 6 different versions were formed in which targets could appear in different locations on the grid. In total there were 24 different stimuli in this category. The distractors above 30 condition was similar to the distractors below 30 condition except for the number of distractors it contained. The purpose of displaying this condition was to verify that the results would not be restricted to a specific number range. In the distractors above 30 condition each stimulus was constructed as an invisible grid filled with Os (distractors) and either 3 or 4 Xs (targets). The invisible grid was either 6×6 (few) or 7×7 (many) (see Figure 4). For each of these options (3 Xs in a 6×6 grid filled with Os, 3 Xs in a 7×7 grid filled with Os, 4 Xs in a 6×6 grid filled with Os, 4 Xs in a 7×7 grid filled with Os) 6 different versions were formed in which targets could appear in different locations on the grid. In total there were 24 different stimuli in this category.
For both the distractors below 30 and the distractors above 30 conditions, for each one of the 6 versions the location of the targets on the invisible grid remained the same for all types of distractor grids (4×4/4×7/6×6/7×7). Overall, the experimental block contained stimuli from both the jumbled and the organized display conditions.
Results and Discussion
Trials in which participants did not answer correctly (2.9%) and trials in which the RT was faster than 250 ms or slower than 2500 ms were not included in the analysis (0.2%).
For the remaining trials, the mean RT for each participant in each relevant condition was calculated. A three-way analysis of variance was applied to the mean RT with type of display (jumbled/organized), type of distractors (small group: subitizing range or distractors below 30/large group: beyond the subitizing range or distractors above 30), and number of distractors (few/ many) as within participant factors.
There was a significant interaction between the type of display and the type of distractors. Overall, the results suggest that in a jumbled display, as in Experiment 1, the amount of distractors interfered with the counting of a subset within the subitizing range. However, in an organized display, RT for counting subset target items within the subitizing range did not become slower as the amount of distractors increased.
Notably, one can claim that this pattern of results might derive from differences in crowding between the different conditions. One of the main characteristics used to define crowding is the density of the objects surrounding the targets [34]. One can assume that as the number of distractors increases, the display becomes more crowded and RT is slower. Hence it is possible that crowding may explain the finding that in the jumbled displays RT was slower when the display contained more distractors. Although the different conditions in the jumbled display of Experiment 2 (which were the same conditions as in Experiment 1) might differ in their crowding, it is important to note that the crowding factor probably does not explain our current data. The reason for this is that it has been found that when targets and distractors differ from each other by a feature such as shape (as in the current study) the crowding of the distractors does not have a major effect on perception of the target [34]. However, in order to rule out the crowding effect we conducted the following analyses. In the first analysis we compared the RT in the display with 12 jumbled distractors (M = 801.75, SD = 97.5) (e.g., see Figure 2) to the 4×4 organized grid that had 12 or 13 distractors (M = 790.34, SD = 82.19) (e.g., see Figure 4). Note that the organized 4×4 display is more crowded than the jumbled display, such that if crowding were the explanation for our previous results then RT in the organized display should be slower than in the jumbled display. However, the mean RTs showed a non-significant pattern in the opposite direction [t(15) = −0.9, n.s.]. Similarly, we conducted another analysis in which we compared the display with 24 jumbled distractors (M = 816.21, SD = 106.72) (e.g., see Figure 2) to the 4×7 organized grid, which had 24 or 25 distractors (M = 791.43, SD = 89.61) (e.g., see Figure 4). Once again, note that the organized 4×7 display is more crowded than the jumbled display, such that if crowding were the explanation for our previous results then RT in the organized display should be slower than in the jumbled display. Again, the mean RTs did not support this assumption, as they showed a significant pattern in the opposite direction [t(15) = −3.09, p < .01]. Hence the crowding factor does not seem to explain our current results, and in jumbled displays crowding cannot account for the increase in RT as the display contained more distractors.
General Discussion
In conclusion, this study was designed to directly address the role of the number of distractors in subset enumeration. The study examined and compared different types of distractors: distractors within and outside the subitizing range, small and large amount of distractors, and examined the different arrangement of distractors in the display. In two experiments participants were asked to report the number of target items (Xs) among distractors (Os). Experiment 1 had two types of distractors: subitizing range distractors (few and many) and beyond the subitizing range distractors (few and many). The results demonstrated that RT for counting subset target items within the subitizing range was faster when there were few distractors compared to many distractors, both for distractors within the subitizing range and beyond it. Experiment 2 replicated this finding and also showed that when distractors were organized in a specific pattern, allowing grouping, an addition of distractors did not cause an increase in RT for counting subset target items.
The results indicate that when distractors are not grouped and subset counting is required, the number of subset items cannot be pulled out directly regardless of the number of other items in the display. The addition of distractors increases the cost in terms of RT.
It has been suggested [9] that it is possible to subitize items in an environment that contains distractors when the identities of the target and the distractors differ by a single feature (i.e., counting reds among greens or counting Xs among Os). However, subitizing is not possible when the identity of the targets differs by bound features (i.e., counting red Xs among red Os and green Xs). The current study suggests that even in order to perform a simple counting of a subset target (that differs from its distractors by a single feature), the enumeration process can begin only after the locations of the targets are identified and the targets' shape is bound to those locations. This pre-subitizing or pre-counting procedure depends on the number of individual locations occupied by the distractors. In a jumbled display, additional distractors added additional individual locations, resulting in increased RT. In contrast, in a grouped display such as the organized display condition in Experiment 2 -the additional distractors are perceived as part of the other distractors. The distractors are perceived as a single unit, group, or texture and additional distractors do not add additional individual locations. This, as predicted, does not result in RT increase as a simple function of the number of distractors as in this case there is no need for the attentional process to check each additional individual location.
In the introduction we mentioned several theories that may explain how subset numbers can be enumerated. The current results suggest that the number of a subset cannot be directly pulled out of the shape feature map without checking the location of each individual distractor. Similarly, the arithmetic system cannot directly count marked locations in an activation map without checking the location of each distractor. The results also suggest that the system cannot simply perceive the number of a subset as it perceives other features. This means that the number of a subset is not perceived quickly and independently similar to other features of the target such as the target's color or shape. Instead we suggest, based on the assumptions of the object file theory [22] (as described in Figure 1 -Theory 1), that perception of the number of subset items occurs only after the formation of a mental representation in which the individual locations and the shape of the items are already bound. According to this option, when identical subset items need to be enumerated the system must check each potential location in the environment and then match and bind the object's shape to its location. Only after this stage has been completed can the arithmetic system identify the number of target subsets. This option is compatible with the finding that even for targets within the subitizing range, the more individual distractors there are the more demanding is the perception of the targets' number.
QML estimation with non-summable weight matrices
This paper revisits the theory of asymptotic behaviour of the well-known Gaussian Quasi-Maximum Likelihood estimator of parameters in mixed regressive, high-order autoregressive spatial models. We generalise the approach previously published in the econometric literature by weakening the assumptions imposed on the spatial weight matrix. This allows consideration of interaction patterns with a potentially larger degree of spatial dependence. Moreover, we broaden the class of admissible distributions of model residuals. As an example application of our new asymptotic analysis we also consider the large sample behaviour of a general group effects design.
Introduction
It is a broadly employed assumption in a wide range of theoretical studies on spatial econometrics that the spatial weight matrix is absolutely row and column summable. This restriction is mostly a result of the Central Limit Theorem (CLT) used in the derivation of the result on asymptotic behaviour. Historically, it can be traced to the works of Kelejian and Prucha, e.g. Kelejian and Prucha (2001), who were first to formulate their assumptions as explicit requirements regarding the spatial weight matrix. Their CLT, which turned out to be a milestone in the development of asymptotic theories for spatial econometric models, relies on the absolute summability of the weight matrix involved. In our study we attempt to reconsider this approach, and we focus specifically on the Quasi Maximum Likelihood (QML) estimator for the spatial autoregressive model. By revisiting the classical argument of Lee (2004) and, importantly, introducing a generalised CLT for linear-quadratic forms, we are able to provide a theory for consistency and asymptotic normality of QML estimates for high-order spatial autoregressive models under relaxed conditions. In particular, our approach allows for spatial weight matrices that, even if row-standardised, may not be absolutely column summable.
The standard approach, with the absolute summability requirement on the weight matrix, undoubtedly has the appeal of a simple, self-contained theory. Although there might be a perception that the constraint is necessary for showing the desired asymptotic behaviour of various estimation schemes, it has already been recognised that this is not the case, see, e.g. Gupta and Robinson (2018). Indeed, the boundedness of row and column sums can be replaced with the less restrictive requirement of boundedness of spectral norms. Unfortunately, the standard analysis excludes some of the spatial interaction patterns in which the number of spatial units influenced by any given unit grows to infinity. In particular, under infill asymptotics, if spatial units are assumed to interact with other units within a given distance, then the number of nonzero elements in a row or column of the spatial weight matrix grows with the sample size, leaving the potential for either row or column non-summability. Similarly, under increasing domain asymptotics certain spatial weight matrices based on inverse distance also lead to non-summable interaction patterns. Theoretical considerations can also reveal another limitation of the standard analysis. For example, consider the case of an initial model specification which is subjected to a transformation (e.g. linear filtering or demeaning) to obtain its final estimated form. Under the standard analysis, it is necessary to ensure that the applied transformation preserves the summability of rows and columns of the spatial weight matrix. As a result, the requirements of the standard analysis narrow the class of possible transformations of the model. A somewhat similar problem occurs if the so-called linear structure representation for model innovations is assumed. Then, following the standard approach, the derivation of the asymptotic distribution requires additional restrictions on the class of possible linear relations involved.
This paper provides a positive solution to a problem left by Gupta and Robinson (2018), who have made the first attempt to replace the requirement of uniform summability of the spatial weight matrix with boundedness in the spectral norm. In that earlier work, extending the scope of spatial weight matrices beyond the standard asymptotic analysis of Lee (2004) was found to be useful, and results analogous to our Theorem 1 on consistency were independently obtained. However, their derivation of the asymptotic distribution of the estimates still relies on the assumption of absolute summability. Whether a relevant asymptotic distribution theorem under relaxed conditions is possible has been left as an open question.
Therefore, the aim of this paper is to present a refinement of the asymptotic analysis of the Gaussian Quasi-Maximum Likelihood (QML) estimator for high-order spatial autoregressive models, with regard to the assumptions imposed on the spatial weight matrix. We further present an example of a possible application of our theorems. To this end, we develop a general group effects, high-order Spatial Autoregressive (SAR) model. Our approach to eliminating the effects components from the spatial process generalises that of Lee and Yu (2010a, b) and Lee et al. (2010), as well as Olejnik and Olejnik (2017), to the high-order autoregressive case. Finally, we present statements on consistency and asymptotic normality of the resulting QML estimator, which would not be possible to obtain with the standard argument.
The paper is organized as follows. Section 2 describes the motivation for addressing the topic. Section 3 presents our statements on the consistency and asymptotic normality of the Gaussian QML estimator. Finally, Sect. 4 develops an estimator for a high-order, spatial autoregressive, general group effects model, together with an analysis of its large sample behaviour. Appendices contain some details of the proofs, as well as a set of Monte Carlo simulations that empirically demonstrate the validity of the theory under the relaxed conditions.
Motivation for the refined asymptotic analysis
This section presents some basic examples of the application of our asymptotic analysis. First we present a class of spatial interaction schemes which cannot be handled by the standard asymptotic analysis of Lee (2004). Then, we describe a class of theoretical arguments for which the new analysis demonstrates a clear advantage over the standard approach. We also discuss the assumptions made in relation to the error term. We conclude the section with a brief discussion on whether our results may be considered optimal.
First, however, let us introduce the notation used in this paper when referring to norms. Unless stated otherwise, vectors, i.e. elements of ℝ^m, are column vectors. For an arbitrary row or column vector x the symbol ‖x‖ denotes the usual Euclidean vector norm, which will also be called the l_2 norm. The quantity ∑_{i=1}^{m} |x_i| is referred to as the l_1 norm. The same symbol, when used for matrices, denotes the induced spectral norm; that is, for a matrix A, the quantity ‖A‖ is the largest singular value of A. For square matrices A we will also use the Frobenius norm ‖A‖_F, the maximum absolute column sum norm ‖A‖_1, and the maximum absolute row sum norm ‖A‖_∞.
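A minimal NumPy sketch of the four matrix norms just introduced, computed for an arbitrary square matrix; the matrix here is random and purely illustrative.

import numpy as np

A = np.random.default_rng(0).random((5, 5))

spectral  = np.linalg.norm(A, 2)        # ||A||   : largest singular value
frobenius = np.linalg.norm(A, 'fro')    # ||A||_F : Frobenius norm
col_sum   = np.linalg.norm(A, 1)        # ||A||_1 : maximum absolute column sum
row_sum   = np.linalg.norm(A, np.inf)   # ||A||_inf : maximum absolute row sum
print(spectral, frobenius, col_sum, row_sum)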
Elementary examples of non-summable interaction patterns
One essential feature of an asymptotic theory for spatial econometric models is the set of assumptions imposed on the spatial weight matrix. These assumptions limit the amount of spatial interactions to a manageable degree, such that the statements on the large sample behaviour of the estimation scheme under question remain valid. It is a widely adopted standard in contemporary spatial econometrics to require that the spatial weight matrix is row and column summable. More precisely, with the conventional notation that W n is the spatial weight matrix for the sample size n ∈ ℕ , it is required that the quantities ‖W n ‖ 1 and ‖W n ‖ ∞ are both uniformly bounded in n. These conditions turn out to be unnecessarily restrictive in limiting the scope of spatial interactions that can be incorporated in an econometric model.
Let us start with a theoretical remark. Notice that the row and column summability of W_n is equivalent to the rows and columns constituting a bounded set in l_1. However, a reader familiar with the ubiquitous nature of the theory of square-summable functions and sequences in much of the mathematical econometrics and geostatistics literature might expect that the l_2 norm would play a major role in the asymptotic theory, at least for some simple cases. In fact, we find that the connection between the sharp boundedness condition for an asymptotic theory and square-summability is more nuanced. Instead of examining the properties of the rows and columns of the spatial weight matrices, it is necessary to consider the sequence of the respective spectral norms ‖W_n‖. The requirement then turns out to be the boundedness of this sequence, which will be discussed in subsequent sections.
The following motivating examples are connected to the class of Inverse Distance Weighting (IDW) interaction schemes, which find a wide range of uses in spatial econometrics and other quantitative methods of geography. Let us assume that the strength of spatial interaction, represented by the spatial weights, is of the form w_ij = 1/dist(i,j)^α, where α > 0 is a parameter and dist(i, j) is a measure of the distance between units i and j. A question then arises: what are the values of α for which an individual row or column of the matrix W = [w_ij] satisfies the requirement of boundedness imposed by the standard analysis? That is, for which values of α is it absolutely summable?
The answer to this question will depend on the nature of the spatial domain and the type of asymptotic scheme employed. We focus our attention here on increasing domain asymptotics, since that is the more natural asymptotic scheme in this context. Let us now consider a one-dimensional spatial domain in which the spatial units are more or less evenly spaced, or at least the distance between each pair of consecutive units does not exceed a constant distance D > 0. Then, for an arbitrary unit j, the column sums (and, by symmetry, the row sums as well) of such a matrix W_n = [w_ij]_{i,j≤n} satisfy the bound in Eq. (1), whose right-hand side converges if and only if α > 1. Let us note that the condition of square summability of the j-th row/column would, in turn, be satisfied whenever the analogous series with exponent 2α converges, that is for α > 1/2. Non-summable distance-related matrices were also investigated by Lee (2002) in the context of the Ordinary Least Squares (OLS) estimator. In that work, the complementary condition, that is α ≤ 1/2, was derived as necessary and sufficient for consistency and asymptotic normality (at the rate √n) of the OLS estimator if the matrix of inverse distances is row-normalised prior to being used in the model. With α ≤ 1/2 the amount of spatial interaction (in each row) is intractable for Maximum Likelihood (ML) estimation. However, dividing the elements of the weight matrix by the increasing row sums leads to max_{i,j≤n} w_ij → 0 and (1/n) tr(G_n) → 0, with G_n = W_n(I_n − λ_0 W_n)^{-1} and λ_0 being the autoregressive parameter (see footnote 6). According to Lee (2002), the dominant component of the bias of the OLS estimator is proportional to (1/n) tr(G_n). That is to say, row-normalisation reduces the spatial dependence to the extent that the OLS estimator becomes consistent. It should be noted that the results of Lee (2002) still require the spatial weight matrix (after row-normalisation) to be summable in both rows and columns. Let us make clear that in our argument we do not assume that the procedure of row-normalisation is applied to the spatial weight matrix, for the reasons presented in Sect. 2.2.
Although the summability condition α > 1 derived from Eq. (1) is not too restrictive, it is also fair to say that such a linear domain is rarely encountered in practice. If we now extend this example to higher dimensions, the restriction on α also changes. To see this, let us assume that the spatial domain is a two-dimensional plane, as it is in the majority of economic applications (see footnote 7). Now, the crucial observation is the following. Let us imagine a circle of radius ρ around a fixed spatial unit. The number of spatial units which are intersected by the circle is roughly proportional (see footnote 8) to its circumference, 2πρ. Similarly, for any given unit j on the plane, and any radius ρ, the number N(j, ρ) of units i for which ρ − 1 < dist(i, j) ≤ ρ is roughly proportional to ρ. Thus, we have the bound in Eq. (2).

Footnote 6: Since I_n + λ_0 G_n = (I_n − λ_0 W_n)^{-1}, the value of (λ_0/n) tr(G_n) may indicate the component of the average direct effect due to the feedback loop in spatial interactions, c.f. Le Sage and Pace (2009).
Footnote 7: To maintain mathematical rigour, we may also have to assume that the distribution of the spatial units is neither pathologically dense nor sparse anywhere in the spatial domain. This is actually the case in all realistic settings. In particular, we might expect that the distances between nearby units are within a fixed range D_1 ≤ dist(i, j) ≤ D_2 or, alternatively, that the units are regions with areas in a given range A_1 ≤ area(i) ≤ A_2. For example, tessellations obtained by generating a Voronoi diagram of a randomly distributed set of points constitute an adequate framework.
Footnote 8: The actual constants would possibly involve D_1, D_2, A_1, A_2; see footnote 7.
Again, the series on the right-hand side of Eq. (2) converges only if α > 2. Thus, in most cases for IDW spatial models, unless the exponent α is well above two, the standard asymptotic analysis fails as the spatial weight matrix is simply not summable. This fact is apparently not widely known, as it is still common to see inverse distance decay parameters, either set a priori or estimated, lie in [0, 2], see Anselin (2002). In particular, this also applies to the case of α = 2, which constitutes a popular econometric analogue of the Newtonian gravity model (see footnote 9). In this planar case, an IDW row or column would be square summable provided that α > 1. Going further, it might be argued that in the less common, but still relevant (see footnote 10), three-dimensional domain the summability restriction becomes α > 3, whereas row and column square-summability requires α > 3/2 (see footnote 11).

Limitations of the summability requirement of the standard asymptotic analysis are not only encountered in the case of increasing domain asymptotics. In fact, the situation is even worse for infill asymptotics since, by definition, new neighbours are allowed to emerge within any radius about a given unit. Then, in an extreme case, even the simple common border spatial weight matrix may not be summable, and establishing the asymptotics of a model based on such an interaction scheme is highly problematic. Summability issues can also be found in interaction models of a non-geographical nature. As an example, consider a social networking model, where the relation of "friendship" in an online social networking service represents contiguity, and the distance is measured in terms of the contiguity path between a given pair of individuals. Then, the average distance between members of an n-element group is expected to grow slowly with n. This results in a behaviour similar to infill asymptotics. In other words, the quantity N(j, ρ) is expected to grow very rapidly (cf. Eq. (2)), similarly to the case of a high-dimensional space.

Footnote 9: It is also common for statistical software to provide the functionality of generating and using such spatial weight matrices without any warning.
Footnote 10: One might consider autoregressive models of natural phenomena in environmental sciences, where an additional dimension may be present, e.g. depth, altitude, etc.
Footnote 11: In general, in an m-dimensional domain N(j, ρ) is proportional to ρ^(m−1); thus the two conditions are α > m and α > m/2, respectively.
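The growth of the row and column sums discussed above can also be checked numerically. The following sketch builds an IDW matrix on a growing regular grid in the plane and prints its maximum row sum, maximum column sum and spectral norm for a chosen exponent; the grid layout and the choice α = 2 are purely illustrative.

import numpy as np
from scipy.spatial.distance import cdist

def idw_matrix(side, alpha):
    pts = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
    d = cdist(pts, pts)
    np.fill_diagonal(d, np.inf)       # no self-interaction: w_ii = 0
    return 1.0 / d ** alpha

for side in (5, 10, 20, 30):
    W = idw_matrix(side, alpha=2.0)   # the "gravity" exponent discussed above
    print(side * side,
          np.linalg.norm(W, np.inf),  # maximum row sum
          np.linalg.norm(W, 1),       # maximum column sum
          np.linalg.norm(W, 2))       # spectral norm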
Applicability of the new asymptotic analysis
This section explains the circumstances under which our new asymptotic analysis applies to a non-summable interaction scheme. As we noted in the previous section, a sharp condition on boundedness requires a nuanced approach. Although it can be shown that a matrix is bounded in spectral norm only if its rows and columns are square summable, those two classes of matrices are not necessarily coincident. Unfortunately, there is no straightforward test which could be applied to the matrix entries to decide whether the spectral norm is bounded. Nevertheless, we formulate some general suggestions on the use of non-summable matrices in model specifications.
For a square-summable matrix to be bounded in spectral norm it is sufficient that one of the two additional conditions is met. The first condition is when the number of rows and columns which are not absolutely summable is finite. This case is relevant to models that distinguish a set of units, called economic "centres of gravity", for which the amount of spatial interaction is possibly non-summable. Similarly, a social networking model might distinguish a set of "leader/influencer" individuals with non-summable impact on other members of a group. In those cases the resulting matrices are bounded in spectral norm.
If the number of non-summable rows and columns is infinite or, in particular, all of them are non-summable, then the matrix still can be bounded in spectral norm. It is the case if those non-summable rows and columns are in a sense asymptotically not strongly correlated or, in other words, the corresponding weightings are not too similar. Unfortunately, this formulation is not at all precise, and thus for such interaction schemes, we suggest applying a rescaling factor as described below. Importantly, this operation preserves the structure of the spatial interdependence expressed by relative magnitudes between all weights in the matrix, e.g. Elhorst (2001) or Vega and Elhorst (2015).
Although rescaling is similar to the familiar procedure of row-normalisation, there are important differences that need to be highlighted. We note that row-normalisation is typically applied for the following three reasons. Firstly, almost by definition, it normalises the amount of spatial interaction received by each of the spatial units. This is believed to facilitate the interpretation of the autoregressive parameter. Secondly, together with non-negativity and a zero diagonal, the Gershgorin theorem 12 implies that the space for the autoregressive parameter can contain any compact 13 subset of (−1, 1) . Lastly, the procedure trivially assures summability of rows, which is a part of the boundedness assumption of the standard analysis.
Although normalisation of rows is indeed beneficial for interpretation in a variety of contexts, e.g. common border or k-nearest neighbours schemes, it may also be harmful in certain interaction patterns. In particular, as emphasised in (Vega and Elhorst, 2015, pp. 355) after (Anselin, 1988, pp. 23-24), and Kelejian and Prucha (2010), "row-normalising a weights matrix based on inverse distance causes its economic interpretation in terms of distance decay to no longer be valid". Secondly, as the maximal modulus of an eigenvalue does not exceed the spectral norm, a matrix rescaled by ‖W n ‖ −1 allows any value in (−1, 1) in its parameter space. Lastly, row-normalisation does not in general assure column summability required by the standard analysis, whereas rescaling by ‖W n ‖ −1 produces a matrix with unit spectral norm. 12 The Gershgorin circle theorem states that for any matrix A = [a ij ] i,j≤n and any of its eigenvalues v there is an index i ≤ n such that |v − a ii | ≤ ∑ j≠i |a ij | . 13 It is prudent to recall that the parameter space for ML-type estimation should be compact.
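The difference between the two operations can be illustrated numerically as below (a hypothetical one-dimensional IDW scheme with α = 1 is used purely for illustration). Row-normalisation forces unit row sums but does not control the spectral norm, whereas division by the spectral norm preserves all relative weights and yields unit spectral norm by construction.

```python
import numpy as np

n = 200
idx = np.arange(n)
diff = np.abs(idx[:, None] - idx[None, :]).astype(float)
np.fill_diagonal(diff, np.inf)          # keeps the diagonal of the weight matrix at zero
W = 1.0 / diff                          # non-summable IDW pattern with alpha = 1

# Row-normalisation: every row sums to one, but the relative magnitudes of the
# weights across rows are distorted and the spectral norm is not fixed by construction.
W_row = W / W.sum(axis=1, keepdims=True)

# Rescaling by the spectral norm: all relative weights are preserved and the
# rescaled matrix has unit spectral norm by construction.
spec = np.linalg.norm(W, ord=2)
W_res = W / spec

print("spectral norm after row-normalisation:", np.linalg.norm(W_row, ord=2))
print("spectral norm after rescaling:        ", np.linalg.norm(W_res, ord=2))
```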
The merits of the rescaling procedure have long been recognised and applied in practice. For example Elhorst (2001) and Vega and Elhorst (2015) rescale weight matrices by their largest characteristic roots. Notice that in the case of a symmetric weight matrix the root is equal to the spectral norm. However, it is not true in general, not even for arbitrary symmetric matrices, that such rescaling assures row and column summability. This makes our asymptotic theory necessary to justify inferences from such a model.
Although the described procedure may be used to rescale an arbitrary matrix, this does not imply that estimation is possible with any spatial weight matrix. A cautionary example has been given in Lee (2004) of a matrix W n = (n−1) −1 (l n l n T − I n ) , where l n = (1, … , 1) T is an n × 1 vector of ones and I n is the identity matrix. In this case, the ML estimator may be inconsistent, even though W n is absolutely summable in both rows and columns. This is explained by Mynbaev and Ullah (2008), who analyse a class of weight matrices, of which W n is a member. For matrices in the class, the identification condition fails. It is not clear to us whether the OLS estimator would be consistent in this case, nor do the results of Lee (2002) seem to provide the answer. A related matrix W n,m = I m ⊗ W n has also been considered in, e.g. Case (2015a, b), Lee (2004), and Lee (2007b). However, in the case of W n,m , if m grows at a sufficient rate, then favourable asymptotic properties can be assured. Mynbaev and Ullah (2008), and Mynbaev and Ullah (2010) study a class of weight matrices which approximate a kernel of an integral operator on the space of square-integrable functions. These may be related to the class of square-summable matrices. However, their assumptions, in particular the absolute summability of operator eigenvalues, preclude the use of ML estimation. In particular, such weight matrices contain an insufficient amount of information for identification of the autoregressive parameter. Instead, the asymptotic behaviour of the OLS estimator is investigated.
Applicability in theoretical arguments
In this section we argue that the consideration of a wider class of spatial weight matrices can also be beneficial in theoretical arguments when developing new model specifications. Let us start with a rather simplified description of one possible application. Let T n be an operator on ℝ n and let us assume that it is invertible (although in practice this is often not the case). Given a specification of a spatial model, for example the SAR specification Y n = W n Y n + X n + n , we might be interested in its transformed form T n Y n = (T n W n T −1 n )T n Y n + T n X n + T n n . Then it is natural to ask what transformations T n are known to preserve the required properties of the spatial weight matrix, so that the asymptotics of the transformed model could follow easily from the properties of the original specification. In the case of the standard asymptotic analysis of Lee (2004), this means that we want the matrix T n W n T −1 n to be row and column summable whenever W n is. Practically, this limits the class of possible matrices T n to those which are themselves summable, i.e. whose entries we can and must finely control. In that respect, the standard analysis collapses, even in the simple case of T n being an isometry, i.e. an orthogonal matrix.
QML estimation with non-summable weight matrices Our refined asymptotic analysis, on the other hand, calls for the operator norm of T n to be bounded. In many cases this is easily satisfied as T n is often a projection, and thus an operator of norm one, for which we have ‖T n W n T + n ‖ ≤ ‖W n ‖ . We note that, although projections are generally not invertible, in practice we may still be able to exploit the fact that the Moore-Penrose inverse T + n is an isometry on the range of T n . A more concrete example can be derived from the work of Lee and Yu (2010a, b), where the incidental parameter problem is addressed in the context of a spatial autoregressive panel model and fixed spatial and temporal effects. In this case, the natural candidate for T n is the demeaning operator, i.e. the projection on the space of zero-mean vectors within units and time periods. A similar idea is applied in Lee et al. (2010) for group effects 14 in social interaction models. With this technique, the expected multiplicative bias correction is derived for consistent estimates. Those results strongly rely on the summability of the demeaning operator T n . However, incorporation of arbitrary group effects, where groups are not necessarily disjoint and the number of groups may increase with n, leads to a demeaning operator which does not have to be summable. We show in Sect. 4 that such generalisation is possible with the refined asymptotic analysis.
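A minimal numerical check of this observation is sketched below (our own example; the weight matrix is arbitrary and the demeaning operator is built from randomly assigned group dummies). Since the demeaning operator is an orthogonal projection, and hence of operator norm one, the transformed matrix cannot exceed the original one in spectral norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, groups = 120, 12
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)

# Group dummies and the demeaning operator: the projection onto the orthogonal
# complement of the column space of Phi (a symmetric idempotent matrix of norm one).
labels = rng.integers(0, groups, size=n)
Phi = (labels[:, None] == np.arange(groups)[None, :]).astype(float)
T = np.eye(n) - Phi @ np.linalg.pinv(Phi)

# For a projection, the Moore-Penrose inverse T^+ equals T itself,
# so the transformed matrix cannot have a larger spectral norm than W.
transformed = T @ W @ np.linalg.pinv(T)
print(np.linalg.norm(transformed, 2) <= np.linalg.norm(W, 2) + 1e-9)   # True
```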
On the distribution of innovations
A significant merit of the original paper of Lee (2004) is the consideration of the QML estimation scheme rather than the classical maximum likelihood estimator with the assumption of Gaussian errors. That is to say, the error term in the spatial autoregressive model is allowed to be an arbitrary random vector of independent and identically distributed components, as long as the shared distribution allows moments of order higher than four. This seemingly technical improvement has substantially changed how the validity of inference may be perceived, as it no longer has to rely on a belief in the joint normality of errors.
Similarly, with heterogeneous processes governing the underlying phenomena across spatial units, the expectation of identical distribution of disturbances does not appear to be well-founded. In our analysis, the components of the error term are, for purely technical reasons, assumed to be homogeneous in terms of variance. However, they are not required to follow the same probability law.
Lastly, we have found that the universally employed requirement that the distribution of the residuals should have finite moments higher than four is not necessary. In fact, integrability with the fourth power is sufficient to obtain the Gaussian asymptotics for QML estimates. This result opens the possibility of considering heavier tailed probability laws. For example, one might consider distribution functions with tails of order x −5 log −2 x , as x grows to infinity.
Ultimate optimality
It can be argued that our boundedness condition with the spectral norm (Assumption 4) is already optimal. That is to say, the class of spatial weight matrices considered, in general, cannot be broadened. Indeed, if we consider a matrix W n for which lim n→∞ ‖W n ‖ = ∞ , invalidating our assumptions, then we arrive at the rather uninteresting case of the parameter space Λ not containing any positive elements. To see this, consider the example of a symmetric spatial weight matrix W n with non-negative entries. Then, ‖W n ‖ is the maximum eigenvalue and, by a well-known argument, any interval [0, t) ⊂ Λ satisfies t ≤ 1∕‖W n ‖ → 0. Obtaining a Gaussian asymptotic distribution for the estimates requires the use of a CLT. Specifically, it is applied for a random variable Q n which is a linear-quadratic form of the residual. If the fourth moments of ε n were not finite, then the normalising factor for Q n , namely √ Var Q n , would also be infinite. This seriously compromises any effort to derive the limiting distribution. As a result, we believe that any substantial generalisation is unlikely.
Revisiting asymptotic analysis of high-order SAR models
Let W n,1 , … , W n,d be arbitrary n × n matrices. 15 We consider the high-order SAR model described by the following equation

Y n = ∑ r≤d λ r W n,r Y n + X n β + ε n , (3)

where λ = (λ r ) r=1,…,d ∈ Λ ⊂ ℝ d and β ∈ ℝ k . Furthermore, Y n is a vector of n observations on the dependent variable, X n is the matrix of k non-stochastic explanatory variables and ε n is the error term, for which the assumptions given below hold.
Assumption 1
The matrix X n T X n is invertible, and both 1 n X n T X n and ( 1 n X n T X n ) −1 are bounded in spectral norm, uniformly in n.

Let us note that Assumption 1, used for the consistency argument, does not require the sequence 1 n X n T X n to be convergent. Instead, our reasoning stipulates that this sequence is merely bounded. 16 The necessity of non-singularity of X n T X n is straightforward, as regressors should not be linearly dependent for the slope parameter to be identifiable. Note that our assumption is not far from the well-known condition necessary for consistency of the OLS estimator for non-spatial regression, i.e. ‖(X n T X n ) −1 ‖ = o(1). 17

Assumption 2 One of the following holds 18 (a) ε n = (ε n,i ) i≤n is a vector of independent random variables with zero mean, variance σ 2 and uniformly bounded fourth moments, (b) ε n is of the form ε n = F ε̄ m , where F is an n × m matrix with orthogonal rows of norm one; the underlying ε̄ m is a vector of independent random variables with zero mean, variance σ 2 and uniformly bounded fourth moments.
Assumption 3 For every λ in the respective parameter space Λ ⊂ ℝ d the matrix Δ n (λ) = I n − ∑ r≤d λ r W n,r is invertible.
We investigate the asymptotic behaviour of the widely applied Gaussian QML estimator, which maximises the log-likelihood of the observed sample as if the model innovations were Gaussian, namely the function

ln L n (θ) = −(n∕2) ln(2π σ 2 ) + ln|det Δ n (λ)| − (1∕(2σ 2 )) ‖Δ n (λ)Y n − X n β‖ 2 , (4)

where θ = (λ T , β T , σ 2 ) T is the model parameter. It turns out that, under certain regularity conditions, this estimation scheme is consistent, even if the model residuals do not follow the normal distribution (c.f. Assumption 2). The estimator will be denoted (λ̂ n T , β̂ n T , σ̂ 2 n ) T or θ̂ n . Our result on the asymptotic behaviour of θ̂ n requires the following boundedness assumption, which gives the essential condition imposed on a spatial weight matrix.
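For concreteness, a minimal sketch of this objective function is given below (our own implementation, written directly from the form of Eq. (4); it is not the authors' code, and the layout of the parameter vector θ is an arbitrary choice).

```python
import numpy as np

def gaussian_sar_loglik(theta, Y, X, W_list):
    """Gaussian log-likelihood of a higher-order SAR sample.

    theta = (lambda_1, ..., lambda_d, beta_1, ..., beta_k, sigma2);
    W_list holds the d spatial weight matrices.
    """
    n, k = X.shape
    d = len(W_list)
    lam, beta, sigma2 = theta[:d], theta[d:d + k], theta[-1]
    Delta = np.eye(n) - sum(l * W for l, W in zip(lam, W_list))
    resid = Delta @ Y - X @ beta
    _, logdet = np.linalg.slogdet(Delta)      # log of the absolute determinant
    return (-0.5 * n * np.log(2.0 * np.pi * sigma2)
            + logdet
            - 0.5 * resid @ resid / sigma2)
```

In practice β and σ 2 are typically concentrated out analytically, as in the proof of Theorem 1, and the resulting function is maximised over λ alone; the full form above is only the natural starting point.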
Assumption 4
The set Λ is compact in ℝ d . There exists a universal constant K Λ such that for all n ∈ ℕ , λ ∈ Λ , r = 1, … , d the matrix norms ‖W n,r ‖ and ‖Δ n (λ) −1 ‖ do not exceed K Λ .
Notice that any matrix with absolutely summable rows and columns is also bounded in the spectral norm. 19 That is to say, the asymptotic theory presented in this paper is indeed a generalisation of the theory initiated in Lee (2004). Moreover, it is also a proper generalisation, as there are non-summable interaction schemes bounded in the spectral norm. Remark 1 in the appendix describes an example of such a weight matrix, which is additionally row-standardised. This also shows that row-normalisation, in general, does not ensure absolute summability of columns.

18 In particular, elements of the error term do not need to be identically distributed. Also notice that trivially (a) implies (b) for F = I n , m = n . Although condition (a) is simpler and sufficient in a standard setting, the relaxed condition (b) will be crucial in the argument of Sect. 4, when an independent vector of residuals will be transformed by such a matrix F . Note that (b) implies E ε n = 0 and E ε n ε n T = σ 2 I n , thus its components may be merely uncorrelated. We emphasise the distinction as the innovations are not assumed to be Gaussian. 19 This follows from the fact that ‖A‖ 2 ≤ ‖A‖ 1 ‖A‖ ∞ , for an arbitrary matrix A.
The following identification assumption guarantees that the Gaussian QML estimator is able to asymptotically identify the true value of the spatial autoregressive parameter.
Assumption 5 For every λ 1 , λ 2 ∈ Λ , such that λ 1 ≠ λ 2 , at least one of the statements (a) or (b) is satisfied, with M X n = I n − X n (X n T X n ) −1 X n T . Assumption 5 is typically called the identification condition. It ensures that there is enough information in the observed process to decrease uncertainty of λ̂ n , with increasing n. The distinction between the statements (a) and (b) reflects the fact that this information can come from either the spatial autocorrelation of Y n or via the accumulated spatial lag of regressors.
Theorem 1 Under Assumptions 1-5 the Gaussian QML estimator θ̂ n is consistent.
In order to establish the asymptotic distribution of √ n (θ̂ n − θ 0 ) we need to adopt a number of additional assumptions. Let Ξ ⊂ ∏ ∞ n=1 ℝ n be the linear space 20 of all sequences (x n ) n∈ℕ , with x n = (x n,i ) i≤n ∈ ℝ n , n ∈ ℕ , for which max i≤n x 2 n,i = o(n) . Additionally, let us set G n,r = W n,r Δ n (λ 0 ) −1 for r ≤ d.
Assumption 1'
The earlier Assumption 1 on X n is satisfied. Moreover, each column of the matrices X n and G n,r X n β 0 , r ≤ d , is a member of the linear space Ξ.
The above is a technical assumption necessary for obtaining asymptotic normality of the deviation √ n (θ̂ n − θ 0 ) . Intuitively, the limiting distribution can be normal regardless of the original distribution of ε n only when none of the observations within the regressor matrices makes an overwhelming contribution to the estimate of the corresponding slope coefficient. Let us emphasise that this assumption is also necessary in the simple case of non-spatial least squares regression. Although implicitly, it is also present in the standard analysis as a consequence of other assumptions adopted therein, in particular, the boundedness of elements of X n . 21

20 Naturally, the set ∏ ∞ n=1 ℝ n = ℝ × ℝ 2 × … is a vector space when endowed with element-wise addition. Then, Ξ is its linear subspace. 21 Under the assumption of boundedness of elements of X n , as in Lee (2004) and Lee and Yu (2010a), Assumption 1' is implied by the relation ‖G n,r ‖ 1 = o(n) or, if W n,r is row-normalised, ‖Δ n (λ 0 ) −1 ‖ 1 = o(n).
Assumption 2' The error term satisfies (a) in Assumption 2. Moreover, the family of random variables ε 4 n,i , n ∈ ℕ , i ≤ n , is uniformly integrable.
Derivation of the asymptotic distribution requires strengthening of the Assumption 2. However, in Assumption 2' the elements of the error term still do not need to follow the same distribution. Instead, we impose the requirement that those distributions have uniformly integrable tails in terms of the fourth moment.
Assumption 6
The true value of the parameter λ lies in the interior of the space Λ , that is λ 0 ∈ Int Λ.
Assumption 7 spells out the necessary conditions for the existence of the limiting distribution variance. Note that for consistency of θ̂ n the sequences (ℑ n ) n∈ℕ , (Σ S,n ) n∈ℕ do not need to converge. A limiting distribution theorem could also be easily obtained under the mere assumption that the norms ‖ℑ n ‖ , ‖ℑ n −1 ‖ , ‖Σ S,n ‖ exist and are uniformly bounded for sufficiently large n. However, its statement would be expressed in terms of a transformation of √ n (θ̂ n − θ 0 ) , rather than the deviation itself, as is the case in, e.g. Gupta and Robinson (2018). The requirement of invertibility of the matrix ℑ could also be relaxed. However, using the present argument, it would only be possible to obtain partial results. An approach to the problem of the singularity of ℑ which considers various convergence rates has been described in Lee (2004). Finally, we obtain the following generalisation of Theorem 3.2 in Lee (2004).
Application to a higher-order general group effects model
This section provides an illustration of an application of our refined asymptotic analysis to a theoretical argument. We describe a new group effect elimination scheme for the high-order SAR model, and using arguments of Sect. 3, we derive the asymptotics of the corresponding QML estimator. In the simple case of a constant number of group-specific effect dummy variables a consistent, asymptotically normal QML estimator can be obtained quite straightforwardly. In general, however, a more careful approach is necessary. Firstly, the incidental parameter problem must be accounted for to assure consistency of estimates. Secondly, a certain degree of compatibility with the spatial weight matrix has to be assured.
Let us consider a modified version of the SAR model specification (c.f. Eq. (3)) in which an additional term associated with group-specific effects is introduced. The model specification then becomes

Y n = ∑ r≤d λ r W n,r Y n + X n β + Φ n γ + ε n , (5)

where γ is the vector of group-specific effects, whose dimension may increase with n, and the columns of the corresponding matrix Φ n are typically dummy variables distinguishing non-overlapping groups of observations. 23 Importantly, we note that, as the number of columns in Φ n may change with sample size, the theorems of Sect. 3 cannot be applied directly.
In applied spatial econometrics it is common to eliminate fixed effects by means of the demeaning procedure, see e.g. Elhorst (2014), which can be understood as a simple projection on the space orthogonal to the columns of Φ n . This is therefore closely related to the well-known Frisch-Waugh theorem, see Baltagi (2005). However, with an increasing number of groups, we must be concerned about singularity of the resulting variance, as expressed in e.g. Anselin et al. (2008) and the incidental parameter problem that can arise in such a setting. An effective method of dealing with those issues is developed in Lee and Yu (2010a), where a projection onto a lower dimensional space is applied to properly derive the asymptotic distribution of the resulting QML estimator. The technique presented in this paper extends this idea. Our approach consists in handling the group effect term together with its spatial lags, that is W n,r Φ n , W 2 n,r Φ n , … . At the same time, the transformed model is projected onto a lower dimensional space, following the idea of Lee and Yu (2010a).
Let K n ⊂ ℝ n be the linear subspace generated by iterating W n,r on the columns of the matrix Φ n . In other words, K n is the smallest W n,r -invariant subspace containing the columns of Φ n . Indeed, any spatial lag of Φ n (member of K n ), when multiplied by W n,r , is yet another spatial lag, thus it is a member of K n . The same is true for all linear combinations of such spatial lags. Our idea is to filter out those vector components of both Y n and X n which lie in K n . Under the assumption that the orthogonal complement K ⟂ n is sufficiently rich, we can obtain a consistent QML estimator of θ = (λ T , β T , σ 2 ) T . Let n * = n − dim K n and fix an n * × n matrix F whose rows form an orthonormal basis of K ⟂ n . It is easy to observe that M K = F T F is the projection onto K ⟂ n and FF T = I n * .
23 Groups are understood as subsets of observations within the sample to which the specific effect is attributed. As an example, this contains the individual fixed effects model as a special case. That is, for balanced panel data, when a longitudinal sample of size n is indexed in a way that distinguishes N spatial units and T time periods, n = NT , groups contain observations relevant to respective spatial units. Similarly, when each group contains observations ascribed to the respective time periods, we arrive at the time-specific fixed effects specification. Let us also note that, formally, columns of Φ n are allowed to be arbitrary vectors, as long as the relevant assumptions of this section hold.
Denote Y * n = FY n , X * n = FX n , ε * n = Fε n and W * n,r = FW n,r F T . As I n − F T F projects onto K n we have FW n,r (I n − F T F) = 0 . Thus, transforming the specification of Eq. (5) with F , we obtain

Y * n = ∑ r≤d λ r W * n,r Y * n + X * n β + ε * n , (6)

since FW n,r = FW n,r F T F and, by definition, FΦ n = 0.
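The construction of the filtering matrix F can be sketched as follows (an illustrative implementation based on our reading of the procedure; the iteration used to build the invariant subspace and the SVD-based orthonormalisation are our own choices).

```python
import numpy as np

def orthonormal_span(M, tol=1e-10):
    """Orthonormal basis of the column space of M."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def elimination_matrix(W_list, Phi, tol=1e-10):
    """Rows of F form an orthonormal basis of the orthogonal complement of K_n,
    the smallest subspace containing the columns of Phi and invariant under
    every matrix in W_list."""
    basis = orthonormal_span(Phi, tol)
    while True:
        lagged = np.hstack([W @ basis for W in W_list])
        enlarged = orthonormal_span(np.hstack([basis, lagged]), tol)
        if enlarged.shape[1] == basis.shape[1]:   # K_n is now invariant under every W
            break
        basis = enlarged
    # Complete the basis of K_n to an orthonormal basis of R^n; the remaining
    # columns span its orthogonal complement.
    full_basis, _, _ = np.linalg.svd(basis, full_matrices=True)
    return full_basis[:, basis.shape[1]:].T       # F, with F F^T = I and F Phi = 0
```

Given such an F, the quantities Y * n , X * n and W * n,r defined above can be formed directly, and the estimation machinery of Sect. 3 applies to the transformed sample.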
Let us observe that ε * n satisfies Assumption 2 (b) if the original ε n satisfies Assumption 2 (a) or (b). The crucial observation, however, is that Assumptions 3 and 4 are satisfied when W * n,r is substituted for W n,r . Indeed, with Δ * n (λ) = I n * − ∑ r≤d λ r W * n,r = F(I n − ∑ r≤d λ r W n,r )F T and Δ * n (λ) −1 = F(I n − ∑ r≤d λ r W n,r ) −1 F T it is sufficient to note that ‖W * n,r ‖ ≤ ‖W n,r ‖ and ‖Δ * n (λ) −1 ‖ ≤ ‖Δ n (λ) −1 ‖ , as ‖F‖ = 1 , whenever n * > 0.
Assumption A The earlier Assumptions 1 and 5 hold for the transformed specification of Eq. (6), that is with X * n , W * n , n * substituted in place of X n , W n , n, respectively.
The following result is an immediate consequence of Theorem 1.
Let us note that the mere application of Theorem 2 is not sufficient to establish asymptotic normality of √ n * (θ̂ * n − θ 0 ) . The main difficulty is that the components of Fε n do not have to be independent, even if those of the original ε n are. 24 However, with our asymptotic analysis, a valid argument is still possible.
Assumption A'
The earlier Assumption A holds and Assumption 1' is satisfied for F T X * n , F T G * n,r X * n β 0 and n * . Moreover, Assumption 7 holds with Y * n , X * n , n * , ℑ * n etc. in the capacity of Y n , X n , n , ℑ n .
To capture the relationship between our group effect elimination technique and the classical demeaning operator, let us observe that, if the specification of Eq. (6) is further transformed with F T , then we arrive at the proper Gaussian log-likelihood of θ , given Y † n = F T FY n = M K Y n and X † n = M K X n . Indeed, we obtain the log-likelihood of Eq. (7), in which Δ † n (λ) = M K Δ n (λ)M K and pdet denotes the pseudo-determinant, i.e. the product of all non-zero singular values.
One advantage of the log-likelihood in Eq. (7) is that it does not depend on a particular choice of the matrix F . Moreover, given det Δ n (λ) the pseudo-determinant might be computed using the determinant formula for block matrices. If E is a matrix such that (F T , E T ) is an orthogonal matrix, then pdet Δ † n (λ) can be related to det Δ n (λ) and det EΔ n (λ)E T . For some specifications of the group or fixed effects, the determinant of EΔ n (λ)E T can be found analytically. For example, in a panel setting, with n = mt , time-invariant matrices W n,r = I t ⊗ W m,r and a usual matrix Φ n of spatial unit dummy variables, it can be shown that det EΔ n (λ)E T equals det ( I m − ∑ r≤d λ r W m,r ) . Analogous closed-form expressions can be derived when the matrices W m,r are additionally row-normalised and Φ n incorporates both temporal and spatial fixed effects, as well as in the case of the temporal fixed effects specification.
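A direct way to evaluate the pseudo-determinant appearing in Eq. (7), without fixing any particular F or E, is to take the product of the non-zero singular values of the projected matrix, as in the short sketch below (the tolerance used to separate zero from non-zero singular values is our own practical choice).

```python
import numpy as np

def pdet(A, tol=1e-10):
    """Pseudo-determinant: product of all singular values of A larger than tol."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.prod(s[s > tol])

# Intended use: pdet(M_K @ Delta_n_lambda @ M_K), with M_K = F.T @ F the projection
# onto the orthogonal complement of the group-effect subspace K_n.
```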
Closing remarks
In this paper we have revisited the analysis of the asymptotic behaviour of the well-known Gaussian QML estimator for higher-order SAR models. Our findings indicate that the standard assumptions on row and column summability of the spatial weight matrix can be weakened to cover econometric models with a greater degree of spatial dependence. Additionally, it is possible to apply a broader class of model transformations in theoretical arguments, without violating the essential boundedness requirement. Secondly, weaker conditions on the existence of moments of the error term can be imposed and its elements do not need to be identically distributed as long as their kurtosis is uniformly bounded. We expect that our results can be used to reconsider the asymptotic behaviour of QML estimation in more general specifications. Moreover, large sample theories for other estimators, in particular, the Generalised Method of Moments or Two-Stage Least Squares, can benefit from reapplication of our arguments, especially our Theorem 5 in "Appendix C". We should also mention that we have made the effort to avoid certain mathematical imprecision that can be found in the arguments of the standard analysis. For example, we properly derive the asymptotic distribution based on the Cramér–Wold theorem. Moreover, our proofs rely neither on the existence of the Lagrange remainder in the Taylor expansion nor on the mean value theorem. 25

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Appendix A: Monte Carlo experiments
We have conducted computer simulations to show that, under the relaxed boundedness condition, the asymptotic theory is valid. We considered four different spatial interaction schemes in a linear setting. The first matrix considered, W 1 n , is a common nearest neighbour matrix, with one distinguished central unit, whose interaction with other units is defined by the IDW scheme with the power parameter α = 2 . This is a summable matrix and the results of Monte Carlo experiments may serve as a point of reference for non-summable settings.
The matrix W 2 n is analogous to the matrix W 1 n , with the crucial difference that the power parameter is α = 1 . This leads to an interaction scheme which is no longer summable. The third matrix, W 3 n , is yet another variation on the same idea. However, instead of using the IDW scheme, the non-summable interaction pattern is now uniform, with all its weights equal to 1∕ √ n . It is the largest possible square-summable, uniform pattern, with respect to the size of the weights. Lastly, the matrix W 4 n is obtained from the symmetric non-summable IDW matrix, all elements of which are proportional to 1∕|i−j| , with i, j being its indices (here we have α = 1 ). This matrix has been rescaled by its norm.
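The last design can be reconstructed along the following lines (our own rendering of the description; the first three matrices, whose exact neighbour structure is not fully spelled out here, are omitted).

```python
import numpy as np

def W4(n):
    """Symmetric IDW pattern with weights proportional to 1/|i-j| (alpha = 1),
    rescaled by its spectral norm so that the result has unit spectral norm."""
    idx = np.arange(n)
    diff = np.abs(idx[:, None] - idx[None, :]).astype(float)
    np.fill_diagonal(diff, np.inf)      # zero diagonal after inversion
    W = 1.0 / diff
    return W / np.linalg.norm(W, ord=2)
```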
The Monte Carlo simulations for all of the matrices W i n , i = 1, 2, 3, 4 , were conducted with a mixed regressive autoregressive specification, see Eq. (3), with a single autoregressive parameter λ 0 = 0.3 . The regressor matrix X n contained a constant term c 0 = 2 and one regressor, uniformly distributed in an interval symmetric around zero, with the corresponding slope β 0 = 3 . In all simulated models the innovations were Gaussian with variance σ 2 0 = 1. A constant number of Monte Carlo samples, m = 5000 , was used for all trials. Tables 1, 2, 3, 4 show the expectation estimate, the standard error for the estimator, and the Kolmogorov-Smirnov assessment of normality of the individual components of the scaled difference √ n (θ̂ n − θ 0 ) . For all matrices a clear tendency can be seen for the values of the bias and its standard deviation to diminish with increasing n. Moreover, in most cases the quotient of the absolute bias divided by the standard deviation does not exceed 0.1256, which implies that true values of parameters lie within 0.05 one-sided confidence intervals for the centre of the distribution of estimates. We have also examined the differences between the theoretical variance of the estimator (implied by the matrix ℑ −1 ) and the values derived from the samples. The relative differences ranged from zero to four percent, with an average of roughly two percent, which is consistent with the relative standard deviation √ 2∕m of the χ 2 (m) distribution.
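A single replication of this design can be generated as follows (a sketch under the stated parameter values, reusing the W4 function from the previous snippet; the exact regressor interval is our own choice, and the estimation step, which maximises the concentrated log-likelihood over λ, is omitted).

```python
import numpy as np

rng = np.random.default_rng(123)
n = 400
lam0, c0, beta0, sigma0 = 0.3, 2.0, 3.0, 1.0

W = W4(n)                                    # weight matrix from the previous sketch
x = rng.uniform(-2.0, 2.0, size=n)           # one regressor, symmetric around zero
X = np.column_stack([np.ones(n), x])
eps = rng.normal(0.0, sigma0, size=n)

# Y_n = (I_n - lambda0 W_n)^{-1} (X_n beta + eps_n); invertibility holds since
# |lambda0| times the (unit) spectral norm of W is below one.
Y = np.linalg.solve(np.eye(n) - lam0 * W, X @ np.array([c0, beta0]) + eps)
```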
Appendix B: Additional facts
Remark 1 There is a row-normalised matrix W n which is bounded in spectral norm, yet whose maximal absolute column sum, ‖W n ‖ 1 , is unbounded.
Proof Set D n = [d ij ] i,j≤n with non-zero entries d 1,j = 1∕j , d i,1 = 1∕i and d i,i+1 = d j+1,j = 1 if i, j > 1 , and let W n be the row-normalisation of D n . Proof of this non-trivial fact can be found in the supplementary material.
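The claim can be checked numerically as below (our own check, written on the assumption that W n is the row-normalisation of D n , as the statement of the remark suggests; the sample sizes are arbitrary).

```python
import numpy as np

def remark_matrix(n):
    """Row-normalisation of the matrix D_n defined in Remark 1 (1-based indexing in the text)."""
    D = np.zeros((n, n))
    j = np.arange(1, n)
    D[0, 1:] = 1.0 / (j + 1)          # d_{1,j} = 1/j
    D[1:, 0] = 1.0 / (j + 1)          # d_{i,1} = 1/i
    for i in range(1, n - 1):
        D[i, i + 1] = 1.0             # d_{i,i+1} = 1 for i > 1
        D[i + 1, i] = 1.0             # d_{j+1,j} = 1 for j > 1
    return D / D.sum(axis=1, keepdims=True)

for n in (100, 400, 1600):
    W = remark_matrix(n)
    print(n, round(np.linalg.norm(W, 2), 3), round(np.abs(W).sum(axis=0).max(), 3))
# The spectral norm stays bounded while the maximal column sum keeps growing
# (roughly like log n), in agreement with the remark.
```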
Lemma 1 Let U ⊂ ℝ d be an open set and
Lemma 2 Let ε n = (ε n,i ) i≤n be an n × 1 random vector satisfying Assumption 2, and let (A n ) n∈ℕ and (P n ) n∈ℕ be sequences of n × n matrices satisfying sup n∈ℕ ‖A n ‖ < ∞ , sup n∈ℕ ‖P n ‖ ≤ 1 and P n = P n T P n . If x n ∈ ℝ n , for n ∈ ℕ , is a non-random vector satisfying ‖x n ‖ 2 = O(n) , then (a) for Z a n = 1 n x n T A n ε n we have a bound on Var Z a n ; the remaining statements and the proof of Lemma 2 are given in the supplementary material.
Appendix C: Theorems
Proof of Theorem 1 Let θ 0 = (λ 0 T , β 0 T , σ 2 0 ) T be the true value of the parameter θ . Let S n (λ) = Δ n (λ)Δ n (λ 0 ) −1 , λ ∈ Λ . It is a standard approach to use the first-order optimality conditions for β and σ 2
to obtain the concentrated log-likelihood ln L c n (λ) = −(n∕2) ln(2π σ̂ 2 n (λ)) − n∕2 + ln|det Δ n (λ)| , which is maximised by λ̂ n . Let us set R n (λ) = 1 n ln L c n (λ) and note that each R n is a random function.
The proof proceeds as follows. First, we obtain consistency of λ̂ n by the generic argument presented in Lemma 3.1 of Pötscher and Prucha (1997). Thus, we introduce a new, non-random function R̄ n which uniformly approximates R n for large n. 27 Then we show that λ 0 is an identifiably unique maximiser of each R̄ n , as in Definition 3.1 of Pötscher and Prucha (1997) and, as a result, we obtain consistency of λ̂ n . Finally, we deduce consistency of β̂ n and σ̂ 2 n .
27 Note that the value of R̄ n represents a log-root-mean-square of the n-th root of L c n (λ) , with L c n (λ) being the concentrated likelihood of λ . Although simply choosing R ′ n = E R n instead of the current R̄ n = 1 2 ln E e 2R n might seem more natural, the use of R̄ n results in simpler computations and the difference between R ′ n and R̄ n diminishes as the randomness of R n decreases with n → ∞.
Similar arguments imply that plim n→∞ β̂ n = β 0 . Finally, as σ 2 0 = plim n→∞ 1 n ‖ε n ‖ 2 , by statement (b) of Lemma 2, we also have plim n→∞ σ̂ 2 n = σ 2 0 . ◻

Theorem 5 Let (ε n ) n∈ℕ satisfy Assumption 2' and let x n = (x n,i ) i≤n , n ∈ ℕ , be column vectors. Denote Q n = x n T ε n + ε n T A n ε n and assume that Var Q n > 0 for sufficiently large n ∈ ℕ . If ‖x n ‖ 2 + ‖A n ‖ 2 F = O(Var Q n ) , ‖A n ‖ 2 = o(Var Q n ) and max i≤n x 2 n,i = o(Var Q n ) , then (Q n − E Q n )∕ √ Var Q n converges in distribution to a standardised normal variable N(0, 1).
Proof of Theorem 5 is given in the supplementary material. The argument is based on bounds originally developed in Bhansali et al. (2007), where a CLT for quadratic forms of i.i.d. vectors is shown.
Corollary 1 Let (ε n ) n∈ℕ satisfy Assumption 2' and let x n = (x n,i ) i≤n , n ∈ ℕ , be column vectors. Denote Q n = x n T ε n + ε n T A n ε n . If lim n→∞ Var Q n exists and is positive, ‖x n ‖ 2 + ‖A n ‖ 2 F = O(1) and ‖A n ‖ 2 + max i≤n x 2 n,i = o(1) , then (Q n − E Q n )∕ √ Var Q n converges in distribution to a standard normal variable N(0, 1).
Proof of Theorem 2
With S n defined in Assumption 7 and G n,r = W n,r Δ n (λ 0 ) −1 , a straightforward calculation 29 shows that the consecutive entries of 1 √ n S n are linear-quadratic forms of the residual ε n . We will show that 1 √ n S n converges in distribution to N(0, Σ S ) . 30 Our argument follows by considering two cases and makes use of the standard Cramér–Wold theorem (see e.g. Billingsley (1995)).

30 Naturally, it is not sufficient to establish asymptotic normality of the above formulae, c.f. Lee (2004).
Proof of Theorem 4
The proof relies on the same argument as the proof of Theorem 2, up to the point where our CLT is used to deduce asymptotic normality of the linear-quadratic form 1 √ n * η T S * , for a fixed coefficient vector η as in the Cramér–Wold step of that proof. Then it can be seen that, for arbitrary x n and A n we have x n T ε * n + (ε * n ) T A n ε * n = x n T Fε n + ε n T F T A n Fε n , hence a linear-quadratic form of ε * n is, in fact, a linear-quadratic form of ε n . Finally, it is sufficient to note that, in the case of 1 √ n * η T S * , by Assumptions A' and 4 the norm bounds required by Corollary 1 hold, and lim n→∞ Var ( 1 √ n * η T S * ) = η T Σ S * η . Thus, Corollary 1 can be used to deduce that 1 √ n * η T S * converges in distribution to N(0, η T Σ S * η) . Again, the remainder of the proof proceeds analogously to the proof of Theorem 2. ◻
Low Energy Pion-Hyperon Interaction
We study the low energy pion-hyperon interaction considering effective non-linear chiral invariant Lagrangians including pions, rho mesons, hyperons and the corresponding resonances. We then calculate the S- and P-wave phase-shifts, total cross sections, angular distributions and polarizations for momenta in the center-of-mass frame up to k = 400 MeV. With these results we discuss the CP violation in the Ξ → πΛ and Ω → πΞ weak decays.
I. INTRODUCTION
Why should we study pion-hyperon (πY ) interaction? It is not hard to see that, due to their instability, it is not an easy task for an experimentalist to make beams of pions and hyperons, let them collide and study what happens in such collisions. As far as we know, no experimental data on πY interaction are available. In such a situation, is there any practical interest, besides academic one, in theoretically studying these interactions?
In 1957, Okubo [1] observed that the CP violation allows Σ and Σ̄ to have different branching ratios into conjugate channels. Pais [2] extended this proposal also to Λ and Λ̄ decays. In these reactions, the final-state strong interaction between the decay products plays a very important role. The few studies on πY interactions we could find in the literature [3][4][5] are related to the Ξ → πΛ decay, in which an independent estimate of the πΛ strong phase shifts is needed to correctly analyze the data and conclude about the CP violation. In these references, however, the results presented show some discrepancy among them, especially on δ S , requiring clarification. As for the other interactions, such as πΣ and πΞ, within our limited knowledge no study has ever been done.
Besides, we have a somewhat different motivation for the present study. It is by now well known that in high-energy proton-nucleus collisions, the inclusively produced hyperons usually appear polarized [6][7][8]. Several models have been proposed to explain this phenomenon [9][10][11][12][13], which at least qualitatively, or even quantitatively, can account for the hyperon polarization. However, as for the anti-hyperons, which are generally produced also with polarization [7,8], none of these models is applicable, since all of them are based on some leading-particle effect in which the incident proton is transformed into a leading hyperon. 1 In [15], it is proposed that at least part of the polarization is caused by the final-state interaction of the (anti-)hyperon with the surrounding hot medium where it is produced during the collision of the incident objects. This mechanism would be the dominant one in the case of anti-hyperon polarization, since they cannot be produced as leading particles. In [15], this idea was put forward within a hydrodynamic model, by treating the interaction with the hot medium as given by an optical potential, reproducing all the qualitative features of the existing data. Evidently, it is desirable that, if possible, a more realistic microscopic interaction be used instead of a purely phenomenological potential with fitted parameters. Since pions are dominant in the hot medium mentioned above, the microscopic interactions of our interest would be pion-hyperon (or more precisely pion-anti-hyperon) interactions. However, except for a few results on πΛ, we are not aware of any study on these interactions. So the main object of the present work is to study the low-energy (with respect to the surrounding medium) pion-hyperon interactions, aiming at a later computation of anti-hyperon polarization in high-energy hadron-nucleus collisions.
The plan of presentation is the following. We shall first explain, in the next section, the general strategy of treating the pion-hyperon interactions. Then, in sections III, IV and V, we apply it, respectively, to the π−Λ, π−Σ and π − Ξ cases. Phase shifts are calculated and from these the energy dependence of the total cross-section, the angular distribution and the polarization for each reaction are computed in these sections. Conclusions are drawn in section VI. Basic formalism is given in the appendix.
II. STRATEGY FOR THE STUDY OF THE PION-HYPERON INTERACTIONS
How could we proceed to study the low-energy πȲ interactions? First of all, due to the CPT invariance, it is enough to study the πY interactions instead of the πȲ ones. For instance, theȲ polarization is obtained from the corresponding one for Y , just by changing the sign. Next, recall that unlike the πY ones, the low-energy πN interaction is, for obvious reasons, very well studied since a long time. There is a large amount of experimental data, and also many models [16][17][18][19][20][21] that reproduce them pretty well. Here, we shall consider a chiral-invariant effective Lagrangian model. In [22], such a Lagrangian was written in terms of π, N , ρ and ∆ fields as a sum of where N , ∆, φ, ρ are the nucleon, delta, pion and rho fields with masses m, m ∆ , m π , and m ρ , respectively, µ p and µ n are the proton and neutron magnetic moments, M and τ are the isospin matrices and Z is a parameter representing the possibility of the off-shell-∆ having spin 1/2. In addition, it also included a σ term as a correction and parametrized it in a way we will show below. Now, since πΛ, πΣ and πΞ systems are similar to πN , we can make an analogy and use the same prescription explained above, adapting it appropriately. The ∆(1232) resonance plays a central role in the low-energy πN interaction. Its contribution dominates the total cross section of π + p (T = 3/2) process and is also important to the other isospin channels. The lowest energy hyperon resonances and their main decay modes are quite well known, so it is possible to use these resonances replacing ∆(1232). As for the coupling constants, they can be estimated from the resonance widths [5].
Another detail we have to take into account is the unitarization of the amplitudes. In an effective model like the one we are considering, the amplitudes we directly obtain are real, consequently violate the unitarity of the S matrix. So, if we want something more than simple cross section, some procedure is required to unitarize the amplitudes. As is often done in effective models [18,20,21,23], and will be explained in detail in the next section, we will do this by reinterpreting the calculated amplitudes as elements of reaction matrix K. Now we are ready to calculate all the phase shifts and then the total cross sections, angular distributions and polarizations. Because we are interested in low energies (k ≤ .4 GeV), we will limit ourselves to the S and P waves, which are generally enough for our purpose.
III. PION-LAMBDA INTERACTION
The πΛ interaction is the simplest case. Since Λ has isospin 0, the scattering amplitude T πΛ has the general form where p µ and p ′ µ are the initial and final 4-momenta of Λ in the center of mass frame, k µ and k ′ µ are those of the pion, and θ the scattering angle. Indices a and b indicate the initial and final isospin states of the pion. We show in Fig. 1 the relevant diagrams, where we have omitted the crossed diagrams, although they are included in the calculations. We consider only the first resonance Σ * (1385), because we are interested in the low-energy (k ≤ 0.4 GeV) behavior. The ρ exchange term is absent in the πΛ case, because due to the isospin it does not couple to Λ. To compute the first two of these diagrams, the Lagrangians (1) and (2) have been adapted by replacing the nucleon by Λ or Σ, and ∆ by Σ * , and performing appropriate sums over isotopic spin indices.
Fig. 1: Diagrams for πΛ Interaction
The contributions of the diagram (1a) to the amplitudes are The diagram (1b) gives where ν and ν r are defined in the Appendix and As for the diagram (1c), we only parametrize the amplitudes as done in [16] A σ = a + bt , where a = 1.05 m −1 π and b = −0.80 m −3 π are constants (we use the same values of [22] for πN ). The scattering matrix will then have the form and we can make the partial wave decomposition with The amplitudes a l± , calculated in a tree-level approximation, are real and, so, the corresponding S matrix is not unitary. In order to unitarize these amplitudes, we reinterpret them as elements of K matrix and write The phase-shifts are then computed as The parameters we use are m Λ = 1.115 GeV, m Σ = 1.192 GeV, m Σ * = 1.385 GeV, m π = 0.139 GeV [25], g ΛπΣ = 11.7 [26,27] and Z = −0.5 [22]. The only parameter that is missing is g ΛπΣ * . As mentioned before, we estimate it from the resonance width. Namely, by comparing the δ P 3 phase-shift in the resonance region with the relativistic Breit-Wigner expression [24], where k 0 is the center-of-mass momentum at the peak of Σ * (1385), that is 0.207 GeV. The value obtained in this way is g ΛπΣ * = 9.38 GeV −1 , which we will use here. In Fig. 2, we show the calculated phase-shifts as functions of the center-of-mass momentum k . We also show there the k dependence of the total elastic cross section, the angular distribution and the Λ polarization as function of x = cos θ , for k = 100, 200, 300 and 400 MeV. As we can see, the Σ * (1385) contribution dominates the total elastic cross section in the low energy region (quite similar to π + p scattering). As for the polarization, it begins positive at lower energies and then becomes negative above k ∼ k 0 .
IV. PION-SIGMA INTERACTION
In the case of πΣ interaction, both π and Σ have isospin 1, so the compound system can have isospin 2,1 or 0. For this reason, the scattering amplitude is somewhat more complex in this case and has the following general form where α, β, γ and δ are isospin indices. Decomposing this amplitude into the i-th. isospin states of the system (P i are the projection operators), we have Comparing (17) and (18) we obtain These are the relations that determine all the amplitudes projected on isospin states. The interaction Lagrangians are given by (4), (6) and where the isospin combination matrix t obeys β| t|α = −iǫ βαcêc .
The amplitudes corresponding to the diagram a), Fig. 3, are The ρ exchange amplitude, diagram e), has the form so with Finally, the σ-term has been parametrized in the same way as for πΛ, by using eqs. (11), with the same parameters. In addition to the parameters used in the πΛ case, we use here m Λ * = 1.406 GeV , m ρ =.769 GeV, µ Σ 0 = .649, µ Σ − = −0.16 and g ΣπΣ =6.7 [27]. The coupling constant g ΣπΛ * is not known, but we can proceed in the same way as we have done before, comparing the calculated amplitudes with the Breit-Wigner expression. The best fit is obtained with g ΣπΛ * = 8.74 GeV −1 .
We show in Fig. 4 the phase shifts calculated as explained above. It is also shown the energy dependence of the cross section σ t for each channel described below. Using the isospin formalism we calculate the elastic, as well as the charge exchange, amplitudes as We can see in Fig. 4 that, although the first resonance is important in πΣ interactions, it is not as much as in the π + p or πΛ scatterings. The peak in the Λ * (1405)-mass region is not so high (less than 30 mb) and it appears in the I = 0 state (π − Σ + ). Remark that the other reactions (I = 1 and especially I = 2) have comparable total cross sections.
Before passing to the next section, it is worth-while making the following remarks. Even in the tree-level calculation and in the low-energy (k < ∼ 0.4 GeV) region that we are considering here, there could occur the exchange reactions πΛ ⇀ ↽ πΣ . The possible diagrams for these are similar to Fig. 1 b) and Figs. 3 b) and e), with one of Λ (Σ) replaced by Σ (Λ). However, the contributions of these reactions are small compared with the elastic ones we examined in this paper. First, as mentioned before and could be seen in Figs. 2 and 4, the directresonances dominate over all the other processes, which appear as corrections to the former. They don't change much the cross sections, however are necessary to produce polarization. Now, πΣ → πΛ , which is given by the Σ * term together with ρ exchange one, is much smaller than πΛ → πΛ , because the branching ratio of Σ * decay is (Σ * → πΣ)/(Σ * → πΛ) ∼ 0.16 [25]. As for πΛ → πΣ compared with πΣ → πΣ , as mentioned above first we have σ(πΛ → πΣ)/σ(πΛ → πΛ) ∼ 0.16 for each possible channel. Now, from Figs. 2 and 4, each πΛ → πΛ channel, compared with the sum of the three prominent πΣ → πΣ channels, gives σ(πΛ → πΛ)/ σ(πΣ → πΣ) ∼ 0.60 on the average in the resonance region. So, we estimate that the overall πΛ → πΣ contribution is less than 20% of πΣ → πΣ examined here.
V. πΞ INTERACTION
This case is very similar to the πN scattering, because Ξ has isospin 1/2 (as the nucleon) and the main difference is that the resonance of interest Ξ * (1533) has isospin I=1/2 (instead of I=3/2 as ∆(1232)). Then, the scattering amplitude T ba πΞ has the general form The contributing diagrams are in Fig. 7 and the Lagrangians are almost the same as in the case of πN scattering, eqs. (1-4), where we must replace the N field by Ξ field, and ∆(1232) by Ξ * (1533). The latter implies a substitution of the isospin matrix M by τ . Consequently, A ± Ξ * and B ± Ξ * have different structures as compared with A ± ∆ and B ± ∆ of πN case, whereas all the other A ± and B ± remain the same, with appropriate parameter changes. Fig. 7: Diagrams to πΞ Interaction So, by computing the Feynman diagram a) in Fig. 7, we obtain The ρ exchange, diagram d), gives The contributions from diagram b) with intermediate Ξ * (1533) are wherê The parameters used are m Ξ =1.318 GeV, m Ξ * =1.533 GeV, µ Ξ 0 = −1.25, µ Ξ − = 0.349 and g ΞπΞ = 4. As in the previous cases, we determined the ΞπΞ * coupling constant by using the Breit-Wigner formula and got the value g ΞπΞ * = 4.54 GeV −1 . We display in Fig. 8 the calculated phase shifts to the isospin 1/2 and 3/2 states. We can now obtain the matrix elements for each elastic and charge-exchange channel as We show, in Fig. 9, the integrated cross sections, with Ξ − in the final state, obtained with these matrix elements. We can see that in this case the Ξ(1533) resonance contribution is very important and it dominates three of the reactions. Figure 10 presents the angular distributions and polarizations for the same reactions.
VI. DISCUSSION OF THE RESULTS
In the preceding sections, by making a close analogy with the well established πN case, we have calculated the S-and P -wave phase shifts for πΛ, πΣ and πΞ interactions. Then obtained both the integrated and differential cross sections and polarizations for all the elastic and charge-exchange processes. Let us now discuss these results in connection with the two applications we mentioned in the introduction.
The first application refers to the study of the CP violation. One of the ways to verify this violation is to observe the hyperon weak decays, Λ → πN , Σ → πN , Ξ → πΛ and Ω → πΞ. In such a study, we need an independent estimate of the strong-interaction phase shifts in the final state.
For Λ-and Σ-decays, a large amount of data are available on the strong interaction phase shifts, since πN scatterings are very well studied. In the Ξ decay, there are some estimates of the S-and P 1-wave phase shifts for πΛ system. However, the reported results are conflicting with each other. Whereas the authors of [3] give δ S = −18.7 o and δ P 1 = −2.7 o , in [5], they tell that δ S = 1.2 o and δ P 1 = −1.7 o and, as for the Ref. [4], δ S is between −1.3 o and 0.1 o and δ P 1 between −0.4 o and −3.0 o . In our calculation, with the σ term included, we obtained δ P 1 = −0.36 o and δ S = −4.69 o at the Λ-mass value, that gives δ S − δ P 1 ∼ −4.3 o , that is still small. One should remark that to really fit the phase shifts in πN scattering, especially δ S , it is necessary to include other contributions as the diffractive [16] or the contact [21] terms with correct parameters. So it is possible that some correction is needed in the results we have obtained here.
In this paper, we have also calculated the πΞ phase shifts. So, it is possible to get some information about the CP violation in the Ω → πΞ decay, too. Ω has J p = 3 2 + , so the phase shifts we need are δ I P 3 and δ I D3 . Calculating the asymmetry parameter A in the same way as in [5], the approximate expression reads A = −tan(δ So the strong interaction effect in the asymmetry parameter will appear as tan(−7.17 o ) that is a value close to that obtained in the Ξ decay. So we do not expect that, in the study of CP violation in hyperon weak decays, Ω → πΞ is much useful.
The other application we mentioned in the introduction, and which was the main motivation of this work, is the inclusive (anti-)hyperon polarization in high-energy collisions. As explained there, the anti-hyperon polarization cannot be understood in terms of the usual models [9][10][11][12][13], because all of them are based on the leading-particle effect and an anti-hyperon cannot be a leading particle. In [15] it has been proposed that anti-hyperons are polarized when interacting with the surrounding particles, which are predominantly pions, that make up the environment where they are produced. So the anti-hyperon polarization would appear as an average effect of the low energy πY interaction. It is clear that, generally speaking, such an average procedure washes out any existent asymmetry, so that no polarization would appear as a consequence. This is true if we look at the central region of the collision. However, the polarization data are obtained in very forward directions where the asymmetry could be preserved. Such calculations will be reported elsewhere [28], but just observing the results of the preceding sections we can draw some conclusions. The Λ polarization, as seen in Fig. 2, is positive below 100 MeV and then changes sign, so we expect that, on averaging, most of it will be canceled out, implying that the polarization of Λ ∼ 0. As seen in Figs. 9 and 10, the Ξ − polarization is negative and very large in the channels where the cross section is large, whereas the Σ − polarization is positive in most of the cases, Figs. 6. As we can see, the hyperon polarization is different in each case, and seems to be consistent with the experimental data for the anti-hyperons [7,8]. Remark that the polarization sign changes under charge conjugation.
PROCESS PLANNING AND SCHEDULING WITH PPW DUE-DATE ASSIGNMENT USING HYBRID SEARCH
Although IPPS (Integrated Process Planning and Scheduling) and SWDDA (Scheduling With Due Date Assignment) are two popular areas in which numerous works have been done, IPPSDDA (Integrated Process Planning, Scheduling and Due Date Assignment) is a new research field in which only a few works have been done. Most of the works assign common due dates for the jobs, but this study assigns a unique due date to each job in a job shop environment. Three terms are used in the performance measure: weighted tardiness, earliness and due dates. The sum of all these terms is to be minimized. Different levels of integration of these three functions are tested. Since job shop scheduling alone is NP-hard, the integrated problem is even harder to solve; that is why hybrid search and random search are used as solution techniques. Integration is found useful, and as the integration level increases the solution gets better. Search results are compared with ordinary solutions; the searches are found useful, and hybrid search outperformed random search.
Introduction
Process planning, scheduling and due date assignment are three important functions that should be integrated. Conventionally these three functions are performed separately, but the high interdependence between them makes us consider integration. Since the outputs of upstream functions become inputs to the downstream functions, we should handle upstream functions well and integrate them with downstream functions.
In short, we should integrate these three functions to increase the global performance obtained. As we integrate more functions, the problem becomes more complex but the global performance becomes better. Integration improves shop floor loading, and we get better balanced shop floor loading. Alternative process plans provide us the opportunity to choose from, and in case of unexpected occurrences, or to obtain better balanced shop floor loading, we may choose other alternative routes. If due date assignment is integrated with the other functions, we can give more reasonable due dates, which improves the performance of the problem and reduces the penalty function.
If we consider only the scheduling problem, it belongs to the NP-hard class, and integrated solutions are even harder to obtain. For this reason exact solutions are only possible for very small problems. That is why, for larger scale problems, we should use practical methods that find a good solution instead of an exact solution. In the literature some heuristics are used to find a good solution in a reasonable amount of time. In this research we applied hybrid search and random search to find a good solution in a reasonable amount of time. In the hybrid
search, we started with random search and continued with genetic search. Since the marginal benefit of random search is very high at the beginning but gets smaller later, we used random search at the beginning. Genetic search uses previous best results, whereas random search does not use earlier results and every time produces brand new solutions; that is why genetic search is more powerful, and we continued with genetic search.
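A schematic outline of such a hybrid search, written only to illustrate the idea (the encoding, the genetic operators and all parameter values are placeholders rather than the ones used in this study), could look as follows.

```python
import random

def hybrid_search(evaluate, random_solution, crossover, mutate,
                  n_random=500, pop_size=40, generations=200):
    """Random search first, then a genetic search seeded with the best random solutions.

    `evaluate` returns the penalty (weighted earliness + tardiness + due-date cost)
    to be minimised; the other callables define the problem-specific encoding.
    """
    # Phase 1: random search, where cheap improvements come quickly at the start.
    pool = [random_solution() for _ in range(n_random)]
    pool.sort(key=evaluate)
    population = pool[:pop_size]

    # Phase 2: genetic search, reusing the best solutions found so far.
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(population, 2)
            offspring.append(mutate(crossover(p1, p2)))
        population = sorted(population + offspring, key=evaluate)[:pop_size]

    return population[0]
```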
To define the three functions briefly: according to the Society of Manufacturing Engineers, process planning is the systematic determination of the methods by which a product is to be manufactured economically and competitively. (Zhang & Mallur, 1994) defined production scheduling as a resource allocator which considers timing information while allocating resources to tasks. According to (Gordon, Proth & Chu, 2002), "The scheduling problems involving due dates are of permanent interest. In a traditional production environment, a job is expected to be completed before its due date. In a just-in-time environment, a job is expected to be completed exactly at its due date." In this study we used RDM (Random) and PPW (Process Plus Wait) as due date determination rules; RDM is used for external due date assignment and PPW for internal due date assignment.
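For reference, the process-plus-wait rule is conventionally written as below; the paper's exact parameterization is not reproduced here, so the coefficients k and q should be read as generic tunable values rather than the authors' settings.

```latex
% Conventional PPW (process-plus-wait) due date rule (generic textbook form):
% d_j : due date of job j, r_j : release time, P_j : total processing time,
% k   : processing-time multiplier, q : waiting allowance (both tunable).
\[
  d_j = r_j + k \, P_j + q
\]
```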
A survey of the literature on penalty functions shows that some works penalize only tardiness, some penalize both tardiness and earliness, some penalize the number of tardy jobs, and some additionally penalize manufacturing costs, etc. In this research we penalized weighted earliness, weighted tardiness and due-date-related costs.
Background and Related Research
Conventionally, process planning, scheduling and due date assignment are performed sequentially, but the high interdependence among these three functions forces us to consider integration. There are numerous works on the IPPS and SWDDA problems, but only a few on the IPPSDDA problem.
For the IPPS problem, the surveys of (Tan & Khosnevis, 2000) and (Phanden, Jain & Verma, 2011) are good starting points. Although it is beneficial to have alternative process plans, their marginal benefit diminishes sharply and they make the problem more complex, so the optimum number of alternative process plans should be determined and a plan among the alternatives should be selected wisely (Usher, 2003).
As noted, as the number of alternative plans increases it becomes more complex to select a plan. (Ming & Mak, 2000) studied the process plan selection problem using a hybrid Hopfield network-genetic algorithm, and (Bhaskaran, 1990) also studied process plan selection.
Developments in hardware, software and algorithms have made it easier to prepare process plans, and as a result CAPP (Computer Aided Process Planning) was developed. (Aldakhilallah & Ramesh, 1999) and (Kumar & Rajoita, 2003) studied the integration of process planning with CAPP.
As (Demir, Uygun, Cil, Ipek & Sari, 2015) mentioned, the literature on IPPS problems shows that some researchers use genetic algorithms, while others use evolutionary algorithms or agent-based solutions. Some researchers decompose the problem into smaller parts, such as loading and scheduling sub-problems.
Another popular research topic related to this work is the SWDDA problem, on which there are numerous works. A good starting point is the survey by (Gordon et al, 2002).
Due dates can be determined internally or externally. In this study we used both the RDM and PPW due date assignment rules, and we integrated process plan selection with 21 dispatching rules. Conventionally only tardiness is penalized, but in a JIT environment both earliness and tardiness are penalized. In this research we penalized weighted earliness, weighted tardiness and due-date-related costs; since nobody wants distant due dates, we penalized due dates as well.
In this research we studied an m-machine, n-job job shop scheduling problem with separate due date assignment. In the literature, many works address the SMSWDDA (Single Machine Scheduling with Due Date Assignment) problem and others the MMSWDDA (Multiple Machine Scheduling with Due Date Assignment) problem. Although many works assign a common due date, in this study we assigned a separate due date for every job; a common due date is appropriate, for example, for jobs waiting to be assembled together.
There are also numerous works on the MMSWDDA problem; (Cheng & Kovalyov, 1999) and (Lauff & Werner, 2004) are examples of this type.
Although there are numerous works on the IPPS and SWDDA problems, there are only a few on the integration of all three functions. Among works on the IPPSDDA problem, (Demir & Taskin, 2005), (Demir, Taskin & Cakar, 2004) and (Ceven & Demir, 2007) studied this new problem. (Li, Ng & Yuan, 2011) studied single machine scheduling of deteriorating jobs with CON/SLK due date assignment; they assumed the actual processing time of a job is a linearly increasing function of its starting time, used CON (common due date) and SLK (equal slack) methods for determining due dates, and considered determining optimum due dates and the processing sequence concurrently to minimize the costs of earliness, due date assignment and the weighted number of tardy jobs. (Vinod & Sridharan, 2011) studied due date assignment rules and scheduling methods in a dynamic job shop environment; as due date assignment methods they used PPW, TWK, DTWK (Dynamic Total Work Content) and RWK (Random Work Content), and as scheduling rules they used seven different rules, with system performance calculated from flow time and tardiness of the jobs. (Yin, Cheng, Xu & Wu, 2012) studied a single machine batch delivery scheduling and common due date assignment problem; while determining the job sequence, the common due date and the job delivery schedule, they also considered the option of performing a rate-modifying activity on the machine, aiming to determine the common due date, the location of the rate-modifying activity and the delivery date of each job so as to minimize the sum of earliness, tardiness, holding, due date and delivery costs. (Yin, Cheng, Yang & Wu, 2015) studied a two-agent single-machine scheduling problem with unrestricted due date assignment.
Problem Definition
Three important manufacturing functions, process planning, scheduling and due date assignment, are integrated using a random-genetic hybrid search. Classically the three functions are treated sequentially and separately, but the high interdependence between them strongly motivates integration. If process planning and scheduling are performed sequentially, process planners repeatedly select the most desirable machines, which causes unbalanced machine loading; furthermore, in case of unexpected occurrences process plans cannot be changed at the shop floor level to rebalance the load, which leads to inferior shop floor performance, with some machines becoming bottlenecks while others starve and resource utilization remains very low. If due date assignment is performed separately, unreasonably close due dates may be promised and then missed, causing customer ill will, or unnecessarily long due dates may be given, which increases the penalty function. By integrating due date assignment with the other functions, reasonable due dates that are neither too early nor too late can be assigned.
In this way promises can be kept more reliably and the performance measure improves greatly: the sum of total weighted earliness, tardiness and due-date-related costs is minimized by integrating these three functions.
Although the IPPS and SWDDA problems have been studied extensively in the literature, the integration of all three functions is a very new research topic, and in this study we integrate them using a random-genetic hybrid search.
In this study each job has alternative routes and one of them is selected; there are 21 dispatching rules from which the most suitable scheduling rule is selected; and the best parameters for the PPW due date assignment rule are selected. Hybrid search and random search are used as solution techniques and compared with each other and with ordinary solutions. The problem is represented as chromosomes, as explained in Figure 1 in Section 4.
By applying hybrid search and random search we tried to find the route, scheduling rule and due date assignment parameters that give the best performance.
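Figure 1 with the chromosome layout is not reproduced here, so the following is only a plausible encoding consistent with the description above: one gene per job for the chosen route, one gene for the dispatching rule, and two genes for the PPW coefficients. The field names and value ranges are assumptions for illustration, not the authors' actual representation.

```python
import random
from dataclasses import dataclass

@dataclass
class Chromosome:
    """Hypothetical encoding of one IPPSDDA solution (assumed layout)."""
    routes: list            # routes[j] = index of the alternative route chosen for job j
    dispatch_rule: int      # index into the 21 dispatching rules
    ppw_k: float            # PPW processing-time multiplier (assumed range)
    ppw_q: float            # PPW waiting allowance in minutes (assumed range)

def random_chromosome(n_jobs, n_routes, n_rules=21):
    # Each gene is drawn independently; the ranges for k and q are placeholders.
    return Chromosome(
        routes=[random.randrange(n_routes) for _ in range(n_jobs)],
        dispatch_rule=random.randrange(n_rules),
        ppw_k=random.uniform(1.0, 3.0),
        ppw_q=random.uniform(0.0, 480.0),
    )
```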
Three shop floors are solved: small, medium and large.
The characteristics of these shop floors are summarized in Table 1 below. For the small shop floor, there are 20 machines and 50 jobs to be scheduled; each job has 5 alternative routes and each route has 10 operations. Processing times of the operations are distributed between 1 and 30 according to a normal distribution with mean 12 and standard deviation 6, i.e. generated as ⌊12 + Z·6⌋, where Z is a standard normal random number.
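A small sketch of how such processing times could be generated follows; the clamping of out-of-range draws to the stated [1, 30] interval is an assumption, since the paper does not spell out how they are handled.

```python
import math
import random

def processing_time(mean=12.0, std=6.0, lo=1, hi=30):
    """Draw one operation processing time: floor(mean + Z*std), kept in [lo, hi]."""
    z = random.gauss(0.0, 1.0)        # standard normal variate Z
    t = math.floor(mean + z * std)
    return max(lo, min(hi, t))        # assumed clamping to the stated range

# Example: processing times for one route of 10 operations.
route_times = [processing_time() for _ in range(10)]
```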
In this research a day is assumed to be 480 minutes (one 8-hour shift, 8 × 60 = 480 minutes). Weighted earliness, tardiness and due-date-related costs are penalized.
The penalty function terms are defined as follows: weight(j) is the weight of job j; D, E and T are the due date, earliness and tardiness of job j; P.D(j), P.E(j) and P.T(j) are the penalties for the due date, earliness and tardiness of job j; and Penalty(j) is the total penalty for job j. Finally, using (5), the Total Penalty, the sum of the penalties of all jobs, is determined.
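The numbered equations themselves were lost in extraction; the following LaTeX reconstruction is consistent with the definitions above but is an assumption about their exact form (in particular, whether a single weight or separate weights multiply the due date, earliness and tardiness terms is not recoverable here).

```latex
% Hedged reconstruction of the per-job penalty terms (1)-(5):
\begin{align*}
  P.D(j)          &= \mathrm{weight}(j) \cdot D(j)   \tag{1}\\
  P.E(j)          &= \mathrm{weight}(j) \cdot E(j)   \tag{2}\\
  P.T(j)          &= \mathrm{weight}(j) \cdot T(j)   \tag{3}\\
  Penalty(j)      &= P.D(j) + P.E(j) + P.T(j)        \tag{4}\\
  Total\ Penalty  &= \sum_{j=1}^{n} Penalty(j)       \tag{5}
\end{align*}
```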
Solution Techniques Used
Two search techniques are used: hybrid search and random search. As the initial step, a main population is generated, and its chromosomes are used as the ordinary solutions; no random or genetic iteration has yet been applied to this initial main population.
Random Search:
With this technique we applied 200, 100 and 50 random iterations for the small, medium and large shop floors, respectively. Three populations are used: a main population of size 10, a crossover population of size 8 and a mutation population of size 5, chosen so that the comparison between random search and genetic search is fair. In every genetic iteration a new crossover population of size 8 and a new mutation population of size 5 are produced using genetic operators.
In random search, by contrast, the chromosomes of these two populations are produced randomly. At every iteration the previous main population and the newly produced crossover and mutation populations are pooled, and the best ten chromosomes among the three populations form the main population of the current iteration.
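The per-iteration population update described above can be sketched as follows; the evaluation function and the way crossover and mutation candidates are produced are left abstract, since only the population sizes and the selection of the ten best chromosomes are stated in the text.

```python
def next_main_population(main_pop, crossover_pop, mutation_pop, evaluate, keep=10):
    """Merge the three populations and keep the `keep` best chromosomes.

    evaluate(chrom) returns the total penalty (lower is better); main_pop is
    expected to have size 10, crossover_pop size 8 and mutation_pop size 5.
    """
    pool = list(main_pop) + list(crossover_pop) + list(mutation_pop)
    pool.sort(key=evaluate)          # ascending penalty: best solutions first
    return pool[:keep]
```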
Hybrid search: this search starts with random search and continues with genetic search. In total 200, 100 and 50 iterations were applied, as in random search, to keep the comparison fair; the initial iterations are random and the remaining ones are genetic. The rules used are given in Table 2 below, and the iteration parameters are given in Table 4.
SIRO-RDM (Hybrid):
Here all functions are disintegrated. Process plan selection is independent of scheduling and due date assignment; jobs are dispatched in random order and due dates are determined randomly.
SCH-RDM (Hybrid):
Next, the scheduling function is integrated with process plan selection, but due dates are still determined randomly. This integration yields substantial improvements in the performance measure.
SIRO-PPW (Hybrid):
Here PPW due date assignment is integrated with process plan selection, but jobs are scheduled in random order. Although this integration brings a substantial improvement, the SIRO rule strongly degrades the performance measure again.
SCH-PPW (Hybrid):
At this level all three functions are integrated: process plan selection is integrated with scheduling and the PPW due date assignment rule. With full integration we observed substantial improvement and obtained the best performance measure.
Since this is the best combination, random search was also tested at this level of integration and, as expected, the hybrid search outperformed it. In short, the full integration level with hybrid search is found to be the best combination.
Experiments and Results
The program was coded in C++ and run on a laptop with a 2 GHz processor and 8 GB of RAM, under Windows 8.1, using the Borland C++ 5.02 compiler. With other programs open in the background, the CPU times of the C++ program were recorded and are given in Table 5; the best integration level was also solved using random search.
At the beginning the unintegrated combination was tested, using the SIRO-RDM (Hybrid) and SIRO-RDM (Ordinary) combinations; here process plan selection, scheduling and due date assignment are all disintegrated. Next, scheduling was integrated with process plan selection while due dates were still determined randomly, and the SCH-RDM (Hybrid) and SCH-RDM (Ordinary) combinations were tested. Later, PPW due date assignment was integrated with process plan selection, but this time dispatching was unintegrated and jobs were scheduled in random order.
Although this integration substantially improves the solution, the SIRO rule strongly degrades the performance again; at this step the SIRO-PPW (Hybrid) and SIRO-PPW (Ordinary) combinations were tested. Finally, the full integration level was tested and found to be the best combination; since it is the best, random search was used in addition to hybrid search at this level. Full integration with hybrid search was found to be the best combination; the corresponding results are given in Table 4, and the CPU times for the small shop floor are listed in Table 5 below.
Program runs took approximately 100 to 250 seconds for the small shop floor. According to the results, the integration levels are useful, the SIRO dispatching rule and RDM due date assignment perform very poorly, and full integration with hybrid search is the best combination; hybrid search outperformed random search, and the ordinary solutions were very poor. For the medium shop floor, runs took approximately 300 to 500 seconds and 100 random or genetic iterations were performed; again the full integration level with hybrid search was found to be the best combination, the searches always outperformed the ordinary solutions, and hybrid search outperformed random search.
Conclusion
Although there are numerous works on the IPPS and SWDDA problems in the literature, there are only a few on the IPPSDDA problem. We integrated three important functions: process planning, scheduling and due date determination. We integrated these functions step by step and observed the results, and we also compared hybrid search and random search with each other and with ordinary solutions.
We have shown that a higher integration level improves performance, and the disintegrated combination gives the poorest result. Since each function pursues its own local optimum, the disintegrated combination degrades the global optimum. These three functions also strongly affect each other, so each should be performed carefully and they should be integrated and performed concurrently. For example, if process planning is performed separately, process planners may select the same desirable machines repeatedly and never select some undesired machines; this causes unbalanced machine loads at the shop floor level and reduces shop floor utilization. Similarly, if due date assignment is disintegrated from scheduling, unnecessarily distant due dates may be assigned, increasing earliness and due-date-related costs; if unrealistically close due dates are assigned, promises may not be kept, reputation is damaged and tardiness costs increase substantially. If scheduling is unintegrated, jobs with close due dates may be scheduled late, or vice versa. Conventionally only tardiness is penalized, but according to JIT both earliness and tardiness should be penalized; in this research we penalized weighted earliness, weighted tardiness and due-date-related costs. Since nobody wants distant due dates, it is better to penalize due-date-related costs as well.
Due-date-related costs include customer ill will, customer loss, price reductions, etc.
In summary, it is better to integrate all three functions, and hybrid search should be used instead of pure random search. At the beginning the marginal benefit of random search is high, so random search is worthwhile, but as its marginal benefit diminishes sharply it is better to switch to genetic search; hybrid search thus exploits the advantages of both random and genetic search. It is also reasonable to penalize all of weighted earliness, tardiness and due-date-related costs.
|
2019-02-16T14:30:08.465Z
|
2017-05-27T00:00:00.000
|
{
"year": 2017,
"sha1": "9656386e48a1f41b7dba6f76ec4c8ed57eb12895",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.20319/mijst.2016.s21.2038",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d2638457810fc1b09d54c0814cb4475460e92cd7",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
235590065
|
pes2o/s2orc
|
v3-fos-license
|
Standardizing and optimizing acupuncture treatment for irritable bowel syndrome: A Delphi expert consensus study
Background Acupuncture has been widely utilized for irritable bowel syndrome (IBS). However, heterogeneity is large among therapeutic strategies and protocols. The aim of this study was to propose practical recommendations and establish an optimized protocol for acupuncture practice in IBS. Methods A panel of 74 traditional Chinese medicine (TCM) acupuncturists participated in a clinical issue investigation. Subsequently, systematic reviews concerning acupuncture for IBS were screened within 3 databases. An initial consensus questionnaire was formed from the results of the clinical issue investigation and literature review. Ultimately, a Delphi vote was carried out to settle these issues. 30 authoritative experts with extensive experience were requested to respond with agreement, neutrality, or disagreement for the items. Consensus achievement on a given item was defined as greater than 80% agreement. Results Following a 2-round Delphi survey, 19 items reached consensus; of these, 5 items (26.32%) achieved complete consensus, and significant agreement was reached for the other 14 items. These items can be classified into 3 major domains: 1) clinical outcomes that acupuncture can bring for the favorable intervention population (5 items), 2) suitable therapeutic principles and parameters of acupuncture (13 items), 3) possible adverse events in the treatment (1 item). Conclusion With no ready-made guidelines and little homogeneity in the published literature, such an expert consensus could be valuable for TCM acupuncturists in daily practice and for patients with IBS seeking appropriate and standardized acupuncture treatment. In addition, it points out clinical issues that need to be further explored in future trials.
Introduction
Irritable bowel syndrome (IBS) is a highly prevalent functional gastrointestinal disorder worldwide, affecting approximately 7~15% of the global population, 1 , 2 and is more common among females and young people. 3 Patients with IBS suffer from chronic recurrent abdominal pain and bowel disturbance, leading to a prominently negative impact on quality of daily life and work productivity. 4 Due to its high prevalence, IBS imposes substantial medical and financial burdens, and some pharmacological treatments are associated with serious adverse events including ischemic colitis and cardiovascular events, which restricts their utility. 10 , 11 Given the existing treatment gaps, many suffering patients seek alternative treatments. 12 Acupuncture, as a major non-drug modality of traditional Chinese medicine (TCM), 13 has been extensively applied in treating functional gastrointestinal diseases including IBS. 14 , 15 In clinical practice, acupuncture is regarded as a promising physical therapy that can relieve chronic painful states, 16 which may be especially helpful for IBS patients, in light of the fact that recurrent abdominal pain is the most universal complaint and the most frequent reason for seeking healthcare assistance. 17 As a result, numerous IBS outpatients have received acupuncture treatment or consulted acupuncture practitioners for help, not only in China but also in western countries. 18 In spite of its common application in clinic, previous research evidence on acupuncture for IBS is still relatively scarce and conflicting. On the one hand, a previous Cochrane systematic review (SR) composed of 17 randomized clinical trials (RCTs) indicated that acupuncture exerted more beneficial effects than pharmacological interventions, 19 which was validated again in the latest published study. 20 On the other hand, sham acupuncture-controlled trials suggested that acupuncture achieved favourable efficacy but did not show a therapeutic advantage over sham acupuncture. 21 , 22 At present, it is still controversial whether acupuncture can bring ideal clinical outcomes for IBS patients. Furthermore, the parameters of acupuncture and acupoint selection also vary greatly across the literature, which may confound the evaluation of the actual efficacy of acupuncture. Distinct from other pharmacological therapies, acupuncture is an intricate intervention whose therapeutic effects are influenced by a battery of factors. 23 But up to now, there has been no endeavour to explore an optimal and standardized acupuncture therapeutic protocol for IBS. Meanwhile, clinical acupuncturists also have their own accumulated individual experience in daily practice, which may be worthwhile to summarize and analyse. 24 It is essential to utilize all this information to better design new trials to evaluate the true efficacy of acupuncture for IBS. Because it is difficult to reach corroborative conclusions on the basis of the current RCTs and SRs, an expert consensus study is likely to give more comprehensive and direct solutions for intricate clinical issues. 25 On this account, aiming to provide some practical recommendations for acupuncture in treating IBS, which may also be taken into consideration in future clinical trials, we invited a group of Chinese TCM acupuncture experts to carry out a multi-round consensus study via the Delphi technique, in order to gain agreement on the specific issues that acupuncture practitioners really care about.
Methods
We took the following 3 major steps to develop this expert consensus. The first step was to conduct a 2-round clinical issue investigation for generating the initial expert consensus voting list. The second step was to retrieve literatures for the research evidence in this field. The third step was to perform a 2-round Delphi consensus vote to determine the final expert consensus statements. A summarized work flow diagram of the study is shown in Fig. 1 .
A steering group was built beforehand for designing the concrete schedule, providing operational guidance, and coordinating the entire process. The steering group was made up of one senior acupuncture expert (Cun-Zhi Liu), two clinical acupuncturists (Xin-Tong Su and Ling-Yu Qi), two methodological scholars (Wei Chen and Li-Qiong Wang), and two assistants (Na Zhang and Jin-Ling Li).
Development of the clinical questionnaire
The steering group put forward the initial clinical questions on this topic via a brainstorming method. Several online video conferences within the steering group were held for further discussion on these questions with the help of Tencent Meeting version 1.4.6 (Tencent®). After multiple revisions by 3 acupuncture experts (Hui Hu, Ying Li, and Lin Du) and a methodological expert (Wei Chen), the modified list of clinical questions was sent to the acupuncturists for clinical issue investigation.
Selection of the clinical acupuncturist panel
To gain the specific issues of acupuncture in treating IBS which should be provided viable recommendations on, a clinical panel of Chinese TCM acupuncturists who had the professional background and clinical experience were invited. The clinical panel not only participated in the 2-round clinical issue investigation but also made suggestions during the process. The individuals within this panel were clinicians in the field and members of an academic association, namely China Association of Acupuncture-Moxibustion. The invitation letters were sent by e-mail or WeChat (a universal Chinese instant messaging app) in advance, to make sure that they were willing and had enough time to join in. Those who replied and consented to take part were included in the clinical panel list.
Process of the clinical issue investigation
The clinical issue investigation was divided into 2 rounds and conducted during March 3rd to 27th, 2020. A questionnaire comprised of semi-open clinical questions was distributed via the online survey program ( www.wjx.cn ) in round 1. After analyzing and summarizing the results, the steering group sent the other questionnaire which included the feedbacks from the first round and the questions for the second round to the same clinical panel again. Similar to round 1, the results of round 2 were also analyzed and summarized. On the basis of these results, the items within the expert consensus voting list were developed according to the PICO (patient, intervention, control, and outcome) principle finally.
Evidence in the field
An electronic literature retrieval within PubMed, EMBASE, and Cochrane Library databases was conducted on April 2nd, 2020 using the following terms: ("acupuncture" OR "electroacupuncture" OR "scalp acupuncture") AND ("irritable bowel syndrome" OR "irritable bowel" OR "irritable colon" OR "functional bowel disease") AND ("systematic review" OR "meta-analysis"). The detailed search strategy applied in PubMed is shown in Supplementary material S1 . Based on the search strategy established by the steering group, systematic reviews (SRs) on acupuncture therapy in treating IBS, restricted to English-published, with full-text obtainable, were included to provide relevant research evidence to the expert panel for better making judgements during the Delphi vote stage. Literatures were cross-searched by 2 researchers (Xin-Tong Su and Na Zhang) independently to make sure that all eligible articles could be identified. Using a pre-designed information summary table, two assessors (Jin-Ling Li and Ling-Yu Qi) extracted information from each paper independently. Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was applied to assess the quality of evidence extracted from the SRs by the 2 methodological scholars (Wei Chen and Li-Qiong Wang). The quality of evidence could be divided into 4 levels, namely "high", "moderate", "low" or "very low". 26 , 27 Inter-assessor discrepancies could be resolved through an arbitration by a third researcher (Cun-Zhi Liu). Then the relevant research evidences and their corresponding GRADE ratings were presented on the expert consensus voting list as the reference for making judgements. In addition, all the experts were encouraged to indicate any papers they thought were missing.
Generation of the expert consensus voting list
Integrating the results of clinical issue investigation with the relevant research evidences extracted from the included SRs, an initial expert consensus voting list for the Delphi survey was prepared. Aiming to help the experts better acquire the current information of acupuncture in treating IBS and making an accurate and objective judgment, each item within the voting list would be attached with the results of clinical issue investigation or the corresponding contents of research evidence.
Selection of the expert consensus panel
On the basis of a comprehensive review of published literature and acupuncture textbooks in this field, the lead authors or editors were identified and considered as potential candidates. Suitable authoritative experts had to possess at least a deputy senior title and more than 10 years of acupuncture practical experience, and to be well versed in acupuncture treatment of IBS. After confirmation of their participation, the panel of experts took part in the multi-round Delphi survey. Prior to the formal Delphi consensus vote, several remote video teleconferences between the expert panel and the steering group were held to introduce the entire workflow in detail and respond to the experts' queries. To ensure confidentiality and independence, both the identities of the experts and their individual choices were kept secret from each other, so that they did not need to worry about making selections contrary to those of others. Meanwhile, the experts were requested to hold an impartial attitude and make an objective judgement on each item.
Process of the Delphi consensus vote
The consensus vote was implemented with the assistance of the Delphi method, which is a widely adopted structured process including 3 main distinguishing features: not requiring face-to-face contact, controlled feedback, and statistical expert response. [28][29][30] As such, this method has been utilized successfully in previous expert consensus studies of acupuncture treatment. [31][32][33] To make sure that the responses were collected entirely, experts were requested to fill in their real names in each round. The experts were instructed to make a judgement for each item by integrating the evidence with their individual knowledge and clinical experience. Comment boxes were also attached to the voting list to collect the reasons for their selections and provide experts with the opportunity to share their suggestions. Panelists could fill in the explanations for their choices on each item. These feedbacks contributed to further modifying the items within the process. The voting results of the last round would be shown in the voting list for the next round. The same online questionnaire tool ( www.wjx.cn ) was applied for producing the expert consensus voting list and collecting the responses from the expert panel. The consensus vote was conducted during April 5th to 30th, 2020.
A 3-option question including 'Agree', 'Neutral' and 'Disagree' was used to measure the experts' attitudes toward each item. The results of the Delphi consensus vote were displayed as percentages. According to previous studies of this type, consensus achievement was defined as more than 80% agreement amongst respondents. 34 , 35 If the level of agreement was ≤ 80%, the item still lacked agreement and needed a further round either to reach a higher level of agreement or to end without consensus. 28 In an iterative mode, the identical process was implemented among the same experts again. The general voting results, the experts' own choices, and anonymous qualitative comments from the last round were presented on the new voting list, allowing the participants another opportunity to reconsider the items in view of the whole expert panel's responses. To minimize experts' fatigue and workload, items for which the level of agreement was < 50% in the last round were not discussed in the next round, and no more than 3 rounds of voting were held.
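As an illustration of how the voting rules above translate into a tally, the following is a minimal sketch; the vote counts are invented placeholders, and only the thresholds (>80% for consensus, <50% dropped, at most 3 rounds) come from the description above.

```python
def classify_item(agree, neutral, disagree):
    """Classify one Delphi item from its raw vote counts."""
    total = agree + neutral + disagree
    agreement = agree / total * 100          # percentage of 'Agree' responses
    if agreement > 80:
        return "consensus reached"
    if agreement < 50:
        return "dropped from next round"
    return "revote in next round (up to 3 rounds in total)"

# Hypothetical example with 30 experts: 28 agree, 1 neutral, 1 disagree.
print(classify_item(28, 1, 1))   # -> consensus reached
```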
Formulation of the final expert consensus statement
Based on the results of the Delphi consensus vote, the steering group took charge of drafting the initial manuscript at first. Subsequently, this manuscript was sent to all the experts who participated in the Delphi consensus vote for scrutiny. After further discussion and revision through several online meetings and e-mails, the final expert consensus statement was built.
Clinical issue investigation
A total of 100 individuals from different administrative regions of China were invited initially. Ultimately, 74 TCM acupuncturists (response rate: 74%) expressed interest and participated in the first-round clinical issue investigation, and 70 of them completed both rounds; the 4 dropouts cited lack of time due to personal arrangements. Over half of the clinical practitioners held a doctoral degree and had more than 10 years of acupuncture practical experience. The detailed demographics of the acupuncturist clinical panel are shown in Table 1.
Following iterative discussion and modification within the steering group, the initial questionnaire of clinical issue investigation consisting of 23 semi-open questions was generated and sent out in round 1. The predominant contents of the questionnaire were made up of the following 4 aspects: 1) suitable intervention population (2 questions), 2) acupuncture principles and parameters (17 questions), 3) clinical outcomes (2 questions), and 4) adverse events (2 questions). The detailed information of these questions is shown in Supplementary material S2 . After analyzing and summarizing the results of the first round, 9 semi-open questions got exact answers, and the other 14 questions were further explained in detail to the participants and asked again in round 2. Information from the 2-round clinical issue investigation was collected and used for developing the items within the Delphi consensus voting list.
Evidence in the field
The initial database search yielded 24 records in total. Among these, 17 articles were excluded by title and abstract screening. After detailed full-text screening, we found that one Cochrane SR had an updated version and that the same version had been published in the Cochrane Database of Systematic Reviews and another journal simultaneously. 19 , 36 , 37 Therefore, 5 eligible SRs were finally included in our study, comprising 3 traditional SRs 37-39 and 2 network meta-analysis SRs. 18 , 40 The results of interest were extracted as research evidence and attached to the corresponding items on the initial expert consensus voting list. However, according to the GRADE system, the quality of much of this evidence was rated "low" or "very low". Moreover, only a few items (5/19) could be matched with corresponding evidence from these SRs; the other 14 items on the voting list, for which no evidence was available, were provided with the results from the 2-round clinical issue investigation as reference. The detailed results extracted from the included SRs and their GRADE ratings are shown in Supplementary material S3.
Delphi consensus vote
In aggregate, 30 well-known Chinese TCM acupuncture experts (response rate: 100%) accepted our invitation and completed both rounds of the Delphi survey without dropout. 28/30 experts (93.33%) held a senior title and 21/30 (70%) had at least 20 years of acupuncture practical experience. The detailed demographics of the expert consensus panel are shown in Table 1 and Supplementary material S4. The initial expert consensus voting list included 19 items built on the basis of the clinical issue investigation. After the 2-round Delphi vote, 5 of 19 items (26.32%) achieved complete consensus, and more than 80% expert agreement was reached for the other 14 items. Expert consensus was achieved on 17 of 19 items in just one round ( Fig. 2 ), and the other 2 controversial items that were near consensus in the first round were voted on again by the same experts in the second round ( Fig. 3 ) and ultimately also met the standard for consensus. According to the panelists' feedback in the online questionnaires, no new item was put forward by the expert panel during the 2-round vote. Therefore, 19 statements in total were compiled into the final version of the expert consensus.
The 19 statements addressed the most relevant and debatable topics on acupuncture in treating IBS, which can be roughly categorized into 3 major domains: (1) therapeutic effects, (2) therapeutic principles and protocols, (3) possible adverse events. The detailed contents of the final expert consensus statements are shown in Table 2. The specific contents of the consensus reached can be summarized as follows. Statements 1–5 refer to the clinical outcomes that acupuncture may bring for the suitable intervention population. The consensus reached by the experts in this part is: acupuncture can be recommended for mild and moderate IBS patients to relieve clinical symptoms, improve quality of daily life, and ameliorate psychological and mental conditions. Additionally, acupuncture is also recommended for the treatment of the 3 IBS subtypes based on the Rome IV Criteria. The efficacy produced by one course of acupuncture treatment can be maintained for 1 to 6 months.
Statements 6–18, as the key part, primarily address the therapeutic principles and concrete protocols of acupuncture, including the underlying TCM theory, specific acupoint selection, acupoint combination, treatment frequency, etc. The experts recommend that acupuncture treatment be conducted based on syndrome differentiation among 4 types of TCM syndromes. The specific acupuncture protocols recommended for IBS in this consensus are as follows: acupuncturists can select acupoints on the 4 meridians, namely the Spleen Meridian, Stomach Meridian, Large Intestine Meridian, and Liver Meridian, and stress should be laid on the application of specific acupoints. Tianshu (ST25), Zusanli (ST36), and Zhongwan (CV12) can be chosen as the main specific acupoints. Shu-Mu combination and Xiahe-Mu combination are the recommended specific acupoint combination methods. 4~6 acupoints are preferably applied per session. After routine skin disinfection, the needles should be slowly and vertically/horizontally inserted into the corresponding acupoints. The uniform reinforcing-reducing acupuncture manipulation can be conducted to elicit De qi for patients. Needles can be retained for 30 min per session. The ideal treatment frequency is 3 times per week, and the whole course of treatment would ideally last 4 weeks. Other TCM therapies, such as moxibustion or Chinese herbal medicine, are recommended in combination with acupuncture if necessary.
Statement 19 considers the general incidence of possible adverse events in the treatment of IBS with acupuncture. Even though there are some possible adverse events, the expert panel believes that acupuncture is a safe therapy and that these adverse events are uncommon and tolerable.
Table 2 (fragment recovered from the page layout) – consensus statements 9–18 and the round in which consensus was reached:
9. It is recommended to choose Mu acupoints and Xiahe acupoints as the common species of applied specific acupoints. (Round 1)
10. It is recommended to choose Shu-Mu combination and Xiahe-Mu combination as the common applied specific acupoint combinations. (Round 1)
11. De qi is a crucial factor for the achievement of favorable therapeutic efficacy. (Round 1)
12. Course of treatment is a crucial factor for the achievement of favorable therapeutic efficacy. The recommended course of treatment should be 4 weeks. (Round 1)
13. It is recommended to choose the uniform replenishing-reducing method as the common applied acupuncture manipulation. (Round 2)
14. It is recommended to select 4~6 acupoints per session. (Round 1)
15. The recommended duration of needle retention should be 30 min per session. (Round 1)
16. The recommended treatment frequency should be 3 times per week. (Round 1)
17. It is recommended to combine acupuncture with other TCM therapies, such as moxibustion or Chinese herbal medicine, so as to improve the clinical efficacy. (Round 1)
18. It is recommended that acupuncture practitioners eligible for IBS treatment should own the TCM license and have at least 3 years of medical practice experience.
Discussion
The purpose of this study was to achieve consensus among a panel of prestigious TCM acupuncture experts on specific clinical issues and develop a relatively optimized and standardized acupuncture protocol in the treatment of IBS via a Delphi consensus study. Ultimately, 19 items categorized into 3 main domains reached consensus. In view of the theoretical background of acupuncture, some statements in the final expert consensus are specific to the TCM meridian terms. Nevertheless, this expert consensus may still provide some practical and generalized recommendations for clinical acupuncturists.
According to the final consensus statements, most of the experts ( > 90%) agreed that acupuncture could be used to relieve clinical symptoms and improve the quality of daily life in mild and moderate IBS. The recommendations are consistent with the results of a previous clinical trial that compared acupuncture plus usual care with usual care alone; this study concluded that acupuncture could reduce the IBS Symptom Severity Score and improve the proportion of successful treatment. 41 Two SRs indicated that acupuncture produced a preferable therapeutic effect in improving IBS patients' quality of life compared with sham acupuncture or western medicine. 37 , 38 Of note, a recently published high-quality study including 531 patients found that acupuncture was more effective in reducing IBS symptoms and improving quality of life than pharmacologic therapies. 20 In addition, these effects could last up to 3 months, which is also in line with our consensus statement that relief of IBS symptoms can be maintained for 1 to 6 months after one course of acupuncture treatment. Because IBS frequently involves a psychological component, refractory IBS symptoms can exacerbate patients' coexisting psychological distress. 4 The experts reached a complete consensus (100%) that acupuncture could ameliorate the psychological or psychiatric conditions of patients. In light of the absence of corresponding evidence, we suggest that mental-state outcomes be focused on in more future studies, so as to verify the efficacy of acupuncture in this respect.
Based on the classical Meridian theory, specific acupoints are specially defined points on the 14 main meridians, which may have their own specific therapeutic effects. Specific acupoint selection is critical in that different specific acupoints should be applied in distinct diseases, and stimulation of the relevant specific acupoints for a certain disease can produce better effects than other acupoints. 42 , 43 Therefore, in particular, we laid stress on the application of specific acupoints and set 3 items to discuss these issues. The recommended specific acupoints in this consensus were Tianshu (ST25), Zusanli (ST36), and Zhongwan (CV12). Of note, this result is concordant with the finding of two SRs that these acupoints were 3 of the top 6 most commonly adopted acupoints. 40 , 44 Although the 3 specific acupoints are routinely considered for the treatment of gastrointestinal diseases and supported by Chinese Medicine theory, further clinical trials are still needed to corroborate their superiority over other acupoints in treating IBS. In addition, the overwhelming majority of experts acknowledged Mu acupoints and Xiahe acupoints as the common species of specific acupoints, and Xiahe-Mu combination as the common acupoint combination, which is in line with the traditional concept that Mu acupoints and Xiahe acupoints are the preferred choices in treating Zangfu diseases such as IBS. 44 Because acupoint selection and combination play a crucial role in enhancing the efficacy of acupuncture, we suggest that not only clinical practice but also the design of new trials may refer to these expert consensus statements.
Because acupuncture is inherently a complex therapy, well-defined optimal conditions have an impact on its maximal efficacy. Unfortunately, these vital parameters vary greatly across papers, which casts uncertainty on their comparability. Published literature supports the idea that the better clinical outcomes produced by acupuncture may be dose-dependent and influenced by appropriate acupuncture protocols. 45 Nevertheless, this point is usually overlooked by RCTs, which focus much more on validity assessment and pay less attention to the optimal protocol. 31 There were 5 items discussed to ensure adequate acupuncture stimulation for achieving the ideal therapeutic effects, covering the course of treatment, acupuncture manipulation, number of acupoints per session, duration of needle retention, and treatment frequency. Although more than 80% of the experts considered these acupuncture parameters reasonable in the final statements, we suggest more studies to explore their reliability.
Because IBS is a chronic and recurrent disease, patients may need long-term treatment. Given this, it is no wonder that safety is of great concern to patients and clinicians. Patients face an increased risk of adverse drug reactions, especially with long-term use. The consensus agrees that adverse events are uncommon in the treatment of IBS with acupuncture, which is another advantage of acupuncture demonstrated by previous SRs. 18 , 37 Therefore, a physical therapy like acupuncture, with few side effects, may be worth considering for patients who have suffered from IBS for a long time.
Evidence-based medicine, especially RCTs and SRs, has an invaluable impact on medical research and clinical practice, yet it also has several intrinsic weaknesses. Generally speaking, RCTs require rigid inclusion and exclusion criteria that restrict the study population, so their results may not be generalizable to routine clinical treatment. 26 Additionally, there are clinical measures, such as acupuncture parameters and acupoint selection, about which acupuncturists are genuinely and urgently concerned in real practice but which have not been answered by published articles. These complicated clinical issues are also hardly solvable by RCTs and SRs in a short time. An expert consensus study sets up a structured process to collect information on a series of semi-open questions of interest with controlled feedback, followed by multi-round expert votes to achieve agreement on specific issues. 35 The Delphi methodology has the particular merit that it can be carried out reliably even when current research evidence is insufficient or uncertain in a given field. [46][47][48] The Delphi survey can synthesize experts' opinions in a high-quality and scientific way and provides a valuable means of determining solutions to controversial clinical issues. 30 Notwithstanding, aiming to run this study more systematically, before the formal expert consensus vote we carried out a two-round clinical issue investigation to comprehensively understand the real needs regarding acupuncture in the treatment of IBS among acupuncturists from different regions of China. Simultaneously, we sought evidence from SRs and presented it, together with the results of the clinical investigation, to provide the expert panel with more information for making decisions.
As this expert consensus is intended mainly for the Chinese acupuncture clinical setting, it is necessary to explain why Chinese SRs were not included in this study. Before constructing the search strategy, the steering group discussed this issue and consulted the methodological experts for their opinions. Eventually we decided to choose English-published SRs from the PubMed, EMBASE, and Cochrane Library databases for the following reasons: (1) Most of the retrieved SR articles published in English had already included the Chinese-published RCTs originating from Chinese databases such as CNKI, CBM, VIP, Wanfang, etc. Therefore, the expert panel members could still learn about the meta-analysis results of these studies. (2) Even though a number of Chinese-published SR articles about acupuncture could be found, most of them did not follow the PRISMA statement strictly. 49 Due to cherry-picking of RCTs for meta-analysis, the results of these publications are mostly positive, but their methodological quality is rated "low" or "critically low" according to the AMSTAR tool. 50 To avoid confounding the expert panel's decision making, we did not choose the Chinese-published SR articles. (3) Both positive and negative results could be extracted from the English-published SRs, and their reporting and methodological quality was relatively higher, which helped the experts make an objective selection for the items.
The method of expert selection in this study, and the fact that evidence is insufficient (or absent) for many items under discussion, are open to debate. We fully agree that a multidisciplinary expert panel is preferable for ensuring a balance of perspectives and generating an objective consensus, and that consensus reached by an expert board comprising professionals in IBS who do not necessarily practice acupuncture could achieve higher credibility. However, this survey concerns a complicated intervention, and its primary intention is to acquire pragmatic answers from reputable experts to specific clinical issues under circumstances in which the relevant evidence is very scarce. Almost 70% of the items voted on in the consensus study are closely associated with the concrete acupuncture procedures that need to be standardized and optimized; these recommendations are also what acupuncture practitioners really want to learn about in daily practice. Without referable evidence, these items have to be discussed among experts with a background and actual clinical experience in acupuncture. It is difficult for experts who have never practiced acupuncture in the treatment of IBS to make a judgement on these practical issues, especially in the absence or lack of available evidence from the published literature. Therefore, only authoritative acupuncture experts were recruited in our study. Nonetheless, to counteract probable bias as far as possible, we set a more rigorous criterion for consensus achievement (over 80% agreement) rather than the usual 70% or 75%. 51 , 52 Moreover, we doubled the sample size of experts needed, while the minimum allowed sample size in a Delphi survey is 12~15. 29 , 53 , 54 The final expert panel is also a mix of practitioners and academics. On the other hand, evidence is indeed essential in establishing guidelines for clinical practitioners. For a long time, alternative and complementary techniques have been criticized as being prone to rely on ideology, beliefs, and personal experience, rather than on proper and well-built evidence. Although numerous acupuncture clinical trials have emerged in recent decades, most of these studies focused only on assessing the efficacy of acupuncture, and the fact that acupuncture is an intricate intervention whose therapeutic effects can be influenced by a series of factors is always overlooked. The concrete acupuncture protocols vary greatly across papers, which may confound the exact interpretation of their results. Given the scarce or absent evidence in the field, the comprehensive experts' opinions collected with the assistance of the Delphi method may provide another source of reference on which acupuncturists can base their treatment at present. Although many items in this consensus are specialists' recommendations rather than a guideline, and their validity still needs further verification, these items can at least point out reference directions for future studies and indicate what can be taken into consideration in better designing new study protocols and which optimal acupuncture parameters should be further explored.
There are several potential limitations to the study. The major limitation is that only Chinese TCM experts, as opposed to international acupuncture experts, participated in the survey, although they came from different areas of China. In spite of the careful expert panel selection and the rigorous Delphi method followed, it is likely that the consensus does not cover the entire acupuncture community's opinion. While TCM-style acupuncture is probably the most commonly used style of acupuncture in many countries, it uses diagnoses and treatment techniques that differ from other traditional acupuncture styles practiced, for example, in Japan, Korea, Australia, the UK, the US, Europe and so on. Hence, the transferability of these TCM-based recommendations to other countries may be hindered by the circumscribed medical theory background. Another limitation is that most of the final statements were based on the individual opinions and clinical experience of experts; expert opinion is regarded as the lowest level in the evidence hierarchy. 55 Thus, the agreement on a certain item applies at a particular point in time and may change with emerging new evidence and experience. In addition, even though there may be some divergence between doctors' and patients' opinions toward the treatment, 56 our clinical issue investigation was conducted only among clinicians, which means the consensus was based on doctors' general perspective and is not patient-specific.
Given the limitations mentioned above, the current survey is to some degree more a preliminary dialectical consensus than a proper evidence-based guideline, and it needs to be interpreted with caution by readers. Notwithstanding, we look forward to proper RCTs and robust SRs that can verify the experts' recommendations and further underpin an ideal expert consensus or clinical guideline. When more evidence is available, an updated multidisciplinary, international, and thoroughly evidence-based expert consensus survey will be feasible and indispensable.
Taken together, it is necessary to provide patients suffering from IBS with safer and more cost-effective therapeutic choices, and acupuncture is one of the promising alternative non-drug interventions in IBS treatment. Therefore, we delineated a therapeutic paradigm to improve standardization in clinical practice and trials, given that current research evidence is fairly insufficient. The proposed recommendations are not claimed to be the best or the most correct ones, but they do give TCM acupuncturists some pragmatic guidelines with which to polish and refine their treatment.
Data availability
The data that support the findings of this study are available within the article and its supplementary materials.
|
2020-12-14T20:14:51.959Z
|
2021-04-24T00:00:00.000
|
{
"year": 2021,
"sha1": "543c4dd60ce77b791ad477921ea664c6ebe7582f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.imr.2021.100728",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "edc63d68d935a7d7a484a4d155ea0e8dc5fa87da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258773665
|
pes2o/s2orc
|
v3-fos-license
|
The Study of Customer Satisfaction on Natural Skincare Products for MSME’s E-Business Sustainability
The Covid-19 pandemic caused most activities to be carried out online and increased the demand for natural skincare products. E-business is a promising business model during and after the pandemic for MSMEs that sell skincare products made from natural ingredients. This research aimed to determine the influence of the quality and price of natural skincare products on customer satisfaction, which can support the sustainability of MSMEs' E-business. The object of this research is Zavennie, an MSME that sells green skincare products using the B2C E-business model. This study uses a quantitative approach, employing a survey distributed through an online questionnaire; the respondents are customers who have bought Zavennie products. The data analysis method applied is multiple linear regression. The results revealed that product quality and price, both partially and simultaneously, had a significant influence on customer satisfaction. The findings also indicate that improving product quality and setting an appropriate selling price can support the sustainability of the E-business of MSMEs that produce skincare products made from natural ingredients, and they provide a reference for MSMEs that operate E-business to enhance product quality and determine appropriate selling prices to maintain customer satisfaction. For further study, it is suggested to use a wider research object and involve more respondents to gain deeper insight into the factors that influence customer satisfaction and their impact on the sustainability of the MSME E-business model.
Introduction
The majority of business ventures in Indonesia are in the form of MSMEs. In 2018 Indonesia had 64.2 million MSMEs (99.99%) [1]. Technology disruption has driven MSMEs to implement E-business models in their business activities. The E-business model in Indonesia is widely used by MSMEs (Kementrian Komunikasi dan Informatika Republik Indonesia 2015). In fact, [2] discovered that E-business models such as business-to-business (B2B), business-to-consumer (B2C), and consumer-to-consumer (C2C) are suitable to be applied in Indonesia. However, many MSMEs are reluctant to operate the E-business model due to limited access and difficulties in establishing partnerships in the marketplace [3].
Further analysis is needed so that E-business is massively applied by MSMEs in Indonesia. In addition, it is also necessary to examine actions to maintain the sustainability of MSMEs' E-business in Indonesia. The COVID-19 pandemic has forced most activities to be conducted online. The pandemic also influences MSMEs' business activities in terms of increasing implementation of the E-business model. Furthermore, it is essential to maintain the E-business of MSMEs in the post-pandemic era to expand business exposure and increase financial performance.
Skincare products have become a necessity that is classified as a primary need. Hence, it drives tight competition in the skincare business because of the great demand by consumers [4]. New competitors in the skincare market cause customers to feel confused and anxious about selecting the appropriate type of skincare. Deciding which skincare products to purchase is critical for customers to prevent undesirable outcomes such as skin damage. The issues often faced by customers regarding skincare products are a lack of knowledge about the ingredients, differences between the actual product quality and the product as advertised, and illegal products circulating in the market. These problems cause customers to feel uneasy and fearful when they want to purchase skincare products. Hence, these concerns induce customers to use skincare products made from natural ingredients (green skincare). In addition, these issues also provoke customers to demand natural skincare products at affordable prices. Therefore, it is necessary to conduct a study on consumer satisfaction to support the sustainability of the E-business model implemented by MSMEs producing skincare products made from natural ingredients.
One of the MSMEs that implements the B2C E-business model and produces skincare products made from natural ingredients is Zavennie. In general, Zavennie is a new business venture engaged in skincare products made from natural ingredients (green skincare). Following the recent phenomenon, Zavennie also operates online B2C marketing. In addition, Zavennie sells skincare products made from natural ingredients at affordable prices. Zavennie's income statement growth rate is relatively good because there has been a significant increase in sales of its skincare products. Business prospects and growing sales performance are indicators of Zavennie's success in implementing the E-business model. Hence, it is considered an attractive case for a further study concerning Zavennie's customer satisfaction, which can support the sustainability of the E-business model implemented.
Objectives
This study aimed to examine the influence of the quality and price of natural skincare products on consumer satisfaction in order to maintain the sustainability of MSMEs' E-business. This study has originality in terms of its research object, which involves a new business venture in the form of an MSME. The findings in this study will provide a reference for MSMEs' E-business to improve product quality and determine appropriate selling prices in order to maintain customer satisfaction and business continuity.
Literature review
The intensive business competition requires business enterprises to offer quality and added value to distinguish their products from those of their competitors. Product quality is one of the consumer's concerns before making a purchase. The reliability of product quality is embedded in the minds of consumers. In addition, customers are also increasingly critical of what they receive and expect from a product. [5] argues that the ability to fulfil consumer demands determines consumer satisfaction, which also reflects the quality of the product; hence every business venture should make optimal endeavours to improve the quality of its products in order to meet consumer expectations. If the product does not meet customer expectations, a business venture will lose its potential customers. [6] even stresses that the ability to fulfil consumer expectations will create consumer satisfaction, which will encourage repeat buying; on the contrary, if consumers are dissatisfied with the product's quality, it will drive them to switch to other brands. [7] also emphasize that product quality can affect customer satisfaction; furthermore, it is determined by product performance, additional product features, conformity with specifications, reliability, durability, aesthetics (product appearance), and serviceability. Based on this, the issue of product quality should be examined in relation to customer satisfaction [8].
Price is one of the aspects that consumers consider before buying a product. Furthermore, the selling price affects consumer perceptions. Selling price is an indicator of value when it is linked with the perceived benefits of an item. [9] even assert that price is an indicator of product quality. When determining the value of an item, consumers will compare the ability of an item or service to meet their needs with substitute goods or services. According to [10], there are four indicators regarding the price factor, i.e., affordability, competitiveness, appropriateness with benefits, and suitability with product quality. Therefore, price can also affect consumer satisfaction. Hence, the selling price also needs to be considered because it is associated with customer satisfaction [11].
Consumer satisfaction is crucial because it will impact the company's financial performance and public perception of a product [12]. In addition, customer satisfaction can also be a differentiating factor in the market or business competition [13]. However, [14] argues that one of the biggest challenges for E-business is providing and maintaining customer satisfaction. This implies the importance of the management team's ability to provide high-quality products, exceptional service, and competitive selling prices in order to succeed in E-business competition. Furthermore, research by [15] reveals that consumer satisfaction can support the development of business ventures, boost brand recognition, and lessen negative word of mouth from consumers. These findings display the importance of achieving and maintaining customer satisfaction for a business venture.
The product quality and price are components of the marketing mix, which aims to fulfil sales targets. There are several indicators of customer satisfaction, such as the suitability of expectations, the increasing intention of repeat orders or revisiting, and willingness to recommend to others [16]. Research conducted by [4], [17][18][19] reveals that product quality and price have a significant and positive influence on customer satisfaction. Therefore, there is an influence of price and product quality on customer satisfaction. The hypotheses in this study are:
H1: The product quality has a significant influence on customer satisfaction.
H2: The price has a significant influence on customer satisfaction.
H3: The product quality and price have a significant influence on customer satisfaction.
Methods
This study is quantitative research. The population in this research consisted of customers who had bought Zavennie products in or outside Malang City. The sample in this study comprises 50 customers determined by the census technique, i.e., the total number of people who had purchased Zavennie products during February-December 2021. The measurement scale in this study uses a questionnaire in which the respondents' answers are measured using Likert scales [20]. The objective of employing Likert scales is to record the respondents' answers as numerical scores from 1 to 5: strongly disagree, disagree, neutral, agree, and strongly agree.
The instrument test in this research consists of a validity and a reliability test [21]. The classical assumption tests consist of normality, multicollinearity, and heteroscedasticity tests [21], as well as autocorrelation and linearity tests [22]. The data analysis method is descriptive analysis to determine the average and total values [23] using the scale-range formula for descriptive analysis [24], and verification analysis consisting of multiple linear regression analysis [25] and the coefficient of determination [26]. In addition to the descriptive and verification analyses, this study also uses hypothesis tests consisting of two tests, i.e., the t-test and the F-test [27]. A sketch of the verification analysis is given below.
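As an illustration only, the verification analysis described above could be reproduced with ordinary least squares in Python; the original study used SPSS, so the file name and column names below are hypothetical placeholders.

```python
# Minimal sketch of the verification analysis: multiple linear regression of
# customer satisfaction (Y) on product quality (X1) and price (X2), plus the
# adjusted R-squared, partial t-tests and simultaneous F-test reported later.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("zavennie_survey.csv")                  # hypothetical file, 50 respondents
X = sm.add_constant(df[["product_quality", "price"]])    # X1 and X2 plus an intercept
y = df["customer_satisfaction"]

model = sm.OLS(y, X).fit()
print(model.params)                   # constant and regression coefficients
print(model.rsquared_adj)             # coefficient of determination (Adjusted R Square)
print(model.tvalues, model.pvalues)   # partial tests of H1 and H2
print(model.fvalue, model.f_pvalue)   # simultaneous test of H3
```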
Data collection
The research data collected in this study are primary data. The primary data were acquired through an online questionnaire, which provides detailed and accurate information for finding solutions to the pre-existing problems.
Numerical results
The multicollinearity test in this study indicates that the tolerance value is 0.611 and the VIF is 1.636. Therefore, no tolerance value is below 0.10, and the VIF value is less than 10. Hence, there is no multicollinearity. In addition, the heteroscedasticity test shows that the significance level of product quality is 0.729 > 0.05, and the significance level of price is 0.675 > 0.05. Hence, there is no heteroscedasticity issue. The autocorrelation test in this study displayed a significance value of 0.253. Since 0.253 > 0.05, the regression model has no autocorrelation, i.e., the residuals occur randomly. Moreover, the linearity test in this study reveals that the value of Sig. Deviation from Linearity on product quality is 0.884 > 0.05, and on price is 0.268 > 0.05. Hence, this suggests a linear relationship between product quality, price, and customer satisfaction. This study also calculates the coefficient of determination. The Adjusted R Square value is 0.590, which means that product quality (X1) and price (X2) affect Zavennie's consumer satisfaction by 59.0%, while the remaining 41.0% is affected by other factors. The Adjusted R Square value is interpreted using coefficient intervals: the correlation is very strong for 0.80-1.000, strong for 0.60-0.799, quite strong for 0.40-0.599, weak for 0.20-0.399, and very weak for 0.00-0.199. Hence, the Adjusted R Square of 59.0% indicates that the relationship between the variables in this research model is quite strong. Table 3 summarizes the Adjusted R Square calculation in this study, and a sketch of the assumption checks is given below.
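Purely for illustration, and assuming the same design matrix X from the regression sketch above, the multicollinearity check could be computed as follows; the VIF < 10 and tolerance > 0.10 cut-offs follow the decision rules used in this study.

```python
# Sketch of the multicollinearity check: variance inflation factor (VIF) and
# tolerance per predictor, flagged against the VIF < 10 / tolerance > 0.10 rules.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

def multicollinearity_check(exog, predictor_cols):
    """exog: design matrix including the constant; predictor_cols: {name: column index}."""
    X = np.asarray(exog, dtype=float)
    report = {}
    for name, idx in predictor_cols.items():
        vif = variance_inflation_factor(X, idx)
        tol = 1.0 / vif
        report[name] = {"vif": vif, "tolerance": tol, "ok": vif < 10 and tol > 0.10}
    return report

# e.g. multicollinearity_check(X, {"product_quality": 1, "price": 2})
```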
Graphical results
The normality test in this study used the one-sample Kolmogorov-Smirnov test. The test results indicated a significance value of 0.183. Since 0.183 > 0.05, the data are normally distributed. Figure 1 depicts the summary of the normality test results in this study. In addition, the histogram graph follows a bell-shaped line and does not deviate to the right or left. Hence, the data in this study are normally distributed and fulfil the assumption of normality. Figure 2 displays the histogram of the normality test in this study. Furthermore, the data points in this study gather around the diagonal line and follow it. Therefore, the residuals of the regression model are normally distributed. Figure 3 displays the result of the normality test from the P-P plot curve.
Proposed improvements
Overall, the statistical results indicate the relationships between variables that can affect customer satisfaction. The variables used in this research are product quality and price. In particular, the findings regarding the significant influence of product quality on customer satisfaction support the opinion of [8], which describes how reliable product quality becomes embedded in the minds of consumers so that it can influence customer purchase decisions. Based on the research findings, the sustainability of Zavennie's E-business as an MSME that produces natural skincare products can be accomplished by maintaining consistency or enhancing product quality. Zavennie can ensure the sustainability of the B2C E-business model by paying attention to the dimensions of product quality described by [7], such as durability, aesthetics, and perceived quality. These dimensions could improve product quality and increase Zavennie's customers' purchase intentions. If product quality is consistently maintained, it will also contribute to customer retention due to customer satisfaction with the quality of Zavennie's products.
The findings in this study also show that the price factor has a strong influence on Zavennie's customer satisfaction. These findings confirm the research of [11], which states that the price factor can influence customer perceptions of the product and, eventually, customer satisfaction. The findings also reveal the need to apply the right pricing strategy. Zavennie can maintain the sustainability of its E-business by paying attention to price aspects such as price suitability with benefits, competitiveness, compliance with product quality, and affordability [10]. The consistency of the pricing strategy with the product's benefits can improve Zavennie's customer satisfaction because customers feel that they get skincare products made from natural ingredients at a proportional price. The proper pricing strategy not only influences customer satisfaction but also creates value for Zavennie's products. This reflects added value in the eyes of customers, so that they will feel satisfied with the value received for the money paid.
The research findings confirm the previous research by [4], [17][18][19], which states that product quality and price have a significant and positive influence on customer satisfaction, either partially or simultaneously. In particular, the results of this study imply that the sustainability of Zavennie's E-business as an MSME that produces skincare products made from natural ingredients is achieved by maintaining product quality. Exemplary product quality is an opportunity and creates a competitive advantage in building customer satisfaction. By improving product quality, customers will trust Zavennie's products. Concerning the price factor, Zavennie should be able to determine the proper pricing strategy because, in the skincare business, price is a crucial consideration for customers. The right pricing strategy also supports Zavennie in competing with the prices of similar products from competitors. Besides product quality and the correct selling price, building close relationships with customers can create a pleasing experience. Eventually, this will drive customers to make repeat purchases of Zavennie's products.
Validation
The validity test shows that all question item scores are valid. All scores for each question exceed the R-table value, which is 0.2787. The R-table value is obtained at df = N-2, where N is the number of respondents (50 customers), using a two-tailed significance level of 0.05. Table 4 displays the summary of the validity test. The reliability test in this study also indicated that all variables were reliable. The product quality Cronbach's Alpha is 0.792 > 0.6, so H0 is accepted. The price Cronbach's Alpha is 0.741 > 0.6, so H0 is accepted. The customer satisfaction Cronbach's Alpha is 0.760 > 0.6, so H0 is accepted. Table 5 summarizes the reliability test in this study. Moreover, multiple linear regression analysis revealed a constant of 0.712. Therefore, if product quality and price are 0, customer satisfaction at Zavennie is 0.712. Furthermore, if product quality increases by one unit, customer satisfaction increases by 0.364. In addition, if price increases by one unit, customer satisfaction increases by 0.469. The hypothesis testing in this study employed the t-test and F-test. The t-statistic values were obtained through the SPSS program. The t-table value was obtained as t-table = t(0.05/2; n-k-1) with a significance of 5% (0.05), so t-table = t(0.025; 47) = 2.012, where n is the number of samples and k is the number of independent variables. Figure 4 displays the multiple linear regression coefficient test and the result of the t-test in this study. The t-statistic for product quality (X1) is 3.893 > 2.012 (t-table) with a significance value of 0.000 < 0.05. Therefore, H0 is rejected and H1 is accepted for product quality. Hence, product quality partially has a positive influence on Zavennie's customer satisfaction. In addition, the t-statistic for price (X2) is 3.491 > 2.012 (t-table) with a significance value of 0.001 < 0.05. Hence, H0 is rejected and H2 is accepted for price. Therefore, price partially has a positive effect on Zavennie's customer satisfaction. A sketch of how these critical values and reliability coefficients can be computed is given below.
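As a hedged illustration of the cut-off values quoted above (n = 50 respondents, k = 2 predictors, two-tailed alpha = 0.05), the R-table, t-table and F-table values and the Cronbach's Alpha reliability coefficient could be reproduced as follows; the shape of the item matrix is an assumption.

```python
# Sketch of the critical values used in the validity and hypothesis tests,
# together with Cronbach's Alpha for a single questionnaire scale.
import numpy as np
from scipy import stats

n, k = 50, 2
t_item = stats.t.ppf(1 - 0.05 / 2, df=n - 2)
r_table = t_item / np.sqrt((n - 2) + t_item ** 2)    # ~0.2787, validity cut-off
t_table = stats.t.ppf(1 - 0.05 / 2, df=n - k - 1)    # ~2.012
f_table = stats.f.ppf(1 - 0.05, dfn=k, dfd=n - k)    # ~3.19

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    items = np.asarray(items, dtype=float)
    m = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return m / (m - 1) * (1 - item_variances / total_variance)
```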
Besides the t-test, this study also employs the F-test to test the hypotheses. The F-statistic was obtained through the SPSS program. The F-table value was obtained as F-table = F(k; n-k) with a significance of 5% (0.05), so F-table = F(2; 48) = 3.19, where n is the number of samples and k is the number of independent variables. The F-statistic is 36.238 > 3.19 (F-table) with a significance value of 0.000 < 0.05. Therefore, H0 is rejected and H3 is accepted. Hence, product quality and price simultaneously have a positive influence on Zavennie's customer satisfaction.
The hypothesis testing indicates that price has a higher regression coefficient value than product quality. Therefore, price has a greater influence than product quality. Based on the significance value of 0.000 in the t-test, product quality has a partial and significant influence on customer satisfaction. In addition, the t-test significance value for product quality is less than 0.05, showing that product quality influences customer satisfaction. Furthermore, the t-test significance value for price is 0.001. Hence, price has a partial and significant influence on customer satisfaction. These findings also show that the t-test significance value for price is smaller than 0.05. Therefore, price also influences customer satisfaction.
The F-test displays a significance value of 0.000. Therefore, product quality and price have a simultaneous and significant effect on customer satisfaction. Moreover, the F-test significance value for product quality and price is smaller than 0.05. Hence, product quality and price affect customer satisfaction. The coefficient of determination reveals an Adjusted R Square of 0.590, or 59.0%, which confirms that this research model is quite strong. The Adjusted R Square displays the magnitude of the influence of product quality and price on customer satisfaction. The remaining 41.0% is affected by factors not involved in this study.
Hypothesis testing reveals that product quality and price simultaneously influence consumer satisfaction. This implies that product quality and selling price require particular attention due to their significant influence on consumer satisfaction. This finding also supports the view of [9] that price is an indicator of product quality. On the other hand, product quality can also be a factor that determines the selling price of the product. A selling price in accordance with product quality that meets consumer expectations will foster the achievement of consumer satisfaction.
If examined separately, the hypothesis tests in this study indicate that, partially, both product quality and selling price also have a significant influence on consumer satisfaction. Product quality in accordance with consumer expectations will cause consumers to feel satisfied. The findings in this study also support the opinion of [5], which emphasizes the ability of a business venture not only to fulfil consumer demand but also to provide good-quality products in accordance with consumer expectations. This signifies the importance of improving or maintaining product quality to satisfy consumers. As stated by [6], consumer satisfaction will encourage repeat buying. Hence, satisfied consumers will drive the creation of brand loyalty.
However, the hypothesis tests in this study also display interesting findings. They reveal that the price variable has a greater influence than product quality on consumer satisfaction. This finding also strengthens the view of [9], which states that the selling price is an indicator of product quality. A selling price in accordance with product quality will lead to consumer satisfaction because consumers feel they have spent costs proportionate with the quality obtained. Therefore, it is crucial for a business venture to set the right selling price that fulfils consumer expectations toward product quality. The results of hypothesis testing in this study show the critical role of product quality and selling price in consumer satisfaction. [12] argue that consumer satisfaction will impact financial performance and product perceptions in the market. Based on the results of hypothesis testing in this study, a selling price proportionate with product quality will improve purchases and hence boost the business's income. Furthermore, a high-quality product can create a positive perception, because consumers consider the product able to provide satisfaction. In addition, [13] also stated that customer satisfaction can be a differentiating factor for business ventures. The results of hypothesis testing in this study support this opinion. An appropriate selling price and good product quality will distinguish a business venture from its competitors. Hence, a selling price and product quality that can satisfy consumers can be a competitive advantage for a business.
Conclusion
The findings in this research reveal that product quality partially has a significant and positive influence on customer satisfaction. In addition, the selling price partially also has a significant and positive influence on customer satisfaction. Furthermore, this study finds that, simultaneously, product quality and price have a significant and positive influence on customer satisfaction, where the relationship in the research model is quite strong. Based on those findings, it can be concluded that MSMEs that produce skincare products made from natural ingredients can maintain the sustainability of their E-business by paying attention to the product quality and selling price aspects. These two aspects need to be assessed carefully; failure to do so will bring a significant impact in the future. Both product quality and price factors influence consumer satisfaction and the company's financial performance. Hence, these two factors need to be considered to guarantee the sustainability of the MSME's business. The findings of this study provide a reference for MSMEs that operate an E-business model to enhance product quality and determine an appropriate selling price to maintain customer satisfaction. For further study, it is suggested to apply a wider research object and involve more respondents to gain deeper insight into the factors that influence customer satisfaction and their impact on the sustainability of the MSME E-business model.
Fig. 3. The result of the normality test from the P-P plot curve.
Fig. 4. The result of the multiple linear regression coefficient test and t-test.
Table 2. Result of the heteroscedasticity test in this study.
Table 2. Summary of the linearity test in this study.
Table 3. Coefficient of determination (Adjusted R Square) model summary.
Antibiotic and heavy-metal resistance of Vibrio parahaemolyticus isolated from fresh shrimps in Shanghai fish markets, China
Vibrio parahaemolyticus is a causative agent of human serious seafood-borne gastroenteritis disease and even death. Shrimps, often eaten raw or undercooked, are an important reservoir of the bacterium. In this study, we isolated and characterized a total of 400 V. parahaemolyticus strains from commonly consumed fresh shrimps (Litopenaeus vannamei, Macrobrachium rosenbergii, Penaeus monodon, and Exopalaemon carinicauda) in Shanghai fish markets, China in 2013–2014. The results revealed an extremely low occurrence of pathogenic V. parahaemolyticus carrying two major toxic genes (tdh and trh, 0.0 and 0.5 %). However, high incidences of antibiotic resistance were observed among the strains against ampicillin (99 %), streptomycin (45.25 %), rifampicin (38.25 %), and spectinomycin (25.50 %). Approximately 24 % of the strains derived from the P. monodon sample displayed multidrug resistant (MDR) phenotypes, followed by 19, 12, and 6 % from the E. carinicauda, L. vannamei, and M. rosenbergii samples, respectively. Moreover, tolerance to heavy metals of Cr3+ and Zn2+ was observed in 90 antibiotic resistant strains, the majority of which also displayed resistance to Cu2+ (93.3 %), Pb2+ (87.8 %), and Cd2+(73.3 %). The pulsed-field gel electrophoresis (PFGE)-based genotyping of these strains revealed a total of 71 distinct pulsotypes, demonstrating a large degree of genomic variation among the isolates. The wide distribution of MDR and heavy-metal resistance isolates in the PFGE clusters suggested the co-existence of a number of resistant determinants in V. parahaemolyticus population in the detected samples. This study provided data in support of aquatic animal health management and food safety risk assessment in aquaculture industry.
Introduction
China has become the world's largest producer of aquatic products since 2002 (People's Republic of China, Fishery Products Annual Report). Along with the fast-growing aquaculture industry, however, aquatic animal diseases have also rapidly increased. Antimicrobial agents are commonly used in the animal breeding industry and effectively prevent disease outbreaks caused by pathogenic microorganisms. Nevertheless, the inappropriate usage of antimicrobial drugs in aquaculture has contributed to the development of antibiotic-resistant bacteria and imposed serious problems on aquatic ecosystems, particularly in developing countries (Woolhouse and Farrar 2014). For example, high incidences of resistance to antimicrobial agents such as ampicillin, rifampicin, and streptomycin have been reported in V. parahaemolyticus isolates originating from some aquatic products in Asian and European countries, e.g., southern China (Xie et al. 2015), Korea (Kang et al. 2015), Poland (Lopatek et al. 2015), and Italy (Ottaviani et al. 2013). On the other hand, in addition to increasing industrialization, environmental pollution has become one of the most challenging issues in developing countries. A high occurrence of heavy-metal-resistant bacteria has been detected in various environments, e.g., marine, river and agricultural soil (Sabry et al. 1997;Ansari et al. 2008;Malik and Aleem 2011). Water contaminated with industrial pollutants (e.g., heavy metals) has been supposed to enhance the selection for antibiotic resistance and vice versa (Matyar 2012;Zhao et al. 2012).
V. parahaemolyticus is a Gram-negative, halophilic bacterium that thrives in marine, estuarine, and aquaculture environments worldwide (Broberg et al. 2011;Letchumanan et al. 2014). The bacterium is a causative agent of serious human seafood-borne gastroenteritis disease and even death (Boyd et al. 2008;Ceccarelli et al. 2013). In China, the incidence of food-borne illnesses caused by consumption of aquatic products contaminated with V. parahaemolyticus has become one of the most important food safety risks, particularly in the southeast littoral provinces (Wang et al. 2007;Chen et al. 2010). Shrimps, often eaten raw or undercooked, are an important reservoir of V. parahaemolyticus. To date, numerous studies have been conducted to characterize V. parahaemolyticus from clinical samples in different parts of the world (e.g., Boyd et al. 2008;Broberg et al. 2011;Ceccarelli et al. 2013;Tsai et al. 2013;Letchumanan et al. 2014); nevertheless, insufficient information is available on isolates from aquaculture products, such as various shrimps in China (e.g., Chen et al. 2012;Song et al. 2013;Xu et al. 2014;Albuquerque Costa et al. 2015;Xie et al. 2015). Thus, in this study, we aimed to determine the antibiotic and heavy-metal resistance of 400 V. parahaemolyticus strains isolated from four types of fresh shrimps commonly consumed in Shanghai, China, in order to address the lack of molecular ecological data on the bacterium in aquaculture products.
Sample collection
The fresh shrimps, including L. vannamei, M. rosenbergii, P. monodon, and E. carinicauda, were collected monthly from Shanghai fish markets in Shanghai, China from June to November in 2013 and 2014. The former three are widely cultured in the southeast littoral provinces in China, while the latter is a type of small shrimp grown in Shanghai and neighboring areas. L. vannamei (known as the Pacific white shrimp) is the most widely cultured and productive alien crustacean worldwide. It is native to the western Pacific coast of Latin America and has been introduced commercially since 1996 into China and several countries in Asia. The freshwater culture of L. vannamei has proven even more successful than brackish water culture conditions (Tang et al. 2014). M. rosenbergii (known as the giant river prawn) is the most important cultured freshwater prawn in the world. It is native to the Indo-Pacific region, northern Australia, and Southeast Asia, and is now farmed on a large scale in many countries (Sahul Hameed and Bonami 2012). P. monodon (known as the black tiger shrimp) is a marine crustacean widely cultured in its natural distribution region of the Indo-Pacific (Nunan et al. 2005). E. carinicauda is widely distributed in the East China Sea. It is one of the major economic shrimp species cultured in China. The samples, stored in sterile plastic bags (Shanghai Sangon Biological Engineering Technology and Services Co., Ltd., Shanghai, China), were immediately transported in an icebox to our laboratory at Shanghai Ocean University in Shanghai, China for experiments.
Isolation and identification of V. parahaemolyticus isolates

V. parahaemolyticus was isolated and identified according to the instructions of the Chinese Government Standard (GB17378-2007) and the Standard of the Bacteriological Analytical Manual of the US Food and Drug Administration (8th Edition, Revision A, 1998) (Song et al. 2013). Briefly, aliquots (25 g) of each shrimp sample were individually homogenized in appropriate volumes of alkaline peptone water (APW, Beijing Land Bridge Technology Co., Ltd., Beijing, China) using the lab blender BagMixer (Interscience, Paris, France). Microbial cells in the supernatant were appropriately diluted and spread on CHROMagar™ Vibrio (CHROMagar, Paris, France) or thiosulfate citrate bile salts sucrose (TCBS, Beijing Land Bridge Technology Co., Ltd., Beijing, China) agar plates. The plates were incubated at 37°C for 24 h. Colonies were picked out, screened, and identified according to the method described previously (Song et al. 2013). Genomic DNA preparation, oligonucleotide primer synthesis, PCR reactions and sequence analysis were performed as previously described (Song et al. 2013;Tang et al. 2014). The virulence genes (tdh and trh) were detected by PCR as previously described (Song et al. 2013). V. parahaemolyticus ATCC33847 (tdh+ trh−) (Fujino et al. 1965) and ATCC17802 (tdh− trh+) (Baumann et al. 1973), isolated from clinical and food-poisoning cases, respectively, were used as positive control strains as described previously (He et al. 2015).
Susceptibility to antimicrobial agents and heavy metals
V. parahaemolyticus isolates were measured for in vitro susceptibility to ten antimicrobial agents using the Kirby-Bauer disk diffusion method according to the Clinical and Laboratory Standards Institute (CLSI, 2006, Approved Standard-Ninth Edition, M2-A9, Vol. 26 No. 1) (Song et al. 2013). Mueller-Hinton agar medium (Oxoid, UK) and disks with antimicrobial agents (Oxoid, UK) were used in this study, including 10-μg ampicillin (AMP), 30-μg chloramphenicol (CHL), 10-μg streptomycin (STR), 10-μg gentamicin (CN), 30-μg kanamycin (KAN), 5-μg rifampicin (RIF), 100-μg spectinomycin (SPT), 30-μg tetracycline (TET), 5-μg trimethoprim (TM), and 25-μg SXT (sulfamethoxazole (23.75 μg)-trimethoprim (1.25 μg)). Susceptible, intermediate, or resistant phenotypes were reported according to the established breakpoints for V. parahaemolyticus. Where established breakpoints of some antimicrobial agents for the bacterium were lacking, the values for Vibrio cholerae or Enterobacteriaceae were used. To date, no standard method is available to measure bacterial susceptibility to heavy metals. Tolerance of the isolates to heavy metals was determined according to the method described previously (Malik and Aleem 2011;Song et al. 2013). The minimal inhibitory concentration (MIC) in vitro of the tested heavy metals against the isolates was measured quantitatively using broth dilution testing (microdilution) (CLSI, 2006). The heavy metals used in this study included NiCl2, CrCl3, CdCl2, PbCl2, CuCl2, ZnCl2, MnCl2, and HgCl2 [Analytical Reagent (AR), Sinopharm Chemical Reagent Co., Ltd, Shanghai, China]. The assays were performed in triplicate experiments, and the quality control strains Escherichia coli ATCC25922 and K12, purchased from the Institute of Industrial Microbiology (Shanghai, China), were used in the antibiotic and heavy-metal resistance tests, respectively (Malik and Aleem 2011;Song et al. 2013). An illustrative sketch of reducing a broth microdilution readout to an MIC value is given below.
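The following is only an illustrative sketch of how a two-fold broth microdilution readout can be reduced to an MIC per isolate and compared with the control strain; the concentration series and the "tolerant if the MIC exceeds that of E. coli K12" rule are stated assumptions, not the study's exact breakpoints.

```python
# MIC = lowest tested concentration at which no visible growth occurs;
# an isolate is scored as tolerant when its MIC exceeds the control strain's MIC.
from typing import Dict, Optional

def mic_from_wells(growth_by_conc: Dict[float, bool]) -> Optional[float]:
    """growth_by_conc maps concentration (ug/mL) -> visible growth (True/False)."""
    inhibited = [c for c, grew in sorted(growth_by_conc.items()) if not grew]
    return inhibited[0] if inhibited else None   # None: growth at every tested level

def is_tolerant(isolate_mic: Optional[float], control_mic: float) -> bool:
    # Assumed classification rule for this sketch: compare against E. coli K12.
    return isolate_mic is None or isolate_mic > control_mic
```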
PFGE-based genotyping analysis
The PFGE analysis was performed according to the method described previously (He et al. 2015). Genomic DNA fragments digested with the restriction endonuclease NotI (Japan TaKaRa BIO, Dalian Company, Dalian, China) were resolved in a CHEF Mapper system (Bio-Rad Laboratories, Hercules, Calif., USA). Chromosome DNA of Salmonella enterica strain H9812 was digested with the restriction endonuclease XbaI (Japan TaKaRa BIO, Dalian Company, Dalian, China) and used as DNA molecular markers ranging from 20.5 to 1,135 kb. PFGE patterns were analyzed using the NTSYSpc 2.10e Software according to the unweighted pair group method with arithmetic mean based on Dice coefficients.
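The cluster analysis itself was run in NTSYSpc; purely as an illustration of the same procedure, and assuming band calling has already yielded binary band presence/absence profiles for each isolate, the Dice/UPGMA step could be sketched as follows.

```python
# Pairwise Dice similarity between band profiles, UPGMA (average-linkage)
# clustering, and grouping of isolates at an ~87 % similarity cut-off.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dice_similarity(a, b):
    """a, b: binary vectors of band presence/absence for two isolates."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def upgma_pulsotype_groups(profiles, similarity_cutoff=0.87):
    """profiles: isolates x bands binary matrix."""
    n = len(profiles)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = 1.0 - dice_similarity(profiles[i], profiles[j])
    tree = linkage(squareform(dist), method="average")          # UPGMA
    return fcluster(tree, t=1.0 - similarity_cutoff, criterion="distance")
```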
Results and discussion
Virulence of the V. parahaemolyticus isolates

L. vannamei, M. rosenbergii, P. monodon, and E. carinicauda are very common shrimps consumed in Shanghai, China. Pure cultures of 100 randomly selected V. parahaemolyticus strains isolated from each type of shrimp were analyzed in this study. Pathogenic V. parahaemolyticus produces two major toxic proteins, thermostable direct haemolysin (TDH) and TDH-related haemolysin (TRH), which play a crucial role in the diarrhea disease elicited by the bacterium (Boyd et al. 2008). In this study, a total of 400 V. parahaemolyticus strains were subjected to the detection of the two virulence-associated genes by PCR. The results revealed that none of the isolates carried the toxic tdh gene. However, the trh gene was detected as positive in two isolates derived from L. vannamei and P. monodon, respectively. A very low occurrence of pathogenic V. parahaemolyticus has also been reported in the majority of non-clinical samples previously (e.g., Chao et al. 2009;Song et al. 2013;Haley et al. 2014).
Susceptibility of the V. parahaemolyticus isolates to antimicrobial agents

Antimicrobial susceptibility of the 400 V. parahaemolyticus isolates was determined, and ten antimicrobial agents were tested. As illustrated in Fig. 1, all the isolates were susceptible to CHL and TET. Of these, a total of 35 strains showed non-resistance to all ten drugs. Since 2002, CHL, its salts and esters (including chloramphenicol succinate) have been banned from use in the breeding industry in China (China Department of Agriculture, Bulletin No. 193), which may serve as an explanation of this result. Our observation correlated with a recent report (Xie et al. 2015) showing that all 150 V. parahaemolyticus isolates in aquatic products collected from South China markets were also susceptible to CHL. Although the wide usage of TET, sulfonamides and quinolones in aquaculture has been reported (Holmström et al. 2003), resistance to TET was not detected in any of the V. parahaemolyticus isolates in this study. In contrast, consistent with previous studies (Matyar 2012;Song et al. 2013;Kang et al. 2015), AMP resistance was the most predominant (99 %) among the isolates examined in this study. Moreover, resistance of the isolates to the other antimicrobial agents was also observed, including STR (45.3 %), RIF (38.3 %), and SPT (25.5 %). Meanwhile, the isolates showed high levels of intermediate susceptibility to these three drugs (Fig. 1). High incidences of resistance to STR (88.7, 50.7 %) have recently been reported in V. parahaemolyticus isolates originating from aquatic products in China (Xie et al. 2015) and Korea (Kang et al. 2015) as well. The STR-resistant V. parahaemolyticus isolates in Korea also showed a RIF-resistant phenotype (50.7 %). As a broad-spectrum antibiotic, SPT is often used in the livestock and poultry breeding industry. In this study, about 25.5 % of the isolates exhibited a strong resistance phenotype against SPT, which was not detected previously. In addition, very few isolates exhibited tolerance to TM (1.25 %), KAN (1.00 %), CN (1.00 %), and SXT (0.75 %). Nevertheless, a high percentage of intermediate susceptibility to KAN was detected in the isolates (70 %), suggesting a potential resistance trend for this drug.
Fig. 1 Antimicrobial susceptibility of the four hundred V. parahaemolyticus strains isolated from the fresh shrimp samples collected in Shanghai fish markets in 2013-2014. AMP ampicillin, CHL chloramphenicol, CN gentamicin, KAN kanamycin, RIF rifampicin, SPT spectinomycin, STR streptomycin, SXT sulfamethoxazole-trimethoprim, TET tetracycline, TM trimethoprim.
As shown in Fig. 2, our results also revealed distinct resistance patterns yielded by the V. parahaemolyticus isolates of different shrimp origins. AMP resistance was the most predominant among all the samples (97-100 %). High percentages of AMP resistance have also been observed in the bacterium isolated from P. monodon in India (Bhattacharya et al. 2000) and L. vannamei in Brazil (Rodrigues de Melo et al. 2011). In this study, the isolates derived from the P. monodon sample had the highest resistance levels against STR (73 %) and RIF (65 %) and also exhibited resistance to the maximum number of antimicrobial agents (8/10), whereas those from M. rosenbergii showed an opposite pattern. In addition, the highest percentage of SPT resistance was detected from the isolates of E. carinicauda origin (52 %), which was notably higher than those from the other three samples (11-21 %). Moreover, the resistance to STR and RIF was the second abundant in the E. carinicauda strains, when compared to the other samples. To our knowledge, the comparative antibiotic resistance patterns of V. parahaemolyticus isolates have not been described in the four species of shrimps thus far.
Moreover, this study constituted the first investigation of V. parahaemolyticus strains originated from M. rosenbergii and E. carinicauda.
Multidrug resistance (MDR) was defined as non-susceptibility to at least one agent in three or more antimicrobial categories (Thapa Shrestha et al. 2015). MDR phenotypes have been observed in Vibrios derived from L. vannamei and P. monodon (de Melo et al. 2011;Albuquerque Costa et al. 2015). In this study, approximately 15.3 % of the tested isolates exhibited MDR phenotypes, which varied depending on the shrimp samples. The strains derived from the P. monodon sample showed the highest occurrence of MDR (24 %), followed by 19, 12, and 6 % from the E. carinicauda, L. vannamei, and M. rosenbergii samples, respectively (Fig. 3). Taken together, our data revealed the most prevalent antibiotic resistance among the V. parahaemolyticus isolates originating from P. monodon, which could be a result of serious contamination in this sample source. Since P. monodon is cultured in brackish water conditions, it will be interesting to trace back and investigate the possible reasons for the high prevalence of antibiotic resistance in future research.
Tolerance of the V. parahaemolyticus isolates to heavy metals
In this study, based on the antibiotic resistance results, a total of 90 selected antibiotic-resistant V. parahaemolyticus isolates of shrimp origin were further examined for their susceptibilities to heavy metals, including Cd2+, Cr3+, Cu2+, Hg2+, Mn2+, Ni2+, Pb2+, and Zn2+. As shown in Table 1, maximum MICs of 3200 μg/mL for Cd2+, Cr3+, Cu2+, Mn2+, Ni2+ and Pb2+, 800 μg/mL for Zn2+, and 50 μg/mL for Hg2+ were observed, when compared to the quality control strain E. coli K12 (Malik and Aleem 2011). All the V. parahaemolyticus isolates were resistant to Cr3+ and Zn2+, and the majority of them also displayed resistance to Cu2+ (93.3 %), Pb2+ (87.8 %), and Cd2+ (73.3 %). In addition, about 6.7 % of the isolates showed resistance to Ni2+. It has been reported that the Yangtze River Estuary area has suffered heavy-metal contamination, being located in one of the most densely populated and fastest economically developing areas in China (Zhao et al. 2012). Heavy-metal resistance has also been observed in Vibrios isolated from aquatic products and the environment in the Yangtze River estuary in Shanghai (Song et al. 2013). In this study, almost all the isolates were susceptible to Mn2+ and Hg2+, except those isolated from the P. monodon and L. vannamei samples, in which a very low percentage of the isolates (2.2 %) was detected as resistant to these two heavy metals (Fig. 4).
As shown in Fig. 4, the V. parahaemolyticus isolates derived from different shrimp sources had similar heavy-metal resistance patterns, most of which displayed resistance to Cr3+, Cu2+, and Zn2+ (90-100 %). Moreover, about 65.0-96.3 % of the isolates showed resistance to Pb2+ and Cd2+, except for the lower percentage of Cd2+ resistance in E. carinicauda (37.5 %). The results indicated that the sample sources did not appear to greatly impact the major heavy-metal resistance patterns of V. parahaemolyticus. One possibility is that the inappropriate release of industrial wastes may have influenced different aquaculture environments.
In this study, our data indicated that the tolerance to heavy metals was very prevalent in the V. parahaemolyticus strains with more than two antibiotic resistance phenotypes. Industrial pollutants were supposed to enhance the selection for antibiotic resistance and vice versa (Bhattacharya et al. 2000;Baker-Austin et al. 2006;Malik and Aleem 2011). The abundant double-resistant bacteria could be a cause of serious concern due to the potential health impacts of consuming contaminated products (Holmström et al. 2003;Sharma et al. 2007).
Phylogenetic relationships of the resistant V. parahaemolyticus isolates
To track the relatedness of the 90 resistant isolates, we obtained their genome fingerprinting profiles (Fig. 5). Only three isolates could not be examined by the NotI-PFGE analysis in this study. Given the significant difference of a single DNA band in size ranging from 20.5 to 1135 kb on the PFGE gels, cluster analysis of the NotI-PFGE profiles revealed a total of 71 pulsotypes. Five pairs of isolates and one group of six isolates clustered at ≥87 % similarity, which is a cut-off value that has been suggested for use in identifying isolates belonging to the same epidemic strain (Seifert et al. 2005). The majority of the isolates (81.6 %) shared 60-87 % similarity in this study. In addition, all the isolates were assigned into eight distinct clusters, among which the majority of the isolates (89.7 %) into Cluster A to G, whereas nine isolates into Cluster H, which was more distantly related with the formers (Fig. 5). These results demonstrated that the V. parahaemolyticus isolates varied considerably, with remarkable genetic diversity existing in the tested shrimp samples.
Notably, all the isolates originating from the L. vannamei sample fell into Clusters A to C, except one in Cluster D. Similarly, the majority of the isolates (80.8 %) derived from P. monodon were grouped into Clusters A to D, with the remaining isolates belonging to Cluster H. Moreover, nine and two isolates from P. monodon exhibited 100 % similarity and fell into the same pulsotypes of Vpchn00033 and Vpchn00037, respectively. These results indicated closer relationships of V. parahaemolyticus between L. vannamei and P. monodon, when compared to the other samples, in which isolates belonging to seven of the eight PFGE clusters were identified (Fig. 5).
In addition, the isolates with MDR phenotypes, which were derived from all the tested samples, were distributed among the PFGE clusters. Based on the value of Simpson's diversity index (0.9872), these isolates appeared to have the greater diversity in the V. parahaemolyticus population. As described above, the 90 isolates were resistant to Cr3+ and Zn2+, the majority of which also displayed resistance to Cu2+ (93.3 %), Pb2+ (87.8 %), and Cd2+ (73.3 %). The antibiotic and heavy-metal resistance phenotypes were widely distributed among the PFGE clusters with no significant association with the clusters, suggesting that resistance determinants perhaps spread among many genetic lineages within the V. parahaemolyticus population, regardless of sample origin.
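For illustration, the Simpson's diversity index quoted above could be computed from the pulsotype assignments as follows; the unbiased form of the index is assumed here, since the exact formula used is not stated in the text.

```python
# Simpson's diversity index D = 1 - sum(n_i * (n_i - 1)) / (N * (N - 1)),
# computed from the pulsotype label of each isolate.
from collections import Counter

def simpsons_diversity(pulsotype_labels):
    counts = Counter(pulsotype_labels).values()
    n_total = sum(counts)
    if n_total < 2:
        return 0.0
    return 1.0 - sum(n * (n - 1) for n in counts) / (n_total * (n_total - 1))
```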
V. parahaemolyticus harbors two chromosomes (Makino et al. 2003). Mobile genetic elements carrying resistance genes have been identified from the bacterium (e.g., Song et al. 2013), which might be responsible for the large degree of variation in genotypes and resistance phenotypes among the isolates. It will be interesting to elucidate the precise mechanisms underlying the transmission of resistance determinants in V. parahaemolyticus population in the future research.
Conclusions
In this study, a total of 400 V. parahaemolyticus isolates from commonly consumed fresh shrimps in Shanghai fish markets, China in 2013-2014 were isolated and characterized. Our data revealed an extremely low incidence of pathogenic V. parahaemolyticus carrying the two genes coding for the major virulence factors (tdh and trh, 0.0 and 0.5 %). However, high levels of antibiotic resistance were observed among the isolates against ampicillin (99 %), streptomycin (45.25 %), rifampicin (38.25 %), and spectinomycin (25.50 %). Moreover, approximately 15.3 % of the isolates exhibited MDR phenotypes. In addition, tolerance to heavy metals of Cr 3+ and Zn 2+ was observed in 90 antibiotic-resistant isolates, the majority of which also displayed resistance to Cu 2+ (93.3 %), Pb 2+ (87.8 %), and Cd 2+ (73.3 %), when compared to E. coli K12. The PFGE-based genotyping of these isolates revealed a total of 71 pulsotypes, demonstrating remarkable genetic diversity of V. parahaemolyticus population in the shrimp samples, with the co-existence and wide distribution of a number of resistant isolates. The results also revealed the most contaminated reservoir of MDR V. parahaemolyticus in the P. monodon sample. The data in this study will refine our grasp of V. parahaemolyticus molecular ecology in aquaculture products and enable appropriate food-borne disease-control in aquaculture industry.
No evidence of attentional bias toward angry faces in patients with obsessive-compulsive disorder
Objective: Although attentional bias (AB) toward angry faces is well established in patients with anxiety disorders, it is still poorly studied in obsessive-compulsive disorder (OCD). We investigated whether OCD patients present AB toward angry faces, whether AB is related to symptom severity and whether AB scores are associated with specific OCD symptom dimensions. Method: Forty-eight OCD patients were assessed in clinical evaluations, intelligence testing and a dot-probe AB paradigm that used neutral and angry faces as stimuli. Analyses were performed with a one-sample t-test, Pearson correlations and linear regression. Results: No evidence of AB was observed in OCD patients, nor was there any association between AB and symptom severity or dimension. Psychiatric comorbidity did not affect our results. Conclusion: In accordance with previous studies, we were unable to detect AB in OCD patients. To investigate whether OCD patients have different brain activation patterns from anxiety disorder patients, future studies using a transdiagnostic approach should evaluate AB in OCD and anxiety disorder patients as they perform AB tasks under functional neuroimaging protocols.
Inclusion and exclusion criteria
The inclusion criteria for participants with OCD were: 1- age between 18 and 65 years; 2- a primary diagnosis of OCD according to DSM-IV-TR, confirmed by the Structured Clinical Interview for DSM-IV Axis I disorders (SCID-I; First et al., 1995); 3- a Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) (Goodman et al., 1989) score greater than 16; 4- if medicated, medication stable for the previous twelve weeks. The exclusion criteria were: comorbidity with schizophrenia or bipolar disorder.
Gender, comorbidities and medication
Although the proportion of male participants was almost half that of females in our sample, this difference was not statistically significant (chi-square = 3.00, p-value = 0.083). Moreover, psychiatric comorbidity was a factor that did not affect our results: having comorbid depression, an anxiety disorder (other than GAD) or ADHD (the most frequent comorbidity in this sample) did not contribute to AB scores. Finally, medication status could also influence our results, but patients had been stable for at least 12 weeks.
Task -Dot Probe Paradigm
Each trial began with a fixation cross ("+") presented in the center of the screen for 500 milliseconds (ms). Next, two facial stimuli were shown simultaneously (500 ms), followed by a probe (< or >). Participants had to press the right mouse button when the probe was ">" and the left button when it was "<". The probe appeared in the location previously occupied by one of the face stimuli, at the top or bottom of the screen. The trials varied randomly between congruent (probe in the location of the threat stimulus), incongruent (probe in the location of the neutral stimulus) and neutral trials (probe behind either of two neutral stimuli). Of the 120 trials presented, 80 were threat-neutral (40 congruent and 40 incongruent) and 40 were neutral-neutral. Response time (RT) was measured in ms and attentional bias (AB) was calculated as the difference between the RT on congruent and incongruent trials for each participant.
The response latencies provide a 'snapshot' of the distribution of the participants' attention, with faster responses to probes in the attended location than in the unattended location. Thus, AB toward angry faces is revealed when participants respond faster to probes that replace the threat-related stimuli rather than the neutral stimuli. To this end, AB was calculated by subtracting each individual's average congruent RT from their average incongruent RT.
Statistical analysis (preprocessing of the data)
Regarding AB scores, we ran the usual data-cleaning steps, as described below. Initially, we removed trials with incorrect responses, response times (RT) shorter than 150 ms or longer than 2000 ms, and RTs more than ±2.5 standard deviations (SDs) from the participant's mean. Neutral-neutral pairs were not included in the analysis. Probes appeared with equal probability at the location of the threat and neutral stimuli (congruent and incongruent, respectively) and, to measure AB, we subtracted congruent from incongruent trial RTs.
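As a minimal sketch only, the cleaning steps and AB score described above could be implemented as follows, assuming trial-level data in a pandas DataFrame with hypothetical columns participant, condition ('congruent', 'incongruent' or 'neutral'), correct (bool) and rt (ms).

```python
# Data cleaning (accuracy, 150-2000 ms window, +/- 2.5 SD per participant) and
# AB score = mean incongruent RT minus mean congruent RT for each participant.
import pandas as pd

def ab_scores(trials: pd.DataFrame) -> pd.Series:
    t = trials[trials["correct"] & trials["rt"].between(150, 2000)].copy()
    t = t[t["condition"] != "neutral"]                  # drop neutral-neutral pairs

    stats = t.groupby("participant")["rt"].agg(["mean", "std"])
    t = t.join(stats, on="participant")
    t = t[(t["rt"] - t["mean"]).abs() <= 2.5 * t["std"]]

    means = t.pivot_table(index="participant", columns="condition", values="rt")
    return means["incongruent"] - means["congruent"]    # positive = bias toward threat
```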
Beyond ‘Safeguarding’ and ‘Empowerment’ in Hong Kong: Towards a Relational Model for Supporting Women Who Have Left their Abusive Partners
This project explores the post-separation needs of Chinese women in Hong Kong who have left their abusive partners and how these needs might be addressed. The project aims to provide insights for improving the local domestic violence service, whose main focus is on crisis intervention. Cooperative Grounded Inquiry (CGI) was developed as a novel participatory action research (PAR) methodology for fostering collaboration between social work practitioner-researchers and women service users. Its purpose is to generate useful knowledge and provide support for abused women and their children. The project involved 7 Hong Kong Chinese women as participant-researchers. The inquiry group met at least once a week for 6 months to explore the post-separation needs of the women and their children, and to implement and evaluate the practices/services developed through this project together in a participatory manner. Women participants identified the problems of doing either 'victim' or 'survivor' that respectively underpin the 'safeguarding' and the 'empowerment' models, and they developed practices for 'doing being oneself' beyond the victim-survivor dichotomy. This paper presents the changing self-narratives of women participants over the research project, from victimhood to survivorhood and from survivorhood to survivor-becoming. These narratives demonstrate the importance of safeguarding women's space for undertaking symbolic action and of empowering them through using vocabulary that can help them describe themselves and their experiences differently from mainstream discourses. Women's narratives highlight the existing 'planetary difference' between the safeguarding model, which treats women as helpless and vulnerable and in need of external support, and the empowerment model, which treats them as powerful, resilient and with resources and solutions to problems. The study transcends the victim-survivor dichotomy and service models by proposing an alternative relational model that emphasises power sharing in making sense of abusive experiences and finding one's own voice in a supportive community.
Introduction
The Tin Shui Wai tragedy in 2004 saw a mother, Kam Shuk Ying, and her twin daughters stabbed to death by her abusive husband. The tragedy unfolded even though Kam Shuk Ying and her friend had routinely reported his abuse to police. Public feeling was one of shock and also alarm at how fatal domestic violence could be. The government came under immediate pressure to deal with domestic violence more effectively and in 2005 the first domestic violence prevalence study was published by Hong Kong's Social Welfare Department (Chan 2005). The study showed that more than 1 in 7 spouses have been battered by their intimate partners at some point in their lives. More than 1 in 5 households have spouses who have been battered by their partners. In Chan's study, the Conflict Tactics Scale 2 (CTS2) and the Parent-Child Conflict Tactics Scale (CTSPC) were used to estimate the annual and lifetime prevalence rates of different types of domestic violence. Around 1 in 10 of the interviewed spouses had committed or experienced physical assaults in their intimate relationship, of which about 4% led to physical injuries. The same study shows that over 50% of the interviewed spouses had committed or experienced psychological aggression in the spousal relationship. Around 6% of the interviewed spouses had either committed or experienced sexual coercion. Although the gender difference is not large in Chan's study, partially due to the tendency of the CTSs to produce gender-neutral results (see Johnson 2006 on debates around gender symmetry and the conflict tactics scales), it still shows that women are more likely to be affected by domestic violence (by spouse: male = 12%, female = 15.7%, p value = 0.000*; by respondent: male = 14.9%, female = 15.3%, p value = 0.746). The gender asymmetry in intimate partner violence is further confirmed if we look at the cases reported to the government systems, mainly the police, social services and hospitals. Female victims consistently account for more than 80% of the total number of intimate partner violence cases (it used to be called spousal violence in Hong Kong, and the term was changed to reflect that violence also happens between intimate partners who are not spouses of each other) in the past 10 years in Hong Kong (the Central Information System of Battered Spouse and Sexual Violence, Social Welfare Department of HKSAR).
Concerns over service effectiveness and accountability continued to draw the attention of the Hong Kong public as the inquiry into the Tin Shui Wai tragedy unfolded. Repeated calls for help, both recorded and unrecorded, by the migrant mother of two revealed there had been a lack of adherence to the guidelines on the part of social service professionals (Review Panel on Family Services in Tin Shui Wai, 2004). Research also revealed patriarchal values embedded in the Police Force, as well as the discriminatory attitudes of social service professionals towards new immigrants from China (Hong Kong Christian Service 2004;Wu 2004). The strong public demand for effective social intervention in the field of domestic violence (Review Panel on Family Service in Tin Shui Wai 2004) resulted in a series of changes such as revisions to the procedural guidelines for handling cases of domestic violence and child abuse, streamlining cross-departmental coordination, setting up a central information system and shortening the application period for injunction orders. While procedural and system changes are critical for protecting 'victims' of abuse, these changes also seemed to focus primarily on 'crisis intervention' with little attention to the preventive and supportive work that is meant to be part of Hong Kong's three-pronged approach to tackling domestic violence (Legislative Council Secretariat 2013). A brief analysis of the working principles of the Procedural Guidelines for Handling Cases of Intimate Partner Violence (revised 2011) (Fig. 1) clearly shows the tendency to focus on crisis intervention.
We can see that the Guidelines' provisions for facilitating multidisciplinary collaboration are set mainly around crisis intervention, targeting risk reduction and avoidance of re-victimization. The involvement of victims in assessment and action planning is restricted to 'direct communication'. Once abused women are rehoused, either in private rental housing or public housing, cases are more likely to be terminated for the reason that 'spouse battering elements have subsided'. Abused women who have left the matrimonial home and petitioned for divorce have become the most unattended in Hong Kong's intimate partner violence services. Harmony House, the first shelter in Hong Kong, was aware of the lack of support for abused women who had separated from abusers. They developed 'after shelter services' to take care of the emotional and adaptation needs of women who had left the shelter, including those who returned to the matrimonial home and those who left it permanently. These good intentions were not effectively translated into practice, however, owing to the government's austerity measures (Harmony House 2007). While some project-based initiatives for rebuilding self-esteem and self-confidence are identified (Hong Kong Family Welfare Society 2017), government-funded long-term post-separation services for abused women and their children are still virtually absent in Hong Kong. The absence of post-separation services for abused women in Hong Kong reflects the assumption that women's post-separation lives are problem-free; that they are strong enough to handle 'their problems'; and that having left their abusive partners they are survivors rather than victims.
This study is concerned with the post-separation needs of abused women. It aimed to (1) understand the needs of abused women who have left the abusive relationship; (2) co-design a social work practice/service that could meet the identified need(s); and (3) run and evaluate the practice/service developed in this research with all the research participants. The process helped unpack women's experiences of living with and without the perpetrators of abuse, a process that has been understood in association with women's conceptions of choice, gain and loss of personal agency, as well as the availability of social and financial support (Ben-Ari et al. 2003;Dobash and Dobash 1979;Kirkwood 1993). Narratives and pictures co-produced with women participants problematize the 'either victim or survivor' dichotomy evident in the design of Hong Kong's intimate partner violence service, as well as elsewhere in the world (see Dunn 2004;Thapar-Björkert et al. 2016). Drawing on these data, this paper engages with the debate about the appropriateness of 'victimhood' or 'survivorhood' in representing abused women's lived experiences. The former often underpins 'safeguarding' work and the latter is used for promoting 'empowerment' practices. This paper seeks an alternative in making sense of abused women's agency and vulnerabilities. By looking at how women cope with post-separation challenges and seek ways of 'doing' daughters, mothers, sisters, activists and citizens differently (Gueta et al. 2016;Katz 2015;Zufferey et al. 2016), we have found that women engage in identity work that 'troubles' the 'either victim or survivor dichotomy'. 'Doing being' is central to the analysis of the identity work carried out by women participants as it links identities to social behaviours and social practices performed by the actors. It sees them as products and constituents of social orders (Sacks 1985).
The Discourse of 'Either Victim or Survivor' and its Limitations
Victimhood was extensively employed by feminists in the U.S. for mobilising civil society towards bringing domestic violence to the public agenda (Dunn 2004). The 'battered women movement' in the US started out from public tolerance and silence towards the problem, and the women it aided were 'beaten women, whether at home or on the run, need much and can give little' (Tierney 1982, p. 212). Experiences of abused women in the early years of the movement supported the construction of pure victimhood, where women had low personal agency to resist the violence against them and needed external support for remediating the problem. This particular form of 'blameless' and 'innocent' victim identity served as a 'politicized collective identity' for mobilizing public resources and brokering public sympathy for raising the profile of this emerging social problem of 'wife battering' (Dunn 2004;Nissim-Sabbat 2009;Thapar-Björkert et al. 2016;Tierney 1982).
Soon after its emergence in the mid-1970s and 1980s, the pure victim identity was criticised for being too reductive and failing to capture the complexity of abused women's experiences. Dunn (2004) analysed the victimizing stories told by battered women and the media, and discovered four types of victims: 'precipitating victims', 'ideal victims', 'stigmatized victims', and 'heroic victims'. Leisenring (2006) also discussed the different victim claims made by abused women themselves, consisting of the 'pure victim' claims, the victim empowerment framework, the responsibility claims, and victim-survivor claims. These studies have challenged the monopoly of the traditional weak and blameless 'ideal victim/pure victim' identity in understanding abused women's lived experiences. Some studies carried out in the U.K. and the U.S. also found that the victim identity can cause harm to women's post-separation recovery and self-efficacy as it jarred with the self-perception of abused partners (Donovan and Hester 2010) and contributed to a victim mentality (Leisenring 2006). Dunn (2004) further argued that 'victimhood', to be justified in western culture which emphasizes autonomy and agency, inevitably connoted a power differential between the sympathizers and the sympathizees (p.239). In Chinese culture, making victim claims can be stigmatising because Chinese family virtues would require moral women 'not to spread the family shame' (家醜不出外傳), including violence against them. Women can be blamed for breaking and shaming their own and their husband's families. Despite the cultural differences, it is conceivable that women in Hong Kong, the UK and the US can feel reluctant to identify themselves as victims of abuse, or to label their experiences as abuse, because of the potential damage that the victim identity can do to their own self-image (Muehlenhard and Kimes 1999;Leisenring 2006;Donovan and Hester 2010;Brosi and Rolling 2010).
To address the stigmatizing dimension of 'victim' identity, a discourse of survivorhood has been created that emphasizes women's resistance, their ability to cope, and the choices they made for surviving the abuse (Hydén 1999;Herbert et al. 1991;Davis, 2002;Johnson 1992). The focus of intimate partner violence studies has also shifted from 'staying, leaving, and returning' to 'resisting, coping, and surviving' (Leisenring 2006;Humphreys 2000). The recognition of strengths and resistance is also a sign of moving away from victimhood (Glumbíková and Gojová 2020) and the start of post-abuse recovery (Brosi and Rolling 2010). Women's strengths and capabilities are recognized and made explicit through the construction of survivorhood. Their positional knowledge has also gained more appreciation and recognition in policy and service design (Mullender and Hague 2005;Beresford and Croft 2000).
When the rise of survivorhood marginalises victimhood, however, survivorhood itself can become stigmatizing (Sweet 2019). While victimhood might restrain women from articulating their experiences and personhood differently from being blamelessly weak and powerless, it can enable women to explicate their needs and garner sympathy and assistance (Leisenring 2006). This enabling and restraining property of victimhood creates a paradox where survivorhood and survivor-based practice are developed as a counter narrative or practice against victimhood. Women who express their needs, sufferings and life challenges are considered inferior to the strong and fearless 'survivors'. They are viewed as 'lingering' in the relationship with the abusers and as personally unwilling to end the victimization. As such, the monopolisation of survivorhood is as problematic as the monopolisation of victimhood, revealing limitations of the 'either victim or survivor' dichotomy.
Women's Experiences of Leaving as a Critique of the Dichotomy
Women's experiences of 'leaving' the abusive partners further highlight the problematic nature of the 'either victim or survivor' dichotomy. Leaving is not a clear-cut process of separation marked by moving out or divorce, but a back-and-forth process that involves loops of staying-leaving-returning (Kirkwood 1993). In cases of 'successful leaving', each loop of staying-leaving-returning is carried out on the basis of the strengths gained during previous loops. Leaving is therefore a continuous process of intertwined choices and entrapment, and of resisting, coping and subordinating. Even abused women who 'successfully' leave their abusive partners suffer extensively in the help-seeking process, for example through bureaucratic welfare systems and insensitive police responses (Lutenbacher et al. 2003;Wolf et al. 2003). The disinterest of helpers and difficulty meeting their financial, housing and emotional needs are also factors that contribute to women's feelings of re-victimization, decisions to stop fleeing, and returning to the relationship (Wuest and Merritt-Gray 1999, p. 112). While being punched, slapped, terrorized with weapons, stalked and humiliated in public may not be physically fatal, the history of these 'traumas' remains influential in their lives and wellbeing after separation. This is shown in psychological studies of post-traumatic growth, which identify women's ongoing suffering from past abuse as post-traumatic stress (Joseph and Linley 2008).
With a better understanding of women's experiences of staying, leaving and returning to the abusive partners, 'victim-survivor' or 'victim/survivor' is now seen more often in the literature as a linguistic response to the problematic 'either victim or survivor' dichotomy and as an acknowledgement of the complexity of women's experiences of abuse. While the hyphen space employed in the existing literature helps acknowledge the uncertainty or the hybridity of abused women's experiences, we still know little about how women negotiate their identities around the dominant discourses of victimhood and survivorhood. For women who have left the perpetrators and become labelled as 'survivors' in Hong Kong, how do they negotiate their identities to express their needs and sufferings, and to rediscover and re-appropriate their strengths? How do negotiated identities help women who have experienced intimate partner violence face their post-separation challenges? This paper presents the narratives, textual and pictorial data co-created with women who have left their abusive partners and unpacks women's identity work as negotiated in the context of a participatory action research (PAR) project.
Methodology
Cooperative Grounded Inquiry (CGI) (Kong 2016), a new methodology belonging to the larger umbrella of participatory action research (PAR) (Heron and Reason 1997), was developed specifically in this project for fostering collaboration between social work practitioner-researchers and service users to build useful knowledge and services grounded in each other's positional knowledges. The project involved women who had left their abusive partners, together with their teenage children. Participants sought to make sense of and find ways to meet their post-separation challenges. These experiences suggested that women suffer from the categorical application of 'victim' or 'survivor' identities imposed onto them by social welfare systems, their friends and colleagues. The lack of space for negotiating these identities led to a strong sense of helplessness and (professional and personal) relationship breakdown. This project identified ruptured/troubled identities of abused women in the post-separation stage, and provided space for new identities to emerge and serve as a critical voice for expanding the restrictive 'either victim or survivor' dichotomy (Gueta et al. 2016).
Data Collection, Analysis and Action
This CGI involved a social work practitioner-researcher (the author), women who have left their abusive partners, and their teenage children, in weekly meetings (around 6 h per meeting). These lasted for 6 months and involved designing, delivering and improving post-separation services. We applied the modified reflection-action-reflection cycles (Fig. 2), as part of the CGI approach to make sense of post-separation challenges (propositional knowing) and to formulate and deliver appropriate services/practices for addressing them (practical knowing). The different forms of data, including conversations, observational data and interactive data generated in the problem-solving processes (experiential knowing) were captured by photos, drawings, reflective logs, fieldnotes and videos and audio recordings (presentational knowing) produced individually and collectively. These data were presented back to the inquiry group for reflection and collective analysis (propositional knowing). The analysis of data was aided by constant comparative analysis (Glaser 1978) which is a technique borrowed from Grounded Theory Methodology to help co-inquirers compare and contrast experiences, emotions, behaviours, interactions and attitudes of each other. It allowed us to develop an understanding of the diverse experiences of women in coping with their post-separation challenges. This paper presents only the findings on how women participants' identity work was carried out in this project while details of the methodology and the other findings have been presented elsewhere (Kong 2016;Kong 2018).
CGI demands the cultivation of reflexivity in each co-inquirer so that we could identify how the construction of oneself affects the construction of understanding about the 'outside world' and our interactions with 'others'. This variation of reflexivity is termed 'relational reflexivity' (D'Cruz et al. 2007). To achieve this, a reflective session was timetabled in each meeting. Through writing reflective notes and ongoing analysis of personal performances in different social occasions, co-inquirers acquired deeper awareness of themselves in relation to others. As the initiating researcher, I also kept fieldnotes and reflective notes throughout the inquiry. These notes highlighted my dual insider-outsider identity as a 'historically disenthralled sister' that shaped both my self-disclosure and those of others (Author 2015).
Women Participants
The practitioner-researcher recruited the women participants from a local survivors' group. Seven women participants (the practitioner-researcher, HL, NF, PF, YY, KW and YT) were officially involved in this inquiry (Table 1). For reasons of safety, we recruited women who had separated from their abusive partner for at least a year and a half and who had no record of physical violence for at least half a year at the point they joined the group. In this project, 'separation' does not reflect the marital status but the physical distance that women participants had from the abusers by living apart from them and being safely rehoused in private/public housing estates. This recruitment strategy helped address the service gaps present in Hong Kong's domestic violence service (the lack of post-separation services).
All women participants had experienced both physical and psychological abuse for at least 5 years, while most of them, except NF and the researcher-participant, had children aged between 12 and 17 when the inquiry began. Among women participants, one was undergoing divorce proceedings and two were still fighting for custody during the inquiry. The inclusion of both local (n = 2) and mainland Chinese (n = 5) women in this sample also enabled us to delineate the impact of migration on women's search for social recognition and their journey of recovering from the trauma of abuse (see Reconstructing the Survivor Identity). The age difference among women participants, ranging from their late twenties to mid-sixties, created unique group dynamics that resembled Chinese filial piety and respect for the seniors (Kong 2018). Ethical approval was obtained from the University of York before the enquiry commenced.
Findings
Troubled 'Pure Survivor': 'You Cannot Leave Us Uncared'
The group meetings at the first phase of the inquiry were filled with complaints about the 'poorly performing' domestic violence (both intimate partner violence and child abuse) service in Hong Kong. The assumption that abused women were 'problem-free' or able to 'stand on their own two feet' after leaving their abusive partners was contested by women participants' experiences in their post-separation lives. All women participants agreed that they had been living with the impacts of intimate partner violence, including poverty, isolation, traumas, sadness, and unfair treatment imposed by abusers and the social care system and family court. As women talked about their own sufferings in the inquiry group, they found these resonated with the experiences of other group members and a sense of solidarity began to develop in the group (Kong et al. 2020). Phrases like 'we need help', 'you cannot leave us uncared' and 'if I can do it myself, then I won't…' were frequently used by women participants to assert their needs for support and care, and to foster empathetic forms of solidarity that helped them coordinate with one another to produce change (Banks 2014). These shared experiences of suffering created a strong rupture with the 'pure survivor' identity that had been assumed, reinforced and prompted by the domestic violence service in Hong Kong. The dissonance between the pure survivor discourse and the lived experiences of abused women triggered the re-engagement with the 'victim' identity by women themselves in order to explicate their need for support and care by the state and individuals. One of the many sufferings that continued to affect women after their separation from the perpetrators is the long-term physical impact of intimate partner violence, e.g. disposition of bones, headache, dizziness and poor health. Women often talked about their emotional vulnerability, such as feeling depressed, angry or agitated, which did not cease with separation or over time.
'Our sisters [women who have experienced domestic violence] just can't be happy. Every one of us was the same. When we had just left the bad guy (abusive partner), we were very unhappy. Even though people around us were celebrating the Lunar New Year we were impervious to the vibrant atmosphere. You just can't be happy.' said YY. HL added, 'I had exactly the same experience. I met NF for the first time in a Lunar New Year celebration. Sisters in the shelter took me there. I hadn't felt thankful for their kindness; instead, I found them annoying and offensive. I thought, "I am now very depressed, why are you so happy when I am so miserable?" You will feel even worse.' Emotional instability, in the form of sudden outbursts of anger and mood fluctuations, was evident in the inquiry meetings. Displays were particularly heightened when abused women were expected to be 'happy' or 'cheered up' by other 'survivors' in the group when their lived experiences did not resonate with these expectations. Women sometimes reacted to these expectations with withdrawal behaviours or microaggression directed towards other women participants, or even their own children.
'You may not know her temperament. She (one of our participants) scolds and yells at me whenever I can't perform according to her expectation. It is very difficult to stand it. It is stressful.' Said PF. 'I was so sad when I heard my daughter repeatedly calling me "useless"! I locked up myself in the toilet and she came over to check if I were good…I was nearly driven mad, so mad that I was scared of beating her up! I burst into tears and ran away from home… That was at night.' Said YT.
Women participants claimed that emotional disturbances were commonly shared by those who have left the abusive partners and that these firmly stood in the way of their 'recovery' (Abrahams 2007). Women participants also came to see that their long-term exposure to coercive control had undermined their ability to control anger, especially in the face of people's comments and criticisms. Anger, as a way to create psychological distance, is a common and relatively safer way to resist the violence and micromanagement exercised by the abusive partners (Ben-Ari et al. 2003). However, when outbursts of anger become a conditioned response to criticism, they can affect women's abilities to rebuild social networks and intimacy in their post-separation lives. The psychosocial wellbeing of women can also be worsened through financial deprivation, which is a commonly identified consequence of coercive control (Stark 2007;Stark 2013;WHO 2013). The dearth of support and resources continued to undermine women's self-image through shaming and blaming them for 'failing to protect' their children. For example, KW had been relying on food banks for months and expressed a great sense of remorse for her 'incapability' to provide.
'My son and I have been eating instant noodles and canned food for 2 months already. They were all preserved food, just not healthy for a boy in puberty.' KW said.
By viewing themselves as 'victims' of domestic violence in the post-separation context, women participants found the vocabulary to challenge the notion that 'leaving the abuser can cure all the problems of abused women', and to justify the need for post-separation support and care. By drawing on the linguistic resources of 'victimhood' to garner care and support for the currently under-served separated abused women, 'pure survivor' identity was troubled/problematized.
Reconstructing the Survivor Identity: 'Chungsangje' as a Discourse of Rebirth
Re-appropriating the 'victim identity' amidst the overwhelming expectation of becoming 'survivors' seemed to help women participants identify their needs for care and support. The reflection-action-reflection cycle drove the group to further reflect on what action they could take to address these concerns, and it led subsequently to the service responses developed within the inquiry group. These included 'personal problem-solving conferences', 'emotional support' and 'health boosting exercises' run by women participants to serve other abused women. When the strengths of women participants became visible in the collective problem-solving processes, more participants would recall the 'good old days' when they had lived with confidence, dignity, and pride. Most of these positive moments in life had taken place before migrating to Hong Kong. They were times when their qualifications were recognized, their jobs were secure, and their abilities appreciated. PF, YT and YY recognized how their self-worth was undermined through the process of migrating to Hong Kong. A conversation in the 2nd session became a reference point for reclaiming the strength women participants had before entering the abusive relationship. Women participants examined the term 'survivor', which, if translated into Chinese, can carry two different meanings: 'chungsangje' (重生者), which means someone who has died but returned to life with new strengths, and 'hengchuenje' (倖存者), which means someone who has survived a disastrous experience by sheer luck. The latter not only carries a negative connotation of passiveness and powerlessness, it also hints at the fact that women might not be as lucky next time. Women participants rejected the term 'hengchuenje' to describe themselves and the group, and unanimously committed to the term 'chungsangje' to represent the positive personal qualities and strengths rediscovered by living through the trauma. 'Chungsangje' also captures the 'born-die-reborn' sequence of their lived experiences of going through and, most importantly, breaking away from the abusive partner. The establishment and continuous employment of 'chungsangje' as an organizing concept for actions, such as care and service delivery in the group, helped justify and drive the development of new skills for supporting other women who were still trying to break away from the abusive partners or to deal with post-separation challenges outside the inquiry group. The chungsangje identity captured the personal and collective strengths that were grounded in women participants' own histories, cultures and experiences, and increased the linguistic stock for accessing and mobilizing these strengths and skills in making plans and devising action. The new identity supported women participants in undergoing a transition from women being helped to women who were helpers. A need arose in the inquiry group to redistribute responsibilities (and power) among themselves for ensuring each member enjoyed a more equal opportunity to serve and to be served.
'I think it is good that we could start redistributing responsibilities to others (sisters)… in the past I always played the role of organizer… when we were still in the association (a local survivor group), it's always me and SW who did the work and other sisters just came and enjoyed the time. Shopping for groceries was actually a lot of hard work… and they all made excuses not to participate. We should encourage them to participate more next time (when we organize events)' said PF.
In revisiting the positive past of women, the impact of migration on women's sense of entrapment and vulnerability was also revealed. Migration from mainland China to Hong Kong disrupted formal and informal recognition in the lives of women participants. Formal recognition was about having a 'leading role at work', a 'professional qualification', a 'professional role at work' and an 'advanced educational qualification'. Informal recognition meant 'being trusted', 'being appreciated' and 'being included in social networks'. Migration and intimate partner violence aggravated the social disconnect experienced by women participants, depriving them of the social relationships and networks in which their strengths and abilities could be recognised and appreciated. The absence of recognition also impacted women participants' self-image, making it more susceptible to destruction and manipulation by abusive partners.
'At the time I left, I still thought he was so right that I was useless. I was never good for anything. I used to truly believe in such a description about myself…' I said, echoed by all other women participants. (2nd session) Thereby, confidence-boosting words, such as 'you are great', 'all thanks to you, we can successfully…', 'we will make it through', (clapping) and (thumbs up) (from WhatsApp), became one of the commonest responses to participants' commitment and achievement. Where women failed to obtain formal recognition because of new qualifying criteria (specifically English proficiency in a former British colony), women participants relied heavily on confidence-boosting words as a form of psychological compensation. The 'chungsangje' identity recognises the co-existing strengths and vulnerability in women's history, memories and life practices, and creates internal contradictions in their self-understanding that foster reflection on how some personal qualities/ways of living had been framed as either vulnerabilities or strengths by the dominant discourses of individualism, survivorhood and victimhood. These reflections led to the recognition of heterogeneous understandings and experiences that women participants had in performing their identity as 'chungsangje', including the different ways they 'related to the abuser' and 'related to society'. Those who self-identified as chungsangje felt they were more ready to re-engage with and contribute to the community. Instead of personalized and particularized services, they preferred services that promote the general well-being of people (i.e. health-boosting and socializing activities) and those that offer learning opportunities (i.e. community outreach, skill-fostering sessions and health knowledge). However, the way a 'chungsangje' related to the abuser also shaped the extent to which they would like to 'relate to the society'. For example, those who maintained a relationship with their abusive partners were reluctant to turn themselves into public figures when advocating for the rights of abused women. YT, who left the abuser physically but not psychologically, retained a desire to reconnect with the abuser. She then wanted to conceal the socially undesirable behaviours of her ex-husband so as to pave a way for possible reconciliation in the future.
When YY asked if YT still loved her ex-husband, YT defended him and said that their relationship would not have deteriorated if he had never gambled. YT even said she would not have divorced him if he had not initiated it. The 'love', 'desire for reconnection', 'desire for care', etc. were found to be concepts representing YT's way of relating to the ex-husband. These concepts contradicted the 'anger', 'feeling indifferent', 'fear', 'desire for separation', etc. found in many other participants. (Field note, 25 May 2013) The participant HL, who had physically and psychologically left her abusive partner but remained in contact with him as a friend, also considered public action inappropriate because it could jeopardize their friendship.
Identity Construction as an Ongoing Process
Chungsangje identity as understood and performed differently by different women is an illustration of women's identity work during their post-separation lives. By revisiting the mixed and messy experiences in relation to their different social positions, women participants creatively deployed and re-appropriated symbols embedded in victimhood and survivorhood to seek ways to describe, justify and resource their actions in order to resist coercive control and address its impacts (Fontes 2015). Chungsangje, despite diverse understandings and performances in the group, could still risk disempowering women particularly when some participants came to realize that they were 'not yet' their ideal 'chungsangje'.
The construction of 'chungsangje-becoming' arose through participants' realization that 'from victim to chungsangje' was a process in which they might not be able to constantly fulfil all the qualities of a 'chungsangje'.
'I have known a number of sisters who have been living apart from the abusers for more than ten years. However, they are still suffering…they have not yet gone through the thing. It is not a matter of time, but your psychological state…if you can break through the psychological barriers that inhibit you, it is your success. Success does not necessarily mean one in advocating a policy or making changes in services.' said NF. Replied YT, 'I think I am not there yet.' (18th session) The barriers and problems standing in women participants' way of becoming their ideal chungsangje were identified in the group conversations. For example, 'being unable to get over the experiences of being abused', 'not ready to disclose their history of abuse' and 'fluctuating psychosocial status'. Chungsangje-becoming identity helped women participants understand why they generally felt good about their situation but still suffered from occasional emotional fluctuations, depressive moods and social disengagement.
'Yes! I was just like MM… I would say yes at this moment and say no at the next. I just couldn't understand my fluctuations. Honestly, in these 3 years, I have never been back to Tuen Mun where I used to live with the bad guy.' KW said when we were exploring the persisting influence of abuse experiences on us. (19th session) Women participants in the group, who were all living apart from their abusers, recognized that physical separation was effective for removing the cause of victimization and for allowing time for recovery and 'rebirth/chungsang (重生)'. Those who had not left physically would be considered by women participants as 'victims', as the cause of victimization was still present. Alternatively, those who had physically left but remained psychologically affected would be located by the group as 'chungsangje-becoming (重生緊)'. For those who were described by the group as chungsangje, but by themselves as chungsangje-becoming, such as PF and YY, their identity negotiation prevailed throughout the latter half of the inquiry.
'Chungsangje-becoming' was invented by women participants to maintain the strength-based undertone of their identity. At the same time it enabled them to show the need for care and help without returning to victimhood. Chungsangje-becoming perceived themselves as different from 'victims' in terms of financial and psychological stability and the frequency with which their vulnerabilities were on display. In terms of living conditions, chungsangje-becoming were less unsettled, for example by being permanently housed and financially secure. 'Chungsangje-becoming' therefore focused more on hands-on skills training and relevant policy learning in order to prepare them for helping others. For instance, an emotional-support workshop and policy-statement writing sessions were held for polishing skills and increasing knowledge in running services for 'victims' and 'chungsangje-becoming'. The creation of 'chungsangje-becoming' in the inquiry group also suggested that rebirth could be successful only when there were supportive, empathetic others. These others can support individuals in seeking and performing their identities differently from those shaped by the dominant 'victim or survivor' dichotomy.
Discussion
When abused women sought ways of meeting the post-separation challenges in the inquiry group, they also contested the categorical application of victim or survivor identities. Categorical application of either victim or survivor identities clearly failed to capture both women's need for support in their post-separation lives and their eagerness to support other women. Their identity work demonstrated that victim and survivor are not static identities, but clusters of symbolic resources that abused women could draw on for articulating their heterogeneous, messy and dynamic lived experiences and needs. Refuting the categorical application also further problematised the either 'victim or survivor' dichotomy when women participants started to examine the discrepancies between their life challenges, for example mothering in the context of domestic violence (Radford and Hester 2006;Fauci and Goodman 2019) and poverty, and the identity categories imposed onto them by domestic violence services in Hong Kong. The discrepancy forced women to seek and perform alternative identities that could better reflect their life circumstances and which were more useful for re-organising the human and material resources for solving their problems.
Identity work therefore provides an adaptive function for people in a community (Fowler 2010). It challenges and revises the restrictive narratives/discourses that restrain people from understanding and solving their problems. In a similar vein, the different ways of living out the chungsangje identity, in terms of how one relates to the abusive partner and to society, enabled women participants to recognize alternative possibilities of resisting violence and violent husbands other than leaving. These experiences challenge the discourse that privileges 'leaving' among other 'choices' of relating to the abusive partners. In the inquiry, the experiences of HL and YT did not conform to the formula stories of 'pure victim' and 'villain abuser' (Loseke 2001) because both chose to maintain relationships with their ex-husbands, in the form of either friendship or romance, while refusing to be abused again. Social expectations that abused women display a consistent identity, either victim or survivor, could impede both their capacity to make sense of their lived experiences and their agency for adapting to or solving their everyday life problems. Narratives of abused women collected in this study clearly contested the either 'victim or survivor' dichotomy reproduced and reinforced by Hong Kong's domestic violence service framework. The identity of 'chungsangje-becoming' represents the 'victim and survivor' experiences of abused women as well as their aspirations to leave victimhood at the post-separation stage.
Safeguarding and Empowerment Revisited: Towards a Relational Model of Women Support
Safeguarding work is central to protecting 'victims' of domestic violence. It enables 'victims who were previously ignored, belittled, and blamed [to become] assisted, advised, advocated for, sheltered, and supported' (McDermott and Garofalo 2004: 1246). Despite the aim of victim safeguarding to remove the blame women feel for the abuse directed towards them, safeguarding also rests on the concept of 'innocent' or 'blameless' victims that are unable to resist or stop the violence against them. These underlying assumptions of pure victimhood suggest the potential for disempowerment, particularly when safeguarding work is 1) brought closer to risk management and risk reduction (Donovan 2013;Robbins et al. 2014) because of the increasingly managerial culture in social care services (see also Fig. 1) and 2) based on an individualistic understanding of women's wellbeing independent of the wellbeing of their children and their mother-child relationship (Kong 2018). These disempowering elements are evidenced in the procedural guidelines for handling cases of intimate partner violence in Hong Kong (see Fig. 1), which put risk reduction at the centre of safeguarding work and tend to stop the support services once the 'violence subsides'. The Guidelines clearly reveal the government's expectation that women should be able to deal with their problems once the source of oppression (the perpetrators) is removed from their lives. This expectation on women explains the lack of government support available to women who have left their abusive partners, even though long-term physical, psychological, social and financial impacts are well documented in WHO's multi-country study (2013). This approach to adult safeguarding is in contradiction with women participants' experiences as 'chungsangje' and 'chungsangje-becoming': women's resistance, strengths, vulnerabilities and needs for help co-exist and shift over time in response to the emerging challenges in their post-separation lives. Identity work is therefore a continuous performance of self that helps justify and reorganise relationships and life practices so that women can garner the support/help needed from others and utilise personal and public resources to solve their problems.
As an alternative to safeguarding, women's empowerment has been considered an important preventive and supportive model for tackling intimate partner violence. It aims at addressing the root causes of gender inequality and the physical, psychological, social and financial impact that violence could have on women (Tiwari et al. 2005;Tiwari et al. 2012;Hester, 2013). While the UK has been through waves of feminist movements to challenge patriarchy, the women's movement in Hong Kong began by rejecting the western import of gender equality that could be 'threatening [to] the integrity of the family or trampling on men' (p.104). These early years of the women's movement shaped empowerment practices, making women in Hong Kong focus on personal skill enhancement and individual capacity building (Ibid). This focus aligns very well with other empowerment models practised with Hong Kong Chinese abused pregnant women (Tiwari et al. 2005) and community-dwelling abused women (Tiwari et al. 2012). These seek to 'engage with individual abused women to empower them and link them to community services, with ongoing support, informal counselling, or both as required' (Ibid: 537). The individualistic undertone of empowerment practices in Hong Kong persists and is reconfirmed in the Strategies and Measures in Tackling Domestic Violence in Selected Places published by the Hong Kong Legislative Council Secretariat (Lee 2008:35): 'Empowerment and a victim-centred approach: services must ensure that victims identify and express their needs and make decisions in a supportive and non-judgemental environment, that victims are treated with dignity, respect and sensitivity; and promote service-user involvement in the development and delivery of the service.' The combination of empowerment with a victim-centred approach, adopted by the HKSAR government, reflects the assumption that women are autonomous individuals who can make rational decisions, act in their best interest and be themselves when they are not coerced by another person.
The Cartesian model of self that underpins the empowerment models in Hong Kong, however, contradicts abused women's experiences. Women make sense of their reality and life preferences in the context of relationships. For example, in this project, women's shifting relationships with their children, the ex-partners and other women participants create and limit the space in which women's sufferings and strengths can be recognised, validated and acted upon. To re-create a space for recognising marginalised stories and hence identities, women participants undertook identity work that problematised the categorical application of either victim or survivor onto them. They creatively constructed culturally more appropriate identities that enabled them to express both strengths and vulnerabilities. The resulting identities would often offer a revised socio-relational space in which women's marginalised experiences could be recognised, validated and acted upon as resources for solving emerging life challenges. Identity work, therefore, is not the project of an individual seeking a true/authentic self but a 'relational project' (Combs and Freedman 2016) of reconstructing relationships. It can be among women themselves and between women and significant others, such as their children and abusive ex-partners. The relational approach emerging from the project is about acknowledging the fluidity and multiplicity of identities performed by abused women at different times and in different spaces. It is to see identity work as a crucial practice for bringing women's marginalised stories to the surface and re-organising social relationships in ways that address power differences, such as those between sympathizees and sympathizers, dependents and independents, and victims and survivors. Identity work can be empowering only when we see it as a relational project, and when we cultivate supportive, empathetic and egalitarian relationships (see Kong 2018) for it to take place. This study therefore suggests a new service direction for supporting women who have left their abusive partners. That is, investing in and cultivating social relationships that enable women to revisit, reappraise and re-appropriate their experiences to cope with their challenging post-separation lives and to live out their preferred identities as a person beyond victim or survivor (Nissim-Sabat 2009).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
2020-08-04T14:26:31.280Z
|
2020-08-04T00:00:00.000
|
{
"year": 2020,
"sha1": "bfadf7f99e04116ec2ca598187ceea24f00a8874",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10896-020-00185-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "bfadf7f99e04116ec2ca598187ceea24f00a8874",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
2594142
|
pes2o/s2orc
|
v3-fos-license
|
Dislodgement of circular mapping catheter electrode in the left atrium: A near miss
Introduction Use of a circular mapping catheter (CMC) in pulmonary vein (PV) isolation procedures is standard practice and generally considered to be safe in patients with atrial fibrillation (AF). However, we report a case where a metallic electrode displaced from a distal CMC was seen free-floating in the left atrium (LA) on fluoroscopy. Unexpectedly, the electrode traveled retrograde and became lodged in a terminal branch of the right inferior pulmonary vein (RIPV).
Introduction
Use of a circular mapping catheter (CMC) in pulmonary vein (PV) isolation procedures is standard practice and generally considered to be safe in patients with atrial fibrillation (AF). 1 However, we report a case where a metallic electrode displaced from a distal CMC was seen free-floating in the left atrium (LA) on fluoroscopy. Unexpectedly, the electrode traveled retrograde and became lodged in a terminal branch of the right inferior pulmonary vein (RIPV).
Case report
A 66-year-old Chinese man with paroxysmal AF underwent isolation of all 4 PVs using a cryoablation balloon. All 4 PVs were successfully isolated. A CMC (AFocus II; 7F, 15 mm diameter; St. Jude Medical, St. Paul, MN) was placed through a steerable sheath (FlexCath Advance; 12F inner diameter; Medtronic, Minneapolis, MN) to confirm PV isolation at the end of the procedure. Upon removal of the CMC from the LA with significant exertion owing to resistance, a metallic electrode that had become displaced from the CMC was seen free-floating within the LA on fluoroscopy (Figure 1A-D, Supplemental video 1). This catheter was removed and the absence of the distal pole was observed. Potential for cerebral embolization was of utmost concern and the patient was counseled on options and risks. The patient agreed to proceed with urgent cardiac surgery to retrieve the electrode. While we were waiting for the operating theatre to be ready, the electrode surprisingly traveled retrograde and became lodged in a terminal branch of the RIPV, as seen on fluoroscopy (Figure 1E, F, Supplemental video 2) and computed tomography angiography (Figure 1G, H). Repeat imaging during 12-month follow-up confirmed that the electrode had not migrated.
Discussion
A variety of complications have been described during AF ablation. 2 Wu and colleagues 3 reported CMC entrapment in the mitral valve apparatus in a patient during AF ablation. The case of catheter fragments being displaced and retained in patients is not a commonly reported occurrence. The case reported by Calvo and colleagues 4 is similar, except that it involved a proximal electrode from a different CMC (Reflexion Spiral, St. Jude Medical). 4 The floating metallic electrodes did not induce serious complications because they lodged in a terminal branch of the RIPV.
Catheter fragment displacement involves the important sheath-electrode relationship during catheter removal. Some combinations of CMC and sheath design may be more prone to this complication and warrant careful removal of the CMC. Particular care should be given when the proximal and distal portions of the CMC interface with the sheath tip. The mechanism by which the dislodged electrode ended up in the RIPV is unclear, but it may be owing to the weight of the catheter fragment and the posterior anatomy of the RIPV. This may allow for retrograde travel into the RIPV as opposed to the more deleterious route into the left ventricle or aorta. We considered this incident a near miss with potential for systemic embolization, and the appropriate regulatory authorities have been notified.
KEY TEACHING POINTS
It is important to be aware of possible complications involving electrode dislodgement occurring during circular mapping catheter removal.
Serial imaging should be performed to monitor location of retained fragments while preparing for possible intervention.
Given the risk of distal embolism, careful consideration should be given to urgent intervention to remove a mobile, retained foreign object.
|
2018-04-03T01:01:39.259Z
|
2017-07-25T00:00:00.000
|
{
"year": 2017,
"sha1": "e8c56c3f5db3f8aa9c913f4a5906fc65eaf7d163",
"oa_license": "CCBYNCND",
"oa_url": "http://www.heartrhythmcasereports.com/article/S2214027117301057/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8c56c3f5db3f8aa9c913f4a5906fc65eaf7d163",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
270794757
|
pes2o/s2orc
|
v3-fos-license
|
Observation of discrete-light temporal refraction by moving potentials with broken Galilean invariance
Refraction is a basic beam-bending effect at the interface between two media. While traditional studies focus on stationary boundaries, moving boundaries or potentials could enable new laws of refraction. Meanwhile, a medium's discretization plays a pivotal role in refraction owing to the breaking of Galilean invariance in discrete-wave mechanics, making refraction highly dependent on the moving speed. Here, by harnessing a synthetic temporal lattice in a fiber-loop circuit, we observe discrete temporal refraction by a moving gauge-potential barrier. We unveil the selection rules for the potential moving speed, which can only take an integer value v = 1 or a fractional value v = 1/q (odd q) to guarantee a well-defined refraction. We observe reflectionless/reflective refraction for the speeds v = 1 and v = 1/3, transparent potentials with vanishing refraction/reflection, refraction by a dynamically moving potential, and refraction in the relativistic Zitterbewegung regime. Our findings may find applications in versatile temporal control and measurement for optical communications and signal processing.
When the interface or potential moves on the lattice with a non-relativistic speed v (at the group-velocity scale), the discrete refraction becomes highly dependent on the moving speed [29][30][31][32][33], in contrast to the continuous refraction cases. The reason behind this is that refraction analysis at a moving interface involves a change of reference frame, i.e., a Galilean transformation from the laboratory to the moving frame where Snell's law is applicable. Here the Galilean rather than the Lorentz transformation is applied, considering the non-relativistic moving speed v ≪ c. One striking effect of space discretization in non-relativistic wave mechanics [34][35][36] is the breaking of Galilean covariance for the discrete Schrödinger equation, making scattering highly dependent on the moving speed [29][30][31][32][33]. By contrast, in the continuous-space limit, the Schrödinger equation possesses Galilean invariance, and hence scattering features are not affected by a drift of the interface or potential. Galilean invariance breaking has led to some puzzling scattering phenomena theoretically predicted in recent works. For example, it has been shown that any fast-moving potential becomes reflectionless or even invisible for discretized waves [29][30][31], whereas the number of bound states sustained by a potential well on a lattice depends on the moving speed owing to a mass renormalization effect 32. Anderson localization in a moving disordered potential on a lattice also depends on the drift speed 33, providing another remarkable signature of Galilean invariance breaking in discrete-wave mechanics. From an application perspective, the rich scattering properties enabled by potential motion provide unique discrete-light control strategies going beyond traditional continuous-light schemes based on complementary matching layers 37,38, Kramers-Kronig potentials [39][40][41], parity-time symmetry 42,43, and transformation optics 44,45, etc. While the above studies are based on theoretical analysis, experimental demonstrations of discrete refraction by a moving interface or potential remain to date elusive, mainly due to technological difficulties in realizing and controlling moving potentials on a lattice. Additionally, there is emerging research interest in pushing discrete-wave mechanics into the optical analog of the relativistic regime, where the scalar-wavefunction Schrödinger equation describing single-band dynamics is replaced by the spinor-wavefunction Dirac equation for two-band dynamics, and typical relativistic effects including Klein tunneling [46][47][48] and Zitterbewegung 49,50 have been realized. However, how relativistic wave packets are refracted by moving potentials with broken Galilean invariance also remains unexplored both in theory and experiment.
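To make the invariance argument concrete, the display below sketches the standard textbook comparison (it is not reproduced from this paper, and the normalizations are generic): the continuous Schrödinger equation with a drifting potential can be mapped onto a static problem by a phase (gauge) transformation, whereas the discrete tight-binding equation has a cosine band that no such transformation can restore.

```latex
% Continuous Schrödinger equation (hbar = 1): Galilean invariance holds.
\[ i\,\partial_t \psi = -\tfrac{1}{2m}\,\partial_x^2 \psi + V(x - vt)\,\psi \]
% In the co-moving frame x' = x - vt, the gauge transformation
\[ \psi(x,t) = \psi'(x',t)\, e^{\,i\left(mvx - \tfrac{1}{2}mv^2 t\right)} \]
% restores the same equation with a static potential V(x'), so scattering
% cannot depend on v.

% Discrete (tight-binding) Schrödinger equation: Galilean invariance is broken.
\[ i\,\frac{d\psi_n}{dt} = -\kappa\,(\psi_{n+1} + \psi_{n-1}) + V(n - vt)\,\psi_n \]
% The band E(k) = -2\kappa\cos k is not parabolic, so no local phase
% transformation maps the moving problem onto a static one; scattering
% therefore depends on the drift speed v.
```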
In this work, we theoretically propose and experimentally demonstrate discrete temporal refraction with broken Galilean invariance by a moving gauge-potential barrier in a synthetic lattice created using pulse evolution in two coupled fiber loops [51][52][53][54][55][56], demonstrating that the scattering features of the barrier are highly velocity-dependent. Firstly, we reveal that due to the two-miniband nature of the lattice, beam-splitting-free refraction requires the potential to move at a quantized speed, namely the integer value v = 1 or fractional values v = 1/q (odd q), under reflectionless conditions. We also measure the relative barrier-crossing beam delay as the clear signature of single-beam refraction, which manifests an asymmetric momentum dependence. Zero beam delay can also be reached for each Bloch momentum via judicious design of the gauge-potential difference, showing directional transparency of moving potentials. Symmetric momentum dependence can also be attained by using dynamically-modulated moving potentials. Finally, by considering the relativistic limit of light dynamics [46][47][48][49][50], we also observe the refraction of Zitterbewegung motion, which exhibits a momentum-independent beam delay and an omnidirectional transparency condition rooted in the linear band nature of the Dirac equation. Our study establishes and demonstrates basic laws for discrete refraction by quantized moving potentials, which may find applications in delay-line design, precise measurement and signal processing.
Discrete temporal refraction and Galilean invariance breaking
We consider temporal beam refraction by a moving potential in a synthetic temporal mesh lattice, which is realized by considering light pulse dynamics in two coupled fiber loops [51][52][53][54][55][56]. As shown in Fig. 1a, when a single pulse is injected from one loop, it evolves into a pulse train after successive pulse splitting at the central coupler, circulation in the two loops, and interference at the coupler again. For two loops with lengths L ± ΔL, the pulse physical time is $t_n^m = mT + n\Delta t$, where T = L/c_g is the mean travel time and Δt = ΔL/c_g ≪ T is the travel-time difference between the two loops; c_g = c/n_g is the pulse's group velocity, n_g = 1.5 is the group index of the fiber and c is the vacuum light speed. The pulse dynamics can thus be mapped onto a "link-node" lattice model (n, m) [Fig. 1c], where n and m denote the transverse lattice site and the longitudinal evolution step. The leftward/rightward links towards a node correspond to pulse circulation in the short/long loop, and scattering at each node corresponds to pulse interference at the coupler. Light evolution in the lattice is thus governed by the discretized coupled-mode equations, which take the standard split-step form
$$u_n^{m+1} = \left[\cos\beta\, u_{n+1}^m + i\sin\beta\, v_{n+1}^m\right] e^{i\phi_u(n-vm)}, \qquad v_n^{m+1} = \left[i\sin\beta\, u_{n-1}^m + \cos\beta\, v_{n-1}^m\right] e^{i\phi_v(n-vm)}, \quad (1)$$
where $u_n^m$, $v_n^m$ denote the light amplitudes in the leftward/rightward links towards node (n, m) and β is the coupling angle (0 < β < π/2), corresponding to a splitting ratio of cos²(β)/sin²(β) at the central coupler. A moving, square-shaped gauge-potential barrier can be created by introducing a non-uniform phase-shift distribution (tilted gray ribbon region in Fig. 1c) into the lattice via a sliding gate-voltage modulation (Fig. 1b) (see also Fig. 2 for the detailed experimental realization),
$$(\phi_u, \phi_v) = \begin{cases} (\phi_{u2}, \phi_{v2}), & n_1 \le n - vm \le n_2, \\ (\phi_{u1}, \phi_{v1}), & \text{otherwise}, \end{cases} \quad (2)$$
where n − vm = n_{1,2} denote the two moving boundaries, n_1, n_2 are the left and right boundary positions at the initial step m = 0 and W = n_2 − n_1 is the barrier width. The moving speed v = p/q is a rational number, characterizing a barrier that moves p sites in every q steps, with p, q two integers. This rational-speed constraint stems from the lattice's double discretization in both the n and m axes, in contrast to moving potentials in waveguide arrays 29,30, where only the transverse axis n is discretized while the longitudinal axis z still varies continuously. As we shall prove below, further selection rules for the moving speed v will be established based on the intrinsic requirements of discrete refraction. The additional phase shifts ϕ_{u,v}(n − vm) are physically associated with an effective scalar potential φ and vector potential A [57][58][59]. Since φ and A accumulate phase in time and space 57, ϕ = ∫φdt and ϕ = ∫Adx, the phase shifts can be related to φ and A: φ corresponds to a direction-independent common phase shift in the leftward/rightward links, while A corresponds to a direction-dependent phase-shift contrast between these two links, both reminiscent of their original physical meanings in electrodynamics 57; the subscripts 1, 2 refer to the regions outside and inside the barrier. Throughout the paper, we choose a vanishing gauge-potential reference (φ_1, A_1) = (0, 0) outside the barrier and (φ_2, A_2) = (Δφ, ΔA) inside it, where Δφ and ΔA are the scalar and vector potential difference.
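For concreteness, the following minimal sketch (Python/NumPy) steps a single input pulse through a split-step update of the form of Eq. (1), with the moving phase barrier of Eq. (2) applied through a step-dependent phase mask. All parameter values, the sign convention relating (φ, A) to (ϕ_u, ϕ_v), and the choice of injection site are illustrative assumptions, not the experimental settings.

```python
import numpy as np

# Minimal sketch of discrete-time evolution in two coupled fiber loops,
# assuming the standard split-step convention; parameters are illustrative.
N, M = 201, 120          # number of lattice sites, evolution steps
beta = np.pi / 3         # coupling angle
v = 1                    # barrier moving speed (integer case)
n1, n2 = 20, 60          # initial barrier boundaries, width W = n2 - n1
dphi, dA = -np.pi / 2, -np.pi / 2   # scalar / vector potential difference (assumed values)

u = np.zeros((M, N), dtype=complex)  # leftward-link (short-loop) amplitudes
w = np.zeros((M, N), dtype=complex)  # rightward-link (long-loop) amplitudes
w[0, N // 2 + 30] = 1.0              # single pulse injected into the long loop

def phases(m):
    """Phase shifts phi_u, phi_v at step m (barrier shifted by v*m sites)."""
    n = np.arange(N)
    inside = (n - v * m >= n1) & (n - v * m <= n2)
    phi_u = np.where(inside, dphi - dA, 0.0)   # sign convention assumed
    phi_v = np.where(inside, dphi + dA, 0.0)
    return phi_u, phi_v

for m in range(M - 1):
    phi_u, phi_v = phases(m)
    # interference at the coupler: amplitudes arriving from the neighbouring sites
    u[m + 1, :-1] = (np.cos(beta) * u[m, 1:] + 1j * np.sin(beta) * w[m, 1:]) * np.exp(1j * phi_u[:-1])
    w[m + 1, 1:] = (1j * np.sin(beta) * u[m, :-1] + np.cos(beta) * w[m, :-1]) * np.exp(1j * phi_v[1:])

intensity = np.abs(u) ** 2 + np.abs(w) ** 2   # what the photodiodes would record
```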
To analyze refraction by the moving barrier, we first need to choose an appropriate reference frame in which to apply Snell's law. As shown in Figs. 1c and 1d, there exist two reference frames: the laboratory frame (n, m), where the potential is moving, and the moving frame (n', m'), where the potential is at rest; they are related to each other through the Galilean transformation n' = n − vm, m' = m. In the (n, m) frame, since the boundaries are tilted and not parallel to the m axis, Snell's law, i.e., conservation of the tangential propagation constant, is not applicable. By contrast, in the (n', m') frame the two boundaries become vertical and Snell's law is applicable. The most unique feature of a moving potential is that its refraction properties are highly dependent on the moving speed v, owing to Galilean invariance breaking of the underlying equations. To illustrate this point, we consider the coupled-mode equations in the (n', m') frame, Eq. (3), obtained via Galilean transformation from Eq. (1) [see Supplementary Materials (SM) Sec. 2 for the detailed derivation], where $f(n', m'+1) = u_{n'+vm'}^{m'+1}$ and $g(n', m'+1) = v_{n'+vm'}^{m'+1}$. Note that Eq. (3) depends on the potential moving speed v in a nontrivial way, and such a dependence cannot be eliminated by any gauge transformation of the wave functions f and g. Clearly, the form of Eq. (3) in the moving frame is different from Eq. (1) in the laboratory frame, meaning that the discrete coupled-mode equations are not covariant under Galilean transformation. Therefore, refraction by a moving potential depends on the moving speed v and also differs greatly from that of the same potential at rest (v = 0). The breakdown of Galilean invariance stems from the non-parabolic nature of the band structure [29][30][31][32][33][34][35][36][60][61][62], which can be physically explained in two important limiting cases. In the strong coupling limit β → π/2, Eq. (1) reduces to two decoupled non-relativistic discrete Schrödinger equations 63, for which Galilean invariance is broken owing to space discretization [34][35][36]. On the other hand, in the weak coupling limit β → 0, Eq. (1) in the long-wavelength limit reduces to a relativistic Dirac equation [see Eq. (12) below] [46][47][48][49][50], which is invariant under a Lorentz (rather than a Galilean) boost.
The refraction should be analyzed using a band-structure matching approach. In the (n, m) frame, the eigen Floquet-Bloch mode in each region takes the form $(U, V)^T \exp(ikn)\exp(-i\theta^{(l)}_{\pm} m)$, where k and $\theta^{(l)}_{\pm}(k)$ are the transverse Bloch momentum and the longitudinal propagation constant of the l-th Floquet band, l = 0, ±1, ±2, … is the Floquet order, and "±" denotes the positive and negative minibands. The physical effect of the scalar and vector potential is thus to induce a propagation-constant and Bloch-momentum shift. Applying the Galilean transformation (n' = n − vm, m' = m) and comparing with the eigenmode $(U', V')^T \exp(ik'n')\exp(-i\theta' m')$ in the (n', m') frame, we obtain the Floquet band structure in the (n', m') frame, $\theta'^{(l)}_{\pm}(k) = \theta^{(l)}_{\pm}(k) - vk$, with k' = k and $(U', V')^T = (U, V)^T$. The Galilean transformation does not change the Bloch momentum or the eigenstate but does modify the band structure. Specifically, the band structure in the (n', m') frame acquires a ramped term −vk, i.e., a tilt, compared to that in the (n, m) frame, making the band-structure matching used for refraction analysis also v-dependent. The group velocity in the moving frame is $v'_{g,\pm}(k) = v_{g,\pm}(k) - v$, indicating that the packet acquires an additional velocity term −v in the moving frame, which is a direct consequence of the Galilean transformation. Since the refraction is v-dependent due to Galilean invariance breaking, below we identify the selection rules for v that enable a well-defined discrete refraction.
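As a numerical illustration of the band tilt, the short sketch below evaluates the miniband dispersion θ_+(k) = arccos[cos(β)cos(k)] used later in the text, differentiates it to obtain the group velocity, and applies the −vk tilt of the moving frame; β and v are illustrative values.

```python
import numpy as np

# Sketch: Floquet minibands theta_{+/-}(k) = +/- arccos(cos(beta) cos(k)) and their
# tilted counterparts theta' = theta - v*k in the moving frame.
beta, v = np.pi / 3, 1.0
k = np.linspace(-np.pi, np.pi, 2001)

theta_plus = np.arccos(np.cos(beta) * np.cos(k))
theta_minus = -theta_plus

vg_plus = np.gradient(theta_plus, k)        # lab-frame group velocity of "+" miniband
vg_plus_moving = vg_plus - v                # moving-frame group velocity acquires -v
theta_plus_moving = theta_plus - v * k      # band tilt -v*k in the (n', m') frame

print(f"max |v_g| = {np.max(np.abs(vg_plus)):.3f}  (should be ~cos(beta) = {np.cos(beta):.3f})")
print("reflection forbidden for every k:", np.all(vg_plus_moving < 0))
```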
Selection rules of potential moving speed v for a well-defined refraction
Now we identify the selection rules for the potential moving speed v that ensure a well-defined refraction. Such selection rules basically arise from the multiband nature of the Floquet synthetic lattice. For a generic v, refraction always exists but reflection may vanish 29,30. Meanwhile, due to the multiple Floquet bands and the two-miniband nature of the lattice, an incident packet will generally split into multiple refracted beams. To ensure a well-defined refraction, such beam splitting should be eliminated. Below, we clarify the required conditions on v that eliminate both reflection and beam splitting. Let us take the first refraction at the right boundary as an example; the same analysis applies to the second refraction at the left boundary. In our analysis, we only consider propagative waves with real Bloch momenta. A more rigorous analysis involving evanescent waves is outlined in SM Sec. 2. Note that the evanescent waves decay to zero away from the interface, making no observable contribution to the refraction. Consider a Bloch-wave packet at $(k_i, \theta'_i) = (k_{1,+}, \theta'^{(0)}_{1,+})$ in the "+" miniband incident from the right side of the barrier (blue circles in Fig. 3a). β_1 = β_2 = β is chosen in the two regions. For a large integer speed v = p or a fractional speed v = p/q satisfying v > |v_{g,±}(k)|_max = cos(β), the Floquet bands are tilted down monotonically, with $v'_{g,\pm}(k) = v_{g,\pm}(k) - v < 0$ for every k. Since the barrier moves faster than the group-velocity upper bound, beam reflection is forbidden for any incident θ_i'. This case is shown in Fig. 3a and Fig. S1 for v = 1, β = π/3. Meanwhile, only one refracted packet is matched at $(k^{(l)}_{2,+}, \theta'^{(l)}_{2,+})$ or $(k^{(l)}_{2,-}, \theta'^{(l)}_{2,-})$ for the "+" or "−" miniband in the l-th Floquet order (red circles in Fig. 3a). On the contrary, for a slow, fractional moving speed v = p/q < |v_{g,±}(k)|_max = cos(β), reflection is eliminated only in specific ranges of θ_i' (white regions outside the gray ribbons in Fig. 4a and Fig. S2 for v = 1/3, β = π/3). In the other ranges of θ_i' (gray ribbons), reflection occurs at the specific k with $v'_{g,\pm}(k) > 0$ (black circles in Fig. 4a). Meanwhile, multiple refracted packets can be matched for each miniband, which are spaced in k by values other than 2π and possess different v_g, causing beam splitting. The reflective ranges can be further tuned by β, varying from fully reflectionless to partially reflectionless and ultimately to fully reflective as β decreases (see Fig. S3). For a given β, the fully reflective regime can always be reached by choosing a sufficiently slow moving speed v → 0.
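The reflectionless criterion above can be checked numerically: the sketch below flags the Bloch momenta for which the moving-frame group velocity v'_g = v_g − v stays positive (so that reflection remains possible), reproducing the qualitative distinction between v = 1 and v = 1/3 for β = π/3. The dispersion form and parameter values are the same illustrative assumptions as before.

```python
import numpy as np

# Sketch: check whether a given moving speed v eliminates reflection.  Reflection
# is possible only where the moving-frame group velocity v'_g = v_g - v is positive.
def reflective_momenta(beta, v, nk=4001):
    k = np.linspace(-np.pi, np.pi, nk)
    theta = np.arccos(np.cos(beta) * np.cos(k))       # "+" miniband
    vg = np.gradient(theta, k)
    return k[vg - v > 0]                              # momenta that can be reflected

for v in (1.0, 1 / 3):
    k_refl = reflective_momenta(np.pi / 3, v)
    if k_refl.size == 0:
        print(f"v = {v:.3f}: faster than |v_g|max = cos(beta); reflectionless for all k")
    else:
        print(f"v = {v:.3f}: reflection possible for k in "
              f"[{k_refl.min():.2f}, {k_refl.max():.2f}] (rad)")
```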
Under the reflectionless condition, let us reveal the selection rules for v that eliminate beam splitting between the "±" minibands and different Floquet orders. Applying Snell's law, the refracted momenta $k^{(l)}_{2,\pm}$ are fixed by matching the moving-frame propagation constants, $\theta'^{(l)}_{2,\pm}(k^{(l)}_{2,\pm}) = \theta'_i$ [Eq. (6)]. No beam splitting requires group-velocity degeneracy of the "±" minibands, $v_{g,+}(k^{(l)}_{2,+}) = v_{g,-}(k^{(l)}_{2,-})$, which leads to vqπ = q'π, where q' = 1, 3, 5, … is also an odd number. Since $k^{(l)}_{2,+}$ and $k^{(l)}_{2,-}$ are the two closest solutions of Eq. (6) in the l-th Floquet order, q' = 1 should be chosen, which yields the quantization condition for the moving speed, v = 1 or v = 1/q with odd q [Eq. (7)]. This speed selection rule stems from the two-miniband nature of the lattice and does not exist in single-band waveguide arrays 29,30. Under this condition, the refracted packets in adjacent Floquet orders satisfy $k^{(l+1)}_{2,\pm} - k^{(l)}_{2,\pm} = 2\pi/v = 2q\pi$, meaning that they also share the same v_g and cause no beam splitting. On the contrary, for an unpermitted integer speed v = p ≠ 1 or a fractional one v = p/q_1 ≠ 1/q (odd q), the matched refracted packets no longer share the same group velocity, so beam splitting occurs.

Refraction from a potential barrier with quantized moving speed: experimental results

In this section, we present experimental demonstrations of velocity-dependent temporal beam refraction using the two permitted moving speeds of integer v = 1 and fractional v = 1/3. In the experiment, the observation of a single transmitted wave packet is the clear signature of beam-splitting elimination. The refraction process can be quantitatively characterized by a relative beam delay, denoting the difference in the packet's transverse propagation with and without the barrier. The packet's transit time in the barrier is τ = W/|v'_{g,+}(k_{2,+})| = W/|cos(β) sin(k_{2,+} − ΔA) − v|, corresponding to a relative beam delay d [Eq. (8)]. Hereafter we omit the Floquet index l by choosing l = 0. d > 0 (d < 0) denotes the case of beam delay (advance). Since d can be tuned continuously from positive to negative values by varying k_{1,+}, there must exist a specific k_{1,+} where d = 0 is reached. This case corresponds to a transparency condition, where both refraction and reflection are eliminated. d = 0 requires group-velocity degeneracy inside and outside the barrier, v_{g,+}(k_{2,+}) = v_{g,−}(k_{2,−}) = v_{g,+}(k_{1,+}), which can only be fulfilled provided that k_{2,−} − ΔA = −k_{1,+} and k_{2,+} − ΔA = π − k_{1,+}. Combining these with Eq. (8), we obtain the transparency condition [Eq. (9)]; therefore, we can achieve a directionally transparent moving potential for any targeted incident k_{1,+} by designing an appropriate gauge-potential combination Δφ + vΔA.
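As a simple numerical check of the transit-time expression quoted above, the sketch below evaluates τ = W/|cos(β)sin(k_{2,+} − ΔA) − v| for an assumed refracted momentum k_{2,+}; the value of k_{2,+} would in practice follow from the band-matching (Snell) condition, which is not solved here.

```python
import numpy as np

# Sketch: transit time of the refracted packet inside the barrier, using the
# expression quoted in the text, tau = W / |cos(beta) sin(k2p - dA) - v|.
# The refracted momentum k2p is treated here as a known input (illustrative value).
beta, v, W = np.pi / 3, 1.0, 40
dA = -np.pi / 2
k2p = np.pi / 4          # assumed refracted Bloch momentum for illustration

vg_inside = np.cos(beta) * np.sin(k2p - dA)      # group velocity inside the barrier
tau = W / abs(vg_inside - v)                     # steps spent crossing the barrier
print(f"moving-frame group velocity inside barrier: {vg_inside - v:+.3f} sites/step")
print(f"transit time tau ≈ {tau:.1f} steps")
```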
Both the beam delay and the transparency condition have been verified in our refraction experiments. The experimental setup is shown in Fig. 2a and consists of two fiber loops with lengths of L ± ΔL ~5 km ± 15 m, corresponding to T ~25 μs and Δt ~75 ns. The initial pulse is generated from a 1550 nm distributed-feedback continuous-wave laser, cut into a pulse of ~50 ns duration by a Mach-Zehnder modulator (MZM) and injected into the long loop. To record the pulse intensity evolution, we couple a portion of the light out of both loops and detect it with photodiodes (PDs) after each circulation step. The phase shifts are applied through phase modulators (PMs) driven by arbitrary waveform generators (AWGs) with programmable modulation signals. The required moving speed v is attained by precisely controlling the relative delay of the sliding gate voltage between adjacent modulation periods (Fig. 2b). More details about the experimental setup and measurement techniques are discussed in "Methods".
Refraction by a dynamically-modulated, moving gaugepotential barrier
In the previous sections, we considered a sliding potential whose (φ_2, A_2) does not vary with the evolution step m while moving. Here we apply an additional dynamic modulation [φ_2(m), A_2(m)] to the moving potential to modify its refraction properties. A potential that is simultaneously spatially distributed and time modulated is referred to as a space-time potential, and can possess distinct scattering properties beyond those of the static counterpart arising from Galilean invariance violation alone.
As illustrative examples, we choose two typical dc- and ac-driving cases with a linearly-varying and a periodically-oscillating vector potential, A_2(m) = A_2(0) + αm and A_2(m) = A_m cos(ωm + φ_m), where α = 2π/T_dc is the dc-driving force and A_2(0) is the initial phase; A_m, ω = 2π/T_ac and φ_m are the ac-driving amplitude, frequency, and initial phase. The dc- or ac-driving A_2(m) corresponds to a constant or a time-oscillating electric field, which can induce Bloch oscillations (BOs) or directional transport. The refracted Bloch momenta thus evolve as k_{2,±}(m) = k_{2,±}(0) − A_2(m), corresponding to the instantaneous group velocities v_{g,±}[k_{2,±}(m)] = ±cos(β)sin[k_{2,±}(0) − A_2(m)], where k_{2,±}(0) are the initial Bloch momenta. When we choose the permitted v in Eq. (7), the "±" minibands always share the same v_g as m varies, and beam splitting is still eliminated as in the unmodulated case.
Usually, for a slow driving frequency, i.e., small α or ω, the adiabatic and continuous-time approximations are valid and we can define an m-independent average group velocity over one driving period [Eq. (10)], where J_0 is the zeroth-order Bessel function. For the ac-driving case, we are interested in the dynamic localization (DL) effect 54 occurring when J_0(A_m) = 0, which leads to 〈v_{g,±}(k_{2,±})〉 = 0. Meanwhile, to guarantee a well-defined beam delay, the barrier's transit time should be an integer multiple of the driving period, τ = W/v = sT_dc = sT_ac, where s is an integer. Under BOs or DL, the beam delay can be written uniformly as Eq. (11). Unlike the static moving potential above, where d is asymmetric for ±k_{1,+}, the dynamically-modulated case enables symmetric d for ±k_{1,+}. This symmetric momentum dependence is due to the periodic nature of the BOs (or DL) and hence the absence of net transport in the barrier. The transparency condition is achieved at k_{1,+} = 0 and is v-independent, which also differs from the static case in Eq. (9). Moreover, the dynamic driving can remove the beam reflection, as shown in Fig. 4d: for θ_i' initially located in the reflective range, the driving shifts it out of the reflective regime and thereby eliminates beam reflection. This elimination of reflection by a dynamically-modulated moving potential expands our control capabilities over refraction and is first realized in this work.
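The dynamic-localization condition J_0(A_m) = 0 and the commensurability requirement τ = sT_ac can be evaluated directly, as in the sketch below (using SciPy's Bessel-function routines; the barrier width, speed and driving period are illustrative).

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Sketch: ac-driving amplitudes that realize dynamic localization, J0(A_m) = 0,
# and a commensurability check tau = W/v = s * T_ac required for a well-defined delay.
A_dl = jn_zeros(0, 3)                 # first three zeros of the 0th-order Bessel function
print("DL driving amplitudes A_m ≈", np.round(A_dl, 4))
print("J0 at those amplitudes:", np.round(j0(A_dl), 8))

W, v, T_ac = 40, 1.0, 10              # illustrative barrier width, speed, driving period
tau = W / v
s = tau / T_ac
print(f"tau = {tau:.0f} steps, tau/T_ac = {s:.2f} -> commensurate: {float(s).is_integer()}")
```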
Light dynamics in the relativistic regime: the refraction of Zitterbewegung motion

Finally, we push the wave dynamics into the optical analog of the relativistic limit and study the unique refraction features in this regime [46][47][48][49][50]. To this end, we choose the weak coupling limit β → 0, where the band gap Δ_g = |θ_+(0) − θ_−(0)| = 2β between the "±" minibands becomes very narrow. The two bands also become linear over the whole Brillouin zone, θ'_±(k) = ±cos^{−1}[cos(β)cos(k)] − vk ≈ (±1 − v)k, possessing two constant group velocities v'_{g,±}(k) ≡ ±1 − v, and they nearly touch at k = 0. Accordingly, a packet excited at k = 0 manifests relativistic Zitterbewegung (ZB) 49,50, an oscillatory trembling motion due to the interference (beat) between the "±" minibands. How the refraction by the moving potential behaves in the relativistic limit is an interesting question, because in the continuous (long-wavelength) limit k → 0, Galilean invariance is still broken: the two-miniband lattice model described by Eq. (1) reduces to a Dirac equation (see below) rather than to two decoupled Schrödinger equations, and is covariant under a Lorentz rather than a Galilean boost.
In the presence of the moving barrier, the input packet displaying ZB at k = 0 (denoted by blue circles) matches two refracted packets (red circles) located in the linear regime k ≠ 0, far away from the band gap (Fig. 6a). As a result, ZB comes to a complete halt in the barrier and turns into directional transport with a constant group velocity v_{g,+}(k_{2,+}) ≡ −1. After crossing the barrier, ZB is restored. The transit time is τ = W/|v'_{g,+}(k_{2,+})| ≡ W/(1 + v), corresponding to a momentum-independent relative beam delay. For a broad packet, |v_0| = 1/(βw_0)² ≪ 1 can be neglected and we get d = −W/(1 + v). The beam delay of ZB is thus independent of the gauge-potential difference Δφ or ΔA. This is due to the linear band nature of the Dirac equation, in stark contrast to the previous non-relativistic cases.
In the experiment, we first consider the case without the barrier, with β = π/15, W = 40, w_0 = 10. Figure 6b shows the packet evolution pattern, manifesting the characteristic trembling motion of ZB with T_ZB = 15, A_ZB = 2.38. In the presence of the moving barrier with v = 1, (Δφ, ΔA) = (−π/2, −π/2) (Fig. 6d), the packet exhibits ZB outside the barrier and turns into directional transport inside it, restoring ZB after crossing it, which gives rise to d = −W/(1 + v) = −20. For k = −π/4, away from the ZB point at k = 0 (Fig. 6c), the packet tunnels directly through the barrier, showing a transparent potential rather than the occurrence of ZB. The transparency condition is momentum-independent and fulfilled with broadband character for any k ≠ 0, in contrast to the momentum-dependent features of the non-relativistic cases above. This feature is also rooted in the broad linear band of the Dirac equation. For v = 1/3 (Fig. 6e), in contrast, the packet excited at k = 0 experiences nearly total reflection at the first boundary (Fig. 6f), making ZB and refraction break down. This is because for very small β the fully reflective case is reached more easily (Fig. S3), leading to nearly total beam reflection and the breakdown of refraction.
Discussion
In conclusion, we have proposed and experimentally demonstrated discrete temporal refraction by a moving gauge-potential barrier on a lattice with broken Galilean invariance. To achieve beam-splitting-free refraction, a quantization condition on the potential moving speed v is revealed, namely v can only take the integer value v = 1 or fractional values v = 1/q (odd q). Zero temporal delay is observed for each specific input Bloch momentum, corresponding to directionally transparent moving potentials with simultaneous refractionless and reflectionless features. We also demonstrate the refraction of Zitterbewegung motion by a moving potential. Remarkably, in this regime we observe a potential-difference-independent temporal delay and momentum-independent transparency by harnessing the linear band nature of the Dirac equation. Our work establishes and experimentally demonstrates fundamental laws governing discrete light refraction by moving potentials. This paradigm may find applications in versatile temporal beam steering (see SM Sec. 5 for typical examples), precise time-delay control and measurement for optical communications and signal processing.
Apart from controlling discrete refraction, the moving potentials on a lattice demonstrated in our work can be harnessed to realize many other exotic scattering phenomena rooted in the violation of Galilean invariance. For example, by drifting a disordered potential on a lattice, it is possible to modify Anderson localization and even wash it out completely with an appropriate choice of moving speed 33. Moreover, by using specially-tailored potentials, such as parity-time-symmetric potentials 51, Kramers-Kronig potentials [39][40][41], and many others with engineered spatial spectra 61,62 beyond the simplest square potential barrier, our experimental platform may enable the observation of further exciting discrete-wave mechanics, such as mass renormalization effects 32, otherwise inaccessible using stationary potentials in continuous-wave systems.
Experimental setup and measurement techniques
The main experimental setup has been discussed in the main text; here we summarize other experimental details and key measurement techniques. The pulse propagation losses are compensated by EDFAs. To suppress the transient response of the EDFAs, the signal pulse is mixed with a high-power 1530 nm pilot light before entering each EDFA. After the EDFA, the pilot light and the associated spontaneous-emission noise are removed by a BPF. A PBS and PCs are used to control the light polarization in the two loops, since both the MZM and the PMs are polarization-sensitive devices. ISOs are used to ensure unidirectional circulation in both loops. Key techniques in our experiments include the preparation of a Bloch-wave packet with a required Bloch momentum k and the generation of a sliding gate voltage with a required moving speed v.
Preparation of a Bloch-wave packet with a required Bloch momentum
A Bloch-wave packet in the temporal lattice corresponds to a pulse train with amplitude described by $(U, V)^T \exp[-(n - n_0)^2/(\Delta n)^2]\exp(ikn)$, where Δn is the Gaussian envelope width and k is the initial Bloch momentum. This pulse train is generated by a preliminary evolution of a single pulse injected into the long loop, which guarantees the coherence needed for interference at the coupler. During the preliminary evolution, the pulses circulating in the short loop are attenuated every other round trip by the MZM, while those in the long loop are kept constant in each round trip 54. With this approach, we obtain a pulse train with Δn ≈ 10 after ~m = 100 circulation steps. The Bloch momentum of the packet is k = (π − ϕ)/2, where ϕ is the short-loop phase modulation in each round trip. Then, by applying appropriate phase and intensity modulation in the 101st step, the required eigenstate (U, V)^T is imparted to the packet.
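A minimal sketch of the target packet amplitudes described above is given below; the eigenvector (U, V) and the short-loop phase ϕ are placeholders, since in the experiment (U, V) is set by the band structure and imprinted in the 101st step.

```python
import numpy as np

# Sketch: target amplitude profile of the Bloch-wave packet,
# (U, V)^T * exp[-(n - n0)^2 / Dn^2] * exp(i k n), with k = (pi - phi)/2 set by the
# short-loop phase modulation phi.  The eigenvector (U, V) is taken as given here.
N, n0, Dn = 201, 100, 10
phi = np.pi / 2                          # short-loop phase modulation per round trip
k = (np.pi - phi) / 2                    # resulting Bloch momentum of the packet
U, V = 1 / np.sqrt(2), 1j / np.sqrt(2)   # illustrative eigenstate (not computed here)

n = np.arange(N)
envelope = np.exp(-(n - n0) ** 2 / Dn ** 2) * np.exp(1j * k * n)
u_target = U * envelope                  # short-loop (leftward-link) pulse amplitudes
v_target = V * envelope                  # long-loop (rightward-link) pulse amplitudes
```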
Generation of a sliding gate voltage with a required moving speed v
The moving gauge-potential barrier is created by controlling the modulation waveform generated by the AWGs. Let us take v = 1 and v = 1/3 as examples. For v = 1, the sliding gate voltage (with a real width of WΔt) needs to be delayed by exactly one lattice site during each evolution step, corresponding to a time delay of Δt in each modulation period T. For v = 1/q (q = 3, 5, …), the speed is an average speed, characterizing the average number of sites the gate voltage moves per period. For example, v = 1/3 is realized by introducing a time delay Δt once every three periods 3T. Since T/Δt = 25 μs / 75 ns ≈ 333, the fiber loop can accommodate 333 evolution steps, which is enough to observe the refraction effect.
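The delay schedule of the gate voltage can be generated as in the following sketch, which reproduces the timing rule described above (one extra delay of Δt per step for v = 1, and one extra delay of Δt every q steps for v = 1/q); t_1 is an arbitrary illustrative offset.

```python
# Sketch: rising-edge timing of the sliding gate voltage in successive modulation
# periods, reproducing the delays described above.
Dt, T = 75e-9, 25e-6          # site spacing 75 ns, modulation period 25 us
t1 = 1e-6                     # rising edge in the first period (illustrative)
steps = 12

def rising_edges(v, steps):
    """Rising-edge time (relative to the start of each period) for barrier speed v."""
    edges = []
    for m in range(steps):
        if v == 1:
            shift = m * Dt                 # delay by one site (Dt) every step
        else:                              # v = 1/q: delay by Dt once every q steps
            q = round(1 / v)
            shift = (m // q) * Dt
        edges.append(t1 + shift)
    return edges

print([f"{t * 1e6:.3f} us" for t in rising_edges(1, steps)])
print([f"{t * 1e6:.3f} us" for t in rising_edges(1 / 3, steps)])
```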
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Fig. 1 | Schematic experimental setup, modulation waveforms, and synthetic lattices. a Schematic experimental setup of two coupled fiber loops. Blue and red arrows denote light circulation directions in the short and long loops. PM: phase modulator; AWG: arbitrary wave generator; PD: photon detector; OC: optical coupler; VOC: variable optical coupler. b Schematic modulation waveforms in the AWG to generate a moving barrier. The squared gate voltages (red) denote the modulation signals in a period T for the initial and three successive steps. v and v_g denote the barrier's moving speed and the packet's group velocity. c A moving gauge-potential barrier with moving speed v = 1/3 in a discrete-time temporal lattice made of leftward (blue) and rightward (red) links towards the nodes (n, m). The phase modulations are (ϕ_u2, ϕ_v2) and (ϕ_u1, ϕ_v1) inside and outside the barrier, corresponding to a gauge-potential distribution of (φ_2, A_2) and (φ_1, A_1). The two moving boundaries are n − vm = n_{1,2}, where v is the moving speed, n_1, n_2 are the initial positions of the two boundaries and W = n_2 − n_1 is the barrier width. A Bloch-wave packet (purple) is incident from the right side of the barrier. d Schematic of the lattice and gauge-potential distribution in the moving frame (n', m') obtained through Galilean transformation from the laboratory frame (n, m).
Fig. 2 | Layout of the experimental setup and schematic modulation waveforms to generate a moving gauge-potential barrier. a All optical and electrical components are as follows: Polarization controller (PC); Mach-Zehnder modulator (MZM); Optical coupler (OC) with a constant splitting ratio and variable optical coupler (VOC) with a programmable splitting ratio; Arbitrary waveform generator (AWG); Single-mode fiber (SMF); Photodiode (PD); Variable optical attenuator (VOA); Wavelength division multiplexer (WDM); Erbium-doped fiber amplifier (EDFA); Band-pass filter (BPF) and phase modulator (PM). b Upper and lower waveforms show the sliding gate voltages in the long loop used to generate a moving gauge-potential barrier with moving speeds v = 1 and v = 1/3. Δt and T correspond to the time interval between adjacent lattice sites and the modulation period, and t_1 denotes the timing of the rising edge of the gate voltage in the first period. The barrier width is W, corresponding to a duration of WΔt for the gate-voltage signal.
Tuning magnetic antiskyrmion stability in tetragonal inverse Heusler alloys
The identification of materials supporting complex, tunable magnetic order at ambient temperatures is foundational to the development of new magnetic device architectures. We report the design of Mn2XY tetragonal inverse Heusler alloys that are capable of hosting magnetic antiskyrmions whose stability is sensitive to elastic strain. We first construct a universal magnetic Hamiltonian capturing the short- and long- range magnetic order which can be expected in these materials. This model reveals critical combinations of magnetic interactions that are necessary to approach a magnetic phase boundary, where the magnetic structure is highly susceptible to small perturbations such as elastic strain. We then computationally search for quaternary Mn2(X1,X2)Y alloys where these critical interactions may be realized and which are likely to be synthesizable in the inverse Heusler structure. We identify the Mn2Pt(1-z)X(z)Ga family of materials with X=Au, Ir, Ni as an ideal system for accessing all possible magnetic phases, with several critical compositions where magnetic phase transitions may be actuated mechanically.
A substantial component of spintronic device development is the discovery of materials that are capable of hosting exotic spin textures over precisely tuned field and temperature ranges [1]. While spin textures can be controlled by magnetic fields, dynamically coupling these magnetic phases to other variables such as electric fields or mechanical perturbations allows for new control paradigms and device architectures [2]. Magnetic skyrmion and antiskyrmion textures have in particular attracted attention due to their combination of thermodynamic stability, unique topological properties, and efficient transport behavior [3][4][5][6][7][8]. A number of bulk material systems capable of hosting equilibrium skyrmions [5,[9][10][11][12][13][14] or antiskyrmions [15,16] have been discovered, and recent reports have indicated that certain materials may support both topologies as metastable states [17,18]. However, tuning the geometry and stability windows of these topological phases, at the synthesis stage or in situ, remains a challenge. This is due to the lack of a theoretical understanding of flexible material systems that are capable of hosting (anti)skyrmion phases at room temperature, as well as the irreversibility of the structural deformations typically necessary to elicit a substantial magnetic response [14,19]. An attractive model system for realizing chemically- and mechanically-tunable (anti)skyrmion states are the tetragonal inverse Heusler alloys [15,16,20]. These materials have the Mn2XY chemical formula, where the X sublattice generally consists of late transition metal elements and Y = Ga, Sn, In [20,21]. Below their martensitic transformation temperature, they possess D2d symmetry. This symmetry is compatible with the formation of thermodynamically-stable antiskyrmions [22,23], while metastable skyrmions can be nucleated with an appropriate history of applied magnetic fields [17,18]. Critically, this symmetry also ensures that the topological phases may remain stable from 0 K to Tc (the Curie temperature) [3,23,24], which in these materials is often well above room temperature [21]. Furthermore, the Heusler alloys allow for immense chemical flexibility, which has been previously used to tune their structural [25], electronic [26,27] and magnetic [28,29] properties. The combination of chemical flexibility in the X and Y sublattices and thermal stability of the topological magnetic states means that the topological magnetism seen in Mn2XY inverse Heusler alloys may in principle be tuned and observed at room temperature, as is necessary for device applications.
Here, we implement a general design strategy for realizing chemically and mechanically tunable antiskyrmions using the Mn2XY tetragonal inverse Heuslers as a model system. We first derive a universal model for the short- and long-range magnetic order of materials with the inverse Heusler structure in terms of computable magnetic interactions. We then enumerate known and hypothetical Mn2XY inverse Heusler materials and characterize the impact of varying chemistry on the magnetic interactions and chemical stability of the alloys. We show that with an appropriate choice of composition on the X and Y sublattices, one can realize all possible magnetic phases and create materials where magnetic phase transitions may be actuated with small, purely elastic mechanical perturbations. Finally, we identify Mn2Pt1−zXzGa with X = Au, Ir, Ni and z ≈ 0.1 − 0.2 as an ideal system for realizing this behavior, combining flexible room-temperature magnetism with chemical and structural stability.
RESULTS
FIG. 1. Schematic summary of our materials design strategy for obtaining tunable antiskyrmion behavior. We first map all hypothetical compounds A, B, C to interaction parameters J1, J2, . . . of a quasi-classical atomistic Hamiltonian to determine their short-range spin order. We repeat this analysis at longer length scales by coarse-graining the magnetocrystalline anisotropy K and Dzyaloshinskii-Moriya interaction D (DMI). We identify the parameter space where antiskyrmions (ASk) may be expected and construct alloys A1−xBx which fall in the region of ASk stability. Finally, we identify critical compositions xc falling on magnetic phase boundaries as compounds where the magnetic phase transition may be actuated by small perturbations, e.g. reversible elastic strain.

Our approach to realizing chemically- and mechanically-tunable antiskyrmion states is to relate the form of the magnetic phase diagram to variations in atomistic magnetic interactions, and then to characterize how these interactions may be tuned by chemical changes and elastic perturbations. This approach is shown schematically in Figure 1. We first construct an
atomistic quasi-classical spin Hamiltonian applicable to all tetragonal inverse Heuslers, accounting for local exchange, Dzyaloshinskii-Moriya (DMI) and anisotropy interactions. Using this model, we parametrically enumerate the local spin structures that can be stabilized by various combinations of exchange strengths. Next, we coarse-grain these atomistic interactions to produce a continuum free energy functional that enables a parametric exploration of long-range magnetic structures such as antiskyrmions. We deduce the magnetic behavior of candidate materials A, B, C by fitting their interaction parameters to density functional theory (DFT) data. We then construct alloys A 1−x B x between compatible materials A and B that share the same local spin order. In the A 1−x B x alloy, coarse-grained magnetic interactions vary continuously with composition, making it possible to identify critical compositions x c that reside on magnetic phase boundaries where magnetic phase transitions may be actuated by small perturbations such as reversible elastic strain.
Short- and long-range magnetic order in tetragonal inverse Heuslers
The Mn2XY tetragonal inverse Heusler alloys are defined by the idealized crystal structure shown in Figure 2a. This structure consists of four tetragonally distorted interpenetrating face-centered-cubic sublattices. Two of these sublattices, Mn(1) and Mn(2), have localized magnetic moments in the range of 2-3 µB per atom. The Y sublattice generally contains one of Ga, Sn, or In and is non-magnetic. The X sublattice can be occupied by a range of late transition-metal elements, with previously reported compounds having X = Fe, Co, Ni, Rh, Pd, or Pt [21,28,30,31]. In this study, we supplement these elements with other transition metals which could potentially be doped onto the X sublattice: Ru, W, Os, Ir, Au [21]. However, we exclude Fe as it introduces a large magnetic moment on the X sublattice and cannot be treated with the same magnetic model as systems with non-magnetic X elements. As both the chemical stability and the degree of chemical order vary substantially between these chemistries, we will discuss which compositions are most likely to be synthetically accessible in a later section.
FIG. 3. b. Long-range phases generated as modulations of the FiM order at low T, which include spin helices (Hx), antiskyrmion lattices (ASk) and conical helices (Cx). Long-range structure is governed by the relative strength of the uniaxial anisotropy K, Dzyaloshinskii-Moriya coupling D, spin stiffness A and applied field H along the c-axis. The phase diagram is evaluated for J2 = J3 = 0 (red circle in a.) and an equilibrium helical wavelength of 24 unit cells. c. Extension of the K = 0 region of the long-range phase diagram to finite temperature. Color denotes the expected number of antiskyrmions per 24x24 unit cell as measured by the topological index density t. Solid lines denote first-order phase transitions while dotted lines denote second-order or continuous phase boundaries. Tc denotes the Curie temperature and fd refers to the fluctuation-disordered Brazovskii region [24,32].

We represent the magnetic behavior of Mn2XY inverse Heuslers with a combination of exchange interactions, which are the dominant energy scale and control the local spin structure, and coarse-grained spin-orbit effects that control the long-range modulation of
the local spin structure. We consider the atomic exchange interactions in the conventional Heisenberg form, $H_{\mathrm{exchange}} = \sum_{ij\in\alpha} J_\alpha (-\vec{S}_i \cdot \vec{S}_j)$, where the summation includes couplings up to the 3rd nearest neighbor as shown in Figure 2b. J1 represents the strongly antiferromagnetic direct exchange between the Mn(1) and Mn(2) sublattices, while J2 and J3 capture the weaker interactions within the two sublattices. To further simplify the model, the geometrically-identical interactions within the Mn(1) and Mn(2) sublattices are assumed to have the same interaction strength. The complete form of this spin Hamiltonian is given in Supplementary Data 1. Despite the simplicity of this model, we find that it is sufficient to capture the energetics of collinear spin configurations in the Mn2XY compounds considered in this work, reproducing both the ground state and the excited-state spectrum as computed with density functional theory (DFT). The results of this fitting procedure and the correspondence between the model and the electronic structure data are quantified in Supplementary Data 2 and 3.
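To make the sign convention of this Hamiltonian explicit, the sketch below evaluates the exchange energy of collinear configurations on a toy fragment; the neighbor lists and coupling values are placeholders and do not correspond to any particular Mn2XY compound.

```python
# Sketch: energy of a collinear spin configuration under the three-parameter
# Heisenberg model E = sum_{ij in alpha} J_alpha * (-S_i . S_j).  The neighbour
# pairs below are placeholders; in practice they follow from the inverse Heusler
# geometry of Fig. 2.
def exchange_energy(spins, pairs, couplings):
    """spins: {site: +1/-1 unit moments}; pairs: {'J1': [(i, j), ...], ...}."""
    E = 0.0
    for name, bonds in pairs.items():
        J = couplings[name]
        for i, j in bonds:
            E += J * (-(spins[i] * spins[j]))
    # In this sign convention, negative J favours anti-aligned spins (antiferromagnetic).
    return E

# Toy 4-site fragment: two Mn(1) sites (0, 1) and two Mn(2) sites (2, 3).
pairs = {"J1": [(0, 2), (0, 3), (1, 2), (1, 3)],    # inter-sublattice bonds
         "J2": [(0, 1)], "J3": [(2, 3)]}            # intra-sublattice bonds
couplings = {"J1": -30e-3, "J2": 2e-3, "J3": 1e-3}  # eV, illustrative values

fim = {0: +1, 1: +1, 2: -1, 3: -1}                  # ferrimagnetic-like configuration
fm = {0: +1, 1: +1, 2: +1, 3: +1}                   # fully aligned configuration
print("E(FiM) - E(FM) =",
      exchange_energy(fim, pairs, couplings) - exchange_energy(fm, pairs, couplings), "eV")
```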
The competition between the exchange interactions J 1 , J 2 and J 3 gives rise to several local spin orderings, as shown in Figure 3a. When J 2 and J 3 are ferromagnetic, or negligible compared to J 1 , the spins adopt a ferrimagnetic structure (FiM) with the Mn(1) and Mn(2) sublattices antialigned with each other. This structure has a net moment as the local moment on Mn(1) is typically larger than that on Mn(2). Antiferromagnetic J 2 and J 3 interactions frustrate this order and can lead to a region of non-collinear order (NCL), or collinear antiferromagnetic structures with spins either alternating in the xy plane or along the z axis (AFM xy and AFM z respectively). Of these structures, we focus on the ferrimagnetic phase as it is the only spin structure with a net magnetic moment at low temperature and field.
The long-range magnetic texture is defined by a gradual rotation of the local spin structure, driven by the Dzyaloshinskii-Moriya component of spin-orbit coupling and suppressed by the magnetocrystalline anisotropy and spin stiffness. These phenomena are conventionally described by a coarse-grained magnetic Hamiltonian for the D2d point group [33,34], written in terms of m, the unit-vector direction of the local magnetization. A is the spin-stiffness parameter and represents the coarse-grained exchange strength. The relationship between A and the atomistic J1, J2, J3 parameters is given in Supplementary Data 4. D and $w_{kn} = \epsilon_{ijk}\, m_i\, \partial m_j/\partial r_n$ represent the strength and form of the coarse-grained Dzyaloshinskii-Moriya interaction, where $\epsilon_{ijk}$ is the Levi-Civita tensor and repeated indices imply summation [33]. K parametrizes the uniaxial anisotropy with respect to the crystal axes given in Figure 2a. While higher-order anisotropies are necessary to accurately capture the DFT energetics of some Heusler compounds, including the Pt- and Ir-based systems considered here, we have found that these terms are never large enough to alter the final magnetic phase diagrams in our analysis. Here, all spatial dimensions are taken to be in units of the lattice parameters of the conventional structure shown in Figure 2a (a for the xy directions, c for the z direction). Whether or not the local spin structure develops a long-range texture at equilibrium is determined by the competition between the Dzyaloshinskii-Moriya and magnetocrystalline anisotropy components of spin-orbit coupling (D/A and K/A respectively) [3,24]. These spin textures include spin helices (Hx), spin cones (Cx) and antiskyrmions (ASk), which all have a characteristic energy scale of D²/2A and form the phase diagram shown in Figure 3b in the low-temperature limit. This phase diagram shows that, as a function of the normalized anisotropy (2KA/D²) and magnetic field along the c-axis (2HA/D²), spin helices and conical structures are stabilized for −2 ≤ 2KA/D² ≤ 3. Antiskyrmions are favored under a small applied field for −1 ≤ 2KA/D² ≤ 1.7. The change in this phase diagram at elevated temperature is shown in Figure 3c for the case of vanishing anisotropy K and J2 = J3 = 0. The helical and antiskyrmion phases persist at all temperatures up to Tc with minimal change in the phase boundary between them, although the maximum field at which antiskyrmions are stable decreases. Variations of J2 and J3 within the FiM region do not alter the overall shape of this phase diagram, but do significantly rescale Tc, as shown in Supplementary Data 4.
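A simple classifier based on the boundary values quoted above is sketched below; it only reproduces the low-temperature, small-field limits of the computed phase diagram and treats the quoted thresholds as approximate.

```python
# Sketch: classify the expected long-range phase at low temperature and small
# field from the normalized anisotropy kappa = 2*K*A/D**2, using the boundary
# values quoted in the text (read off the computed phase diagram, hence approximate).
def long_range_phase(K, A, D):
    if D == 0:
        return "FiM (easy-axis)" if K < 0 else "FiM (easy-plane)"
    kappa = 2 * K * A / D ** 2
    if -1.0 <= kappa <= 1.7:
        return f"antiskyrmion-hosting (kappa = {kappa:.2f})"
    if -2.0 <= kappa <= 3.0:
        return f"helical/conical, no ASk window (kappa = {kappa:.2f})"
    return f"uniform FiM (kappa = {kappa:.2f})"

print(long_range_phase(K=-0.05, A=1.0, D=0.4))   # illustrative parameter values
```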
Magnetic structure and chemical stability of Mn2XY tetragonal inverse Heuslers
We now examine where known and hypothetical ternary Mn2XY inverse Heuslers fall on the magnetic phase diagrams shown in Figure 3. In Figure 4a we plot the exchange interactions in a range of compounds and deduce their local spin structure, focusing only on those compositions that thermodynamically favor the tetragonal inverse Heusler structure at the Mn2XY composition. The majority of these compounds fall in the FiM region, with frustrated non-collinear order expected in Mn2PtSn, Mn2PtIn and Mn2RhSn, consistent with experimental reports [20,[35][36][37][38]. For the remaining materials that favor FiM order, we compare the coarse-grained spin-orbit coupling to the regions where helical, conical, or antiskyrmion long-range phases can be expected. Here we also include the hypothetical compounds Mn2AuGa and Mn2WSn, for which the inverse Heusler chemical order is metastable. As can be seen in Figure 4b, most compositions are easy-axis ferrimagnets (FiM with K < 0), with only the hypothetical Mn2AuGa and Mn2WSn materials falling in the easy-plane ferrimagnet region (FiM, K > 0). Non-collinear spin textures can be expected in Mn2PtGa, Mn2IrSn, Mn2PdSn and Mn2NiSn, where the Dzyaloshinskii-Moriya interaction is sufficiently large to fall in the antiskyrmion stability region (−1 ≤ 2KA/D² ≤ 1.7).
Of the various compounds mapped out in Figure 4, we focus on Mn2XGa for X = Pt, Ni, Ir, as they are the most likely to be synthesizable at equilibrium as stoichiometric, well-ordered inverse-Heusler compounds. The synthesis of any inverse Heusler compound can be challenging, as the finite-temperature phase diagrams of the binary endpoints are often very complicated and the phase diagrams of the full ternary systems are not known. For example, the prototypical compound for the tetragonal inverse Heusler structure, Mn3Ga, forms by a low-temperature peritectic reaction with numerous competing phases that need to be avoided to produce a high-quality material [39,40]. Furthermore, the formation of chemical order is complicated by the coupling between chemical order and the structural transformation between the high-temperature austenite and low-T martensite phase [21,41,42]. While assessing the full finite-temperature phase diagrams and ordering kinetics of the chemistries described here is prohibitive, we can evaluate the likelihood that any given Mn2XY may be formed by the conventional process of high-temperature mixing followed by a long low-temperature anneal. We assume that the high-temperature precursor is a disordered state prepared at the correct stoichiometry [20]. As this precursor is cooled, the formation of an ordered product is characterized by a driving force ∆E_order-disorder, which we approximate using the difference in energy between the ordered inverse Heusler product and the most favorable disordered state among the common disorder models proposed for these systems (L2_1b (Mn(1)/X), BiF3 (Mn(1)/Mn(2)/X)) [30]. This ordering reaction competes with phase separation, whose likelihood is correlated with the energy of formation ∆E_formation of the target compound from competing phases in each Mn-X-Y ternary space [43]. The equilibrium phases used to determine chemical stability are given in Supplementary Data 5. Figure 5 charts the driving forces for the competing order-disorder and decomposition reactions for the Mn2XY chemistries discussed here, excluding the W-based compounds and Mn2AuSn as they are exceptionally unstable. All In-based and most Sn-based compounds have a strong driving force for phase separation and thus are not likely to retain the desired stoichiometry after a long anneal. Furthermore, a number of compounds have a minimal driving force to order, or in the case of the Pd-based systems and Mn2AuGa do not favor the ordered states we have considered at all. The systems which favor the ordered inverse Heusler structure at low-T equilibrium are Mn2XGa for X = Ir, Pt, Rh, Ru, Ni, Co and Mn2XSn for X = Ru, Rh. From these, we exclude the Ru-based systems and Mn2RhGa, as experimental reports of these compounds indicate that the ordered configuration is difficult to obtain in practice [30]; Mn2RhSn, as it does not favor the locally-collinear FiM phase; and Mn2CoGa, as it does not produce a tetragonal distortion. We now focus on the remaining synthesizable chemistries to identify combinations which, when alloyed, may generate magnetic phase transitions.
Designing tunable magnetism in Mn2Pt1−zXzGa alloys
Having enumerated the magnetic behavior and chemical stability of the ternary Mn2XY inverse Heuslers, we turn to quaternary alloys in this space to tune magnetic properties and fully explore the magnetic phase diagram shown in Figure 3b. In a solid solution between two compounds Mn2X(1)Y(1) and Mn2X(2)Y(2) with the same local chemical and spin structure, the coarse-grained magnetic parameters D, K and A must vary continuously with composition. Graphically, this continuous variation means that the magnetic parameters of the alloy will fall on a smooth curve connecting the endpoint compounds in Figure 4b. By alloying two materials which are separated by a magnetic phase boundary in Figure 4b, we can switch the magnetic behavior of the alloy between the two phases by varying composition. Furthermore, we can identify the alloy composition that lies on the magnetic phase boundary to create a material where the magnetic phase transition can be actuated by a small elastic strain.

FIG. 5. Likelihood that an ordered inverse Heusler compound may be obtained by an equilibrium synthesis method at the Mn2XY stoichiometry. ∆E_order-disorder measures the driving force for ordering and is defined as the minimum difference in energy between the ordered structure and the common types of disorder observed in inverse Heuslers (L2_1b (Mn(1)/X), BiF3 (Mn(1)/Mn(2)/X)) [30]. ∆E_formation measures the likelihood of phase separation into other phases in the Mn-X-Y chemical space and is defined as the difference in zero-T energy between the ordered Mn2XY phase and an equilibrium combination of competing phases given in Supplementary Data 5.
On the basis of the magnetic interaction parameters shown in Figure 4 and synthesizability metrics discussed in Figure 5, we identify the Mn 2 Pt 1−z X z Ga system for X = Au, Ir, Ni as a model system for tunably accessing all regions of the magnetic phase diagram. The Mn 2 PtGa endpoint lies in the ASk region of the long-range magnetic phase diagram in Figure 4b. The X =Au, Ir, Ni endpoints of the alloy are separated from this region by the ASk/Hx, ASk/Cx, Hx/FiM and Cx/FiM phase boundaries. Thus by varying the dopant element and composition we expect the alloy to move across the ASk, Hx, Cx, and FiM regions of the magnetic phase diagram, with several critical compositions corresponding to magnetic phase boundaries. Furthermore, the majority phase Mn 2 PtGa is one of the few compositions that thermodynamically favors the tetragonal inverse Heusler structure, which imparts chemical stability to this alloy. Thus, while Mn 2 AuGa for example is metastable in the inverse Heusler structure, a modest amount of Au doping into Mn 2 PtGa retains the desired structure. Figure 6 shows a quantitative evaluation of the magnetic and chemical behavior of the Mn 2 Pt 1−z X z Ga family of alloys for X = Au, Ir, Ni. Assuming that a solidsolution forms across these compositions, the micromagnetic parameters K, D and A must vary smoothly be- tween the alloy endpoints. For the magnetocrystalline anisotropy K, we compare three models for how this quantity varies with alloy composition z, as shown in Figure 6a. The simplest model is a linear interpolation K(z) = K(0) + z(K(1) − K(0)) which neglects any new magnetochemical interactions that may appear at intermediate compositions of the alloy. A more refined model is a cluster expansion (DFT-CE) model, K = α J (K) α σ α , where α represent two-, three-, and four-body clusters of sites with chemical occupancy denoted by σ, and J (K) α are interaction coefficients which capture the contribution of each cluster to the total magnetocrystalline anisotropy. This model is analogous to a conventional cluster expansion of the total energy [44,45] and, parametrized using DFT data, captures the impact of distinct chemical environments on the magnetocrystalline anisotropy. The final model is an explicit DFT calculation of the magnetic anisotropy at select compositions of the alloy using a special quasi-random structure (SQS) [46]. As can be seen in Figure 6a, while the DFT-CE and DFT-SQS models consistently indicate a degree of non-linearity in K(z), the deviation from the simple linear interpolation is small. Thus, in the case of D and A, we assume a simple linear interpolation with composition z, D(z) = D(0) + z(D(1) − D(0)) as accounting for any non-linear contribution to these terms is very computationally expensive and unlikely to substantially affect our conclusions.
In Figure 6b, we combine the DFT-CE model for K(z) and linear interpolation models of A(z) and D(z) to evaluate the magnetic phase diagram of Mn 2 Pt 1−z X z Ga alloys. Starting from the ASk region for z = 0, the wavelength λ = 2πA/D of the helimagnetic phases increases until the alloy crosses into new regions of magnetic phase space at z ≈ 0.1 − 0.2, depending on the choice of X element. In the case of X = Ir, we expect a transition to Hx-type behavior at z = 0.22 and easy-axis FiM at z = 0.28. For X = Au, the ASk region instead transitions to Cx-type behavior at z = 0.09 and easy-plane FiM at z = 0.18. The X = Ni space contains 3 transitions, from ASk to Cx at z = 0.14, Cx to easy-plane FiM at z = 0.22 and easy-plane to easy-axis FiM at z = 0.85. Close to the critical z-values for these phase transitions, the magnetic behavior is likely to be highly susceptible to mechanical perturbations that would alter K and D, such as uniaxial strain along the crystallographic c-axis. Such a strain could move the material to either side of the magnetic phase boundary and thus actuate the magnetic phase transition.
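The location of such a critical composition can be estimated with a few lines of code once the endpoint parameters are known, as in the sketch below, which assumes simple linear interpolation of K, D and A and uses placeholder endpoint values rather than the computed Mn2Pt1−zXzGa parameters.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: locate a critical composition z_c where the normalized anisotropy
# 2*K(z)*A(z)/D(z)**2 crosses a phase boundary, assuming linear interpolation of
# K, D, A between the alloy endpoints.  All endpoint values are placeholders.
K0, K1 = -0.04, 0.10      # anisotropy at z = 0 and z = 1 (illustrative)
D0, D1 = 0.40, 0.10       # DMI strength at the endpoints (illustrative)
A0, A1 = 1.00, 1.10       # spin stiffness at the endpoints (illustrative)

def kappa(z):
    K = K0 + z * (K1 - K0)
    D = D0 + z * (D1 - D0)
    A = A0 + z * (A1 - A0)
    return 2 * K * A / D ** 2

boundary = 1.7            # boundary of the antiskyrmion window used as an example
z_c = brentq(lambda z: kappa(z) - boundary, 0.0, 1.0)
print(f"kappa(0) = {kappa(0):.2f}, kappa(1) = {kappa(1):.2f}, z_c ≈ {z_c:.2f}")
```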
Finally, in Figure 6cd we estimate the synthetic accessibility of the Mn2Pt1−zXzGa alloys at the compositions of interest. We compute the pseudo-binary binodal and spinodal curves of these alloys to determine the regions of thermodynamic stability and metastability of the solid solution. We obtain the mixing enthalpy of the X = Au, Ir, Ni alloys from the DFT-CE and DFT-SQS models analogously to the evaluation of the magnetocrystalline anisotropy K. As shown in Figure 6c, the X = Ir case shows a small negative mixing enthalpy, indicating that Mn2PtGa and Mn2IrGa are likely to be miscible in the tetragonal inverse Heusler structure at all temperatures and compositions. The X = Au, Ni alloys have a positive mixing enthalpy, indicating that these compositions form miscibility gaps. Figure 6d shows the binodal and spinodal regions of these miscibility gaps, assuming ideal solution entropy for the X sublattice. The compositions of interest z ≈ 0.1 − 0.2 are accessible in both the X = Au, Ni spaces but require a relatively high processing temperature of 800-900 °C for initial mixing. These alloys can then be annealed to induce ordering at lower temperatures, as they resist spinodal decomposition above ≈ 600 °C. Thus, the Mn2(Pt,Ir)Ga system is readily miscible and synthetically limited primarily by the large mismatch in the melting temperatures of Ga-rich and Ir-rich precursors. In contrast, synthesizing the Mn2(Pt,Au)Ga and Mn2(Pt,Ni)Ga alloys is likely to require careful optimization of the processing temperature to form the ordered inverse Heusler structure while suppressing phase separation into the ternary endpoints.
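For orientation, the spinodal part of the miscibility-gap construction described above can be reproduced with a single-parameter regular-solution model, as sketched below; the interaction parameter is a placeholder standing in for the DFT-CE mixing enthalpy, so the temperatures printed are purely illustrative.

```python
import numpy as np

# Sketch: spinodal estimate from a single-parameter regular-solution model with
# ideal mixing entropy on the X sublattice,
#   G(z) = Omega*z*(1-z) + kB*T*[z*ln z + (1-z)*ln(1-z)],
# whose spinodal (d2G/dz2 = 0) gives T_s(z) = 2*Omega*z*(1-z)/kB.
kB = 8.617e-5             # eV/K
Omega = 0.12              # regular-solution parameter (eV per X site), assumed

z = np.linspace(0.01, 0.99, 99)
T_spinodal = 2 * Omega * z * (1 - z) / kB

zc = 0.15                 # composition of interest
print(f"spinodal temperature at z = {zc}: "
      f"{2 * Omega * zc * (1 - zc) / kB - 273.15:.0f} °C")
print(f"miscibility-gap critical temperature: {T_spinodal.max() - 273.15:.0f} °C")
```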
DISCUSSION
We have surveyed the magnetic phase space of tetragonal inverse Heusler alloys, focusing on controlling the stability of long-range spin textures such as antiskyrmions. By constructing solid solutions between endpoints with the same short-range spin structure, we are able to tune the effective Dzyaloshinskii-Moriya interaction and magnetocrystalline anisotropy in the alloy to vary the long-range magnetic structure. Specific compositions of this solid solution which place the magnetic interactions near a magnetic phase boundary maximize the magnetoelastic coupling of the material, as here mechanically-induced perturbations can actuate a magnetic phase transition.
We demonstrate the power of this design principle by identifying the Mn2Pt1−zXzGa alloy with X = Au, Ir, Ni as a candidate for realizing chemically- and mechanically-tunable antiskyrmions. In this material, we predict that moderate levels of doping (z ≈ 0.1 − 0.2) can induce numerous magnetic phase transitions, and can couple antiskyrmion stability to small elastic strains at several critical values of z. The specific doping levels where these phase transitions occur are sensitive to the precise evolution of the coarse-grained interaction parameters with composition and short-range order, which we estimate with several state-of-the-art computational methods. However, independent of these parametrizations, as long as the alloy forms a true solid solution and connects compounds lying on opposite sides of a magnetic phase boundary, a critical value of z is guaranteed to exist. This fact suggests that tunable magnetic alloys can be designed even without detailed knowledge of their magnetic interaction parameters. As long as candidate alloy endpoints can be assigned to distinct regions of the magnetic phase diagram shown in Figure 4, a critical composition for realizing the magnetic phase transition and large magnetoelastic coupling is guaranteed to exist.
The primary difficulty with implementing this design principle for tunable magnetism is ensuring that the alloy maintains the desired crystal structure and chemical order at intermediate compositions. We have assumed that the Mn2XY tetragonal inverse Heuslers maintain the structure shown in Figure 2a, with negligible mixing between the four sublattices. While small amounts of intermixing between the sublattices will slightly alter the effective magnetic interactions and would not affect our broad conclusions [47], many Mn2XY compositions are susceptible to substantial disorder and require optimized processing to induce ordering. For example, mixing between the Mn and X sublattices creates an inversion center in the material and eliminates the Dzyaloshinskii-Moriya interaction that drives the formation of antiskyrmions in this system. Chemical disorder may also suppress the martensitic transition into the tetragonal phase that is necessary for D and K to be non-zero. We have identified Mn2XGa for X = Ir, Pt, Rh, Ru, Ni and Mn2XSn for X = Ru, Rh as the compositions most likely to form the correct structure and chemical order after annealing at moderate temperature, as they have a large driving force to order and a minimal driving force to decompose. However, experimental observations [27,35], the apparent order observed in Mn2PtIn and Mn2IrSn [37], and the disorder reported in Mn2RhGa, Mn2RuGa and Mn2RuSn [30] suggest that other processes may need to be considered. Ultimately, a substantially more detailed understanding of the synthesis process is necessary to quantitatively evaluate the synthesizability of these structures and the feasibility of controlling their chemical order [52,53].

CONCLUSION

We have reported a systematic first-principles derivation of tunable magnetic order in the family of Mn2XY tetragonal inverse Heusler alloys, focusing on designing a robust coupling between room-temperature antiskyrmion stability and elastic deformation. To do so, we first constructed a universal phase diagram for the lattice shared by all tetragonal inverse Heuslers, focusing on the long-range modulation of the common ferrimagnetic spin structure. We characterized the magnetic behavior of all known stable compounds in this space and identified combinations which, when alloyed, may produce magnetic phase transitions as a function of chemical composition and mechanical deformation. Finally, we performed an in-depth characterization of the magnetic and chemical behavior of Mn2Pt1−zXzGa with X = Au, Ir, Ni to demonstrate that for z ≈ 0.1 − 0.2, this family of alloys can transition between all possible long-range equilibrium spin textures, including antiskyrmions, helices and conical phases. At several critical compositions, these magnetic phase transitions may be driven by elastic strain, suggesting that this alloy may exhibit giant magnetoelastic coupling and serve as a mechanical actuator for the formation of complex magnetic order.
METHODS
Electronic structure calculations were performed with the Vienna Ab-Initio Simulation Package (VASP) [54] using the Projector-Augmented Wave method [55]. All magnetic interactions (Figure 4, Figure 6) were determined using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [56], accounting for spin-orbit coupling. Following previously reported benchmarks, a dense reciprocal-space mesh of 400 k-points per Å−3 was used [21,31,43], making sure that all magnetic calculations of the same chemistry and supercell used exactly the same k-point mesh [14,24] and converging the total energy to 10 −7 eV.
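As a rough illustration of how a reciprocal-space density of 400 k-points per Å−3 translates into a concrete mesh, the short Python sketch below estimates mesh dimensions from a set of lattice vectors. The helper function name, rounding scheme and example lattice constants are our own illustrative assumptions, not the authors' actual workflow.

```python
import numpy as np

def kpoint_mesh(lattice, density=400.0):
    """Estimate a k-mesh giving ~`density` k-points per cubic Angstrom of
    reciprocal space (hypothetical helper, not the authors' script)."""
    lattice = np.asarray(lattice, dtype=float)       # rows = a1, a2, a3 in Angstrom
    recip = 2.0 * np.pi * np.linalg.inv(lattice).T   # reciprocal lattice vectors (rows)
    recip_vol = abs(np.linalg.det(recip))            # reciprocal-cell volume, A^-3
    n_total = density * recip_vol                    # target total number of k-points
    b_norms = np.linalg.norm(recip, axis=1)
    # distribute points proportionally to the length of each reciprocal vector
    scale = (n_total / np.prod(b_norms)) ** (1.0 / 3.0)
    return tuple(np.maximum(1, np.rint(scale * b_norms).astype(int)))

# Example: a hypothetical tetragonal inverse-Heusler cell with a ~ 3.9 A, c ~ 7.2 A
print(kpoint_mesh([[3.9, 0, 0], [0, 3.9, 0], [0, 0, 7.2]]))   # e.g. (12, 12, 6)
```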
The relative stabilities of the ordered and disordered phases (Figure 5) were determined using the same computational parameters, but neglecting spin-orbit coupling. Disordered phases were modeled using special quasi-random structure (SQS) [46] representations of the common L2 1b (Mn(1)/X) and BiF 3 (Mn(1)/Mn(2)/X) disorder types in these systems [30], where each SQS representation is relaxed assuming a ferrimagnetic spin configuration. To compute global chemical stability within the Mn-X-Y chemical spaces (Figure 5), we rely on structures reported in the ICSD [57], Materials Project [58], and OQMD [59] databases, with energies computed using the SCAN exchange-correlation functional [60], as we found that this functional uniquely reproduces the low-T behavior of the known binary phase diagrams and avoids previously reported pathological behavior in e.g. the Pt-based binaries [61]. These chemical stability calculations are converged to 10 −5 eV in total energy and 0.02 eV/Å in forces, and are optimized over likely collinear ferromagnetic and antiferromagnetic configurations for all phases. The equilibrium phases used to determine formation energies are given in Supplementary Data 5.
Magnetic Hamiltonians were obtained following previously described methods for generating a complete basis for quasi-classical spin interactions within a cluster expansion formalism [24,33,45,62]. Briefly, for each symmetrically-distinct group of magnetic sites, we construct interaction basis functions consisting of symmetrized products of spherical harmonics, e.g., for a pair of spins $\hat{e}_i$, $\hat{e}_j$,

$$\Phi^{LM}_{l_1 l_2}(\hat{e}_i, \hat{e}_j) = \sum_{m_1, m_2} \langle l_1 m_1\, l_2 m_2 | L M \rangle\, Y_{l_1 m_1}(\hat{e}_i)\, Y_{l_2 m_2}(\hat{e}_j),$$

where $\langle l_1 m_1\, l_2 m_2 | L M \rangle$ are Clebsch-Gordan coefficients. The L = 0 terms correspond to exchange interactions, L = 1 terms correspond to Dzyaloshinskii-Moriya couplings, and L = 2, 4, ... terms correspond to magnetocrystalline anisotropies [24]. Here, we consider L = 0 (exchange) two-spin interactions for the first, second and third nearest-neighbor interactions shown in Figure 2b. L = 1 (Dzyaloshinskii-Moriya) terms are included for the nearest-neighbor interaction (J 1 pair in Figure 2a). L = 2, 4, ... terms are included as an average single-site anisotropy summed over the Mn(1) and Mn(2) sublattices. A full listing of these basis functions is given in Supplementary Data 1.
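The coupled pair basis function can be illustrated with a short Python sketch that sums Clebsch-Gordan-weighted products of spherical harmonics. This is only a schematic of the quoted formula: the function name, the evaluation on explicit spin directions and the example values are our own illustrative choices, not the authors' production code.

```python
import numpy as np
from scipy.special import sph_harm
from sympy.physics.quantum.cg import CG

def pair_basis(l1, l2, L, M, theta1, phi1, theta2, phi2):
    """Coupled pair basis Phi^{LM}_{l1 l2}(s_i, s_j) =
    sum_{m1,m2} <l1 m1 l2 m2 | L M> Y_{l1 m1}(s_i) Y_{l2 m2}(s_j)."""
    total = 0.0 + 0.0j
    for m1 in range(-l1, l1 + 1):
        m2 = M - m1                      # CG coefficient vanishes unless m1 + m2 = M
        if abs(m2) > l2:
            continue
        cg = float(CG(l1, m1, l2, m2, L, M).doit())
        # scipy convention: sph_harm(order m, degree l, azimuth, polar angle)
        total += cg * sph_harm(m1, l1, phi1, theta1) * sph_harm(m2, l2, phi2, theta2)
    return total

# Example: the L = 0 (exchange-like) combination of two l = 1 harmonics for spins
# along +z and +x; it is proportional to s_i . s_j, hence ~0 for perpendicular spins.
print(pair_basis(1, 1, 0, 0, 0.0, 0.0, np.pi / 2, 0.0))
```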
We parametrize this Hamiltonian to reproduce the energies obtained from DFT. We group the basis functions by their L-value and fit these groups independently to maximally cancel out numerical noise in the DFT calculations: (1) we fit the L = 0 interactions to symmetrically distinct collinear spin configurations, (2) the L = 1 interactions to differences in energy between right- and left-handed helical superstructures of the local ferrimagnetic spin structure, and (3) the L = 2, 4, ... interactions to the energy associated with rotating the ground-state ferrimagnetic spin structure with respect to the crystal axes. Finally, we fit the coarse-grained magnetic parameters A and D in the low-T limit to the energy of spin helix configurations near the equilibrium wavelength implied by the balance of Dzyaloshinskii-Moriya and exchange forces, where the spin helix energies are evaluated using the parametrized atomistic cluster expansion.
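A minimal sketch of step (1), fitting the L = 0 exchange constants to collinear DFT energies by linear least squares, is shown below. The spin-pair correlation counts and energies are placeholders invented for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical data: each row holds a constant column plus, per formula unit, the
# sum of s_i . s_j (= +/-1 per pair) over the J1, J2, J3 shells for one collinear
# spin configuration. The energies (eV/f.u.) are placeholders only.
pair_sums = np.array([
    [1.0,  4.0, -2.0,  4.0],   # ferrimagnetic reference
    [1.0, -4.0,  2.0,  4.0],   # one Mn sublattice flipped
    [1.0,  4.0,  2.0, -4.0],   # another collinear excitation
    [1.0,  0.0, -2.0, -4.0],
])
energies = np.array([-0.512, -0.431, -0.466, -0.402])

# Solve E = E0 - sum_n J_n * (pair spin-correlation sum) in the least-squares sense.
coeffs, *_ = np.linalg.lstsq(pair_sums, energies, rcond=None)
E0, J = coeffs[0], -coeffs[1:]
print("E0 =", E0, "eV;  J1, J2, J3 =", J, "eV")
```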
Configurational cluster expansions for the total energy and magnetocrystalline anisotropy (Figure 6a-c) were constructed and parametrized following standard techniques [45], including 2-, 3-, and 4-body interactions. Special quasi-random structures (SQS) [46] based on these cluster expansions were obtained by Monte Carlo optimization targeting the correlations observed in a random alloy at the desired composition within a 3×3×2 supercell of the conventional cell shown in Figure 2a.
To determine the finite-T phase diagram (Figure 3c) as well as to identify the ground states of the magnetic Hamiltonian (Figure 3a,b), we rely on auxiliary-spin dynamics Hamiltonian Monte Carlo [24,63] with the No-U-Turn Sampling technique [64], as well as simulated annealing and conjugate-gradient optimization. The Monte Carlo runs sample 1,000 and 10,000 independent configurations for equilibration and production respectively, where the time between independent samples is estimated from the decay rate of the energy autocorrelation function. Finite-T runs are performed for an equilibrium helical wavelength equal to 24 unit cells, using a 24×42×3 supercell of the conventional structure, approximately commensurate with a hexagonal antiskyrmion lattice.
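For readers unfamiliar with these techniques, the sketch below shows a generic single-spin Metropolis simulated-annealing loop for a toy classical spin chain with exchange, a Dzyaloshinskii-Moriya-like term and uniaxial anisotropy. It is a stand-in for intuition only: the lattice, coupling values and cooling schedule are invented, and it does not reproduce the auxiliary-spin Hamiltonian Monte Carlo and NUTS machinery actually used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vector():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def site_energy(i, s_i, spins, J, D, K, n):
    """Energy terms involving site i of a periodic 1D chain: exchange J,
    a z-axis Dzyaloshinskii-Moriya-like term D and uniaxial anisotropy K."""
    left, right = spins[(i - 1) % n], spins[(i + 1) % n]
    e = -J * (s_i @ left + s_i @ right)
    e -= D * (np.cross(left, s_i)[2] + np.cross(s_i, right)[2])
    e -= K * s_i[2] ** 2
    return e

n, J, D, K = 64, 1.0, 0.5, 0.05
spins = np.array([random_unit_vector() for _ in range(n)])

for T in np.geomspace(2.0, 1e-3, 40):          # geometric cooling schedule
    for _ in range(20 * n):                    # single-spin Metropolis updates
        i = rng.integers(n)
        new = random_unit_vector()
        dE = site_energy(i, new, spins, J, D, K, n) - site_energy(i, spins[i], spins, J, D, K, n)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            spins[i] = new

# For this toy chain the annealed state is a spin spiral whose pitch is set roughly
# by arctan(D/J); compare the mean nearest-neighbour angle against that estimate.
angles = [np.arccos(np.clip(spins[i] @ spins[(i + 1) % n], -1.0, 1.0)) for i in range(n)]
print(f"mean angle = {np.mean(angles):.3f} rad, arctan(D/J) = {np.arctan(D / J):.3f} rad")
```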
The Relationships between Psychological Well-Being, Emotions and Coping in COVID-19 Environment: The Gender Aspect for Postgraduate Students
Background: Postgraduate students were exposed to the Coronavirus pandemic, and their study process changed from face-to-face to online. The purpose of this study was to analyze the impact of gender differences on emotions, coping strategies and psychological well-being (PWB) in the environment of the Coronavirus pandemic second wave (11 July 2020–30 June 2021). Methods: the Ryff scale, the Multidimensional Emotion Questionnaire (MEQ), and the brief COPE scale were used. The participants consisted of postgraduate students (74 female and 54 male). The study was conducted from 21 June 2022 to 28 June 2022. Results: Postgraduate students' overall PWB ratings did not differ significantly by gender. However, the individual components of this construct were evaluated significantly differently by gender. Females were more likely to feel negative emotions and had a harder time regulating these emotions than males. Female students were less likely than males to use problem-focused and avoidant-focused coping strategies. Conclusions: Postgraduate females were more affected than males by the Coronavirus pandemic. Females' PWB was more concerned with emotions than males'. Females were less likely than males to use problem-focused coping strategies.
Introduction
Psychological well-being can be defined as a multidimensional construct that includes six components: "self-acceptance, positive relation with others, autonomy, environmental mastery, purpose in life and personal growth" [1] (pp. 1070-1071).
Scholars have defined the term coping as "the cognitions and behaviors, adopted by the individual following the recognition of a stressful encounter, that are in some way designed to deal with that encounter or its consequences" [2] (p. 7).
There is no common agreement on the definition of the term emotions [3]. However, many researchers agree that emotions involve a limited number of components and characteristics [4]. In this study, we will use the concept of the emotions construct and the totality of components, as presented by Klonsky et al. [5]. Thus, the construct of emotions includes ten discrete emotions, their frequency, intensity, and duration, emotion regulation, and the idea that emotions are subjectively and consciously perceived [5].
The COVID-19 pandemic changed many students' living circumstances: contact with friends, overall substance use, and physical activity were decreased, but levels of depression, academic stress, and dissatisfaction with studies were increased, and symptoms of depression were significantly more pronounced in women [6]. Male students were less fortunate, with lower levels of psychological well-being compared to female students [7].
In the context of the Coronavirus pandemic, improved psychological well-being is influenced by a higher perception of meaningful living, and this differs in terms of gender [7]. Students with more work (business activity) and life experience after the COVID-19 pandemic reported lower levels of well-being and higher levels of depression and anxiety compared to pre-pandemic levels, but higher levels compared to all students [8].
The results of a previous study showed that females had poorer overall health perceptions, lower quality of life, more stress, and worse sleep quality than male students [9]. It has been revealed that coping strategies significantly predict both positive and negative emotions. Adaptive coping strategies help to maintain positive emotions and regulate denial of emotions [10,11].
Active coping strategies and family support appear among undergraduates as protective factors against a stressful environment, but passive coping strategies can exacerbate psychological problems for subjects [12].
During the COVID-19 pandemic, physical activity decreased, and daily habits and diet changed; this may have had a negative impact on psychological well-being, and female students rated these indicators significantly higher than male students [13].
The results of a previous study showed that two-thirds of students had sufficient knowledge of COVID-19 to be aware of the preventive measures associated with the pandemic but indicated that the COVID-19 outbreak affected their social, mental, and psychological well-being, and females were more affected than males [14].
The previous studies revealed positive associations between COVID-19-related difficulties and decreased levels of perceived coping with the pandemic. Coping strategies were also influenced by media coverage of the pandemic, and denial and substance use were significantly associated with poor communication, poor time planning, and disrupted studies during the Coronavirus pandemic [15,16].
In recent years, undergraduate students were more likely to use instrumental and emotional support and coping strategies than lower-year students. Their psychological well-being was at a higher level, and this was related to coping strategies such as positive reframing and humor [17].
In critical situations, a person can concentrate his or her mental and physical powers and balance challenging circumstances, and this balance is not easily disturbed by external events [18]. Achieving such a balance helps to combat the effects of a pandemic. Equilibrium is positively related to the components of the psychological well-being construct, such as autonomy, personal growth, purpose in life, self-acceptance, positive relations, and environmental mastery [19].
The previous study confirmed a negative association between COVID-19 fears and levels of psychological well-being. Students perceived that social support for well-being is very important in combating the effects of the Coronavirus pandemic [20].
It was found in previous research that, during the pandemic, students most commonly used the following coping strategies: acceptance, active coping, and physical activity. Female and graduate students used more coping strategies than male and bachelor's students. Coping strategies were significantly correlated with psychological well-being [21].
However, other researchers' results do not sufficiently reveal the emotional expression of postgraduate students, the peculiarities of their coping strategies, and their psychological well-being during the Coronavirus pandemic.
The purpose of the present study was to analyze the impact that gender differences have on emotions, coping strategies, and the psychological well-being of postgraduate students in the Coronavirus pandemic environment.
Hypothesis 1 (H1).
Postgraduate female students are more affected by the stressors associated with the Coronavirus pandemic, but their level of psychological well-being is higher than that of male students.
Participants and Procedures
The participants for this study were selected using a purposive sampling method from postgraduate students of social science study programs. The sample consisted of 74 female and 54 male (a total of 128) postgraduate students after the end of lockdown (after 30 June 2021) from three state universities operating in the city of Kaunas, Republic of Lithuania. All students participated in the study voluntarily, with no financial incentive, and they were informed of their right to terminate their participation in this study at any time. The research was conducted following the principles of reliability, honesty, respect, and accountability. The Ethics Committee of Social Sciences Research of the Lithuanian Sports University issued a permit to conduct this research, and it meets the ethical and legal requirements in Lithuania, where the research was conducted. Study participants were informed about the purpose of the study, the study organization, and the study process. Participants were also informed that their personal data would not be collected, and data collected through research instruments would be processed and stored following the requirements of the Personal Data Protection Code. Participants completed a questionnaire, which initially required a response to the statements "I agree to participate" or "I disagree to participate", and there was no time limit. When asked about completing the questionnaire, the researcher provided personal counseling, but not when choosing answers to items in the questionnaire. The questionnaire survey was carried out during classes after prior coordination of the research time with students and teachers. Before this study, there were two waves of COVID-19 in Lithuania. The first lockdown was from 16 March 2020 to 17 June 2020, and the second lockdown was from 7 November 2020 to 30 June 2021. These dates were set by the government. This survey was conducted from 21 June 2022 to 28 June 2022.
Methods
The study questionnaire consisted of a sociodemographic part (gender, age, year of study), a Ryff Scale for measuring psychological well-being [1], the multidimensional emotion questionnaire [5], and a brief COPE scale to evaluate coping strategies [22,23].
Psychological Well-Being Scale
The scale for measuring psychological well-being consists of 54 items, divided into nine items in each of the six subscales: autonomy, environmental mastery, personal growth, positive relationships with others, purpose in life, and self-acceptance.
Subjects were asked to rate the items on a Likert scale from 1 to 6: 1 = strongly disagree, 2 = disagree somewhat, 3 = disagree slightly, 4 = agree slightly, 5 = agree somewhat, and 6 = strongly agree. The higher the total score, the higher the subject's psychological well-being. The overall psychological well-being score was calculated by obtaining the average of the outcomes in the items evaluated by the participants [1].
In this study, the Cronbach alpha values of the subscales were: autonomy 0.70, environmental mastery 0.63, personal growth 0.63, purpose in life 0.64, and self-acceptance 0.71.
Multidimensional Emotion Assessment Questionnaire
The multidimensional emotion questionnaire includes five positive emotions (happy, excited, enthusiastic, proud, and inspired) and five negative emotions (sad, afraid, angry, ashamed, and anxious). The frequency, intensity, and persistence of emotions, as well as the difficulty of regulating emotions, were assessed for each of these discrete emotions on a 5-point scale: 1. How often? About once per month or less; About once per week; About once each day; 2 or 3 times each day; More than 3 times each day. 2. How intense? Very low; Low; Moderate; High; Very high. 3. How long-lasting? Less than 1 min; 1-10 min; 11-60 min; 1-4 h; Longer than 4 h. 4. How easy to regulate? Very easy; Easy; Moderate; Difficult; Very difficult.
The scores for indicators such as positive frequency, positive intensity, positive persistence, and positive overall were calculated by summing the respective estimates. The scores for indicators such as negative frequency, negative intensity, negative persistence, and negative overall were analogously calculated. Estimates of positive overall emotional regulation and negative overall emotional regulation were calculated by summing the scores for the difficulty of discrete emotional regulation. The Cronbach alpha of original scales ranged from 0.61 to 0.85 [5].
Coping Strategies Measure Scale
The brief COPE scale consists of 28 items and 14 subscales, with two items per subscale [22]. The subscales were singled out and verified by the developer (Cronbach's alpha coefficients of the original scales ranged from 0.50 to 0.82) [22]. The participants were asked to rate the items on a four-point scale: 1 = I haven't been doing this at all; 2 = A little bit; 3 = A medium amount; and 4 = I've been doing this a lot. The subscale scores were calculated as the averages of the estimates of the two appropriate items provided by the participants.
Scales can be divided into three higher-level super-scales: problem-focused coping, emotion-focused coping, and avoidance coping [24]. Scores for problem-focused coping were calculated as the average of the ratings of the eight corresponding items. Scores for emotion-focused coping were calculated as the average of the ratings for the 12 items in question. Scores for avoidant-focused coping were calculated as the average of the ratings of the eight corresponding items.
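A minimal pandas sketch of this scoring scheme is shown below: each super-scale score is the mean of its 8, 12 or 8 constituent item ratings. The column names and the assignment of items to super-scales are placeholders for illustration, not the instrument's actual item keys.

```python
import pandas as pd

# Placeholder response matrix: one row per participant, columns item_1 ... item_28,
# each rated 1-4 on the brief COPE. The item groupings below are illustrative only.
df = pd.DataFrame({f"item_{i}": [1, 3, 4, 2] for i in range(1, 29)})

# Hypothetical item indices for the three higher-order super-scales.
problem_items  = [f"item_{i}" for i in range(1, 9)]     # 8 items
emotion_items  = [f"item_{i}" for i in range(9, 21)]    # 12 items
avoidant_items = [f"item_{i}" for i in range(21, 29)]   # 8 items

scores = pd.DataFrame({
    "problem_focused":  df[problem_items].mean(axis=1),
    "emotion_focused":  df[emotion_items].mean(axis=1),
    "avoidant_focused": df[avoidant_items].mean(axis=1),
})
print(scores.round(2))
```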
Data Analysis
Research data were analyzed using IBM SPSS for Windows 22.0. The skewness and kurtosis values of all scale and subscale variables ranged between −1.558 and 1.67 (within the limiting values of −2 to 2), so the distributions of the variables did not differ significantly from the normal distribution and the Student's t-test could be used for comparisons between means [25]. Pearson correlation coefficients between the components of the psychological well-being, emotions, and coping strategies constructs were calculated. The statistical significance level was set at p < 0.05.
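The analysis itself was run in SPSS; an equivalent check in Python might look like the sketch below, where the data frame, group sizes and variable names are placeholders rather than the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder data: scale scores for 74 female and 54 male respondents.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "gender": ["F"] * 74 + ["M"] * 54,
    "pwb_total": rng.normal(3.5, 0.4, 128),
    "negative_regulation": rng.normal(2.8, 0.6, 128),
})

# Normality screening via skewness and kurtosis (limits of +/-2, as in the text).
numeric = df[["pwb_total", "negative_regulation"]]
print(numeric.apply(stats.skew))
print(numeric.apply(stats.kurtosis))

# Student's t-test for gender differences in overall psychological well-being.
f = df.loc[df.gender == "F", "pwb_total"]
m = df.loc[df.gender == "M", "pwb_total"]
print(stats.ttest_ind(f, m))

# Pearson correlation between PWB and difficulty regulating negative emotions.
print(stats.pearsonr(df["pwb_total"], df["negative_regulation"]))
```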
Results
The scores for the psychological well-being construct components are presented in Table 1. The psychological well-being construct of personal growth was rated the highest (3.71) and the environment mastery received the lowest score (3.37) from female students. Male students gave the highest score for the component of positive relations (3.77) and purpose in life was given the lowest score (2.98). There were statistically significant differences in the evaluations of all components by females and males (p < 0.05). Females rated the components of personal growth, autonomy, and purpose in life higher than males, and males rated positive relations, self-acceptance, and environmental mastery higher than females. The overall assessment of psychological well-being did not significantly differ between females and males. Female and male students rated coping strategies differently ( Table 2). Male students rated active coping, use of instrumental support, positive reframing, and planning statistically significantly higher (p < 0.05) than female students. The evaluations of the other coping strategies did not significantly differ according to gender. Male students rated problem-focused coping and avoidant-focused coping statistically significantly higher than female students. Female students rated emotion-focused coping strategy higher than male students, but the difference in scores was not significant. Male students rated avoiding coping significantly higher than female students. Higher scores indicated higher levels of these coping strategies.
Male students rated their positive emotions statistically significantly (p < 0.05) higher than female students, although the ratings for emotions such as happy and enthusiastic did not significantly differ (Table 3). Female students rated negative emotions such as afraid, angry, and anxious statistically significantly higher (p < 0.05) than male students, while male students only rated the emotion sad higher than female students. The scores for negative emotions and ashamed emotions did not significantly differ in terms of gender. Male students rated the components of positive frequency, positive intensity, and negative persistence statistically significantly higher (p < 0.05) than female students. The estimates of components such as positive persistence, negative frequency, and negative intensity did not significantly differ. Both positive and negative emotions were more difficult for female students to regulate. Female students' scores for both cases of emotion regulation were statistically significantly higher (p < 0.05) than male students'. The positive overall ratings for the emotional components were higher for male students, and the negative overall ratings were higher for female students, but neither difference was significant. Estimates given by female students regarding difficulties regulating positive emotions (Table 4) decreased in the following order: excited, enthusiastic, inspired, happy, and proud. The regulation of positive emotions such as happy, enthusiastic, and proud was more difficult for female than male students, and the difference in means was statistically significant (p < 0.05). The estimates regarding difficulties in the regulation of the positive emotions excited and inspired did not significantly differ in terms of gender. Overall regulation of positive emotions was more difficult for female than male students, and the difference was statistically significant (p < 0.05). Negative emotion scores given by female students can be arranged in descending order according to the difficulty in their regulation: sad, afraid, anxious, angry, and ashamed. It was more difficult for female students to regulate negative emotions such as afraid, anxious, angry, and ashamed, than male students, and the difference in the corresponding emotion scores was statistically significant (p < 0.05). Overall regulation of negative emotions was more difficult for female than male students, and the difference was statistically significant (p < 0.05).
The components of the psychological well-being construct and the overall psychological well-being Pearson correlation coefficients were calculated, along with the components of the emotion construct and the coping strategy construct. Statistically significant Pearson correlation coefficients were found between the psychological well-being construct and its components and emotion and coping constructs components for female and male students, which are separately presented in Table 5. The psychological well-being construct component autonomy showed a statistically significant relationship with the coping strategy of self-blame (r = −0.326) in female students, but there was no significant coefficient for male students. The component of environment mastery showed a significant correlation coefficient with the emotion happy (r = −0.238) for female students and with the coping strategies of planning (r = 0.268), and venting (r = 0.379) for male students. Personal growth was only significantly related with positive reframing (r = −0.274) for male students. The component of positive relations was statistically significantly related to the regulation of anxiety (r = 0.297) and negative emotions (r = 0.287), positive reframing (r = 0.279), and emotion-focused coping for female students, and there was no significant coefficient for male students. The component of purpose in life was significantly related to the emotions sad (r = 0.230) and anxious (r = 0.245) for female students. The psychological well-being component of self-acceptance was significantly related to the emotion inspired (r = 0.284), indicative of the regulation of anger (r = −0.229), and was significantly related to the negative emotion regulation (r = −0.248) for female students, and significantly related to the emotion inspired (r = 0.382) for male students. Overall psychological well-being was significantly positively related to the emotion inspired (r = 0.239) and regulation of enthusiasm (r = 0.244) for female students, and negatively related to the emotion happy (r = −0.279), and positively related to the coping strategy of positive reframing (r = 0.311) for male students.
Discussion
This study investigated the peculiarities of the relationships between psychological well-being, emotions and coping in postgraduate students in the COVID-19 environment in terms of gender.
The results of this study show that psychological well-being is higher in female students than male students, but the difference is insignificant (p > 0.05). The data obtained by various researchers are quite contradictory regarding the psychological well-being of females. This may be related to their multifaceted activities, as other researchers note that intensive activity can have a positive impact on psychological well-being [26].
A good level of communication is thought to help maintain a higher level of psychological well-being [27]. An effective coping strategy is vital to maintaining high levels of psychological well-being under extreme conditions [28]. However, the COVID-19 environment usually limited face-to-face communication between students.
Other researchers found that women's mental health rates were lower than those of the general population due to exposure to Coronavirus during the pandemic and many factors. The importance of social connections in combating the environmental impact of COVID-19 has been clarified and is particularly relevant for women [29,30].
The results of this study revealed that female students value their autonomy, personal growth, and purpose in life higher than male students. According to other researchers, the Coronavirus pandemic affected female students more, with an increased level of anxiety being one of the discrete emotions noticed among students during the Coronavirus pandemic, especially among female students, compared to the pre-pandemic period [31,32]. Students who felt high levels of psychological well-being were more likely to choose active coping styles, while those with lower PWB were more likely to choose avoidance-type coping strategies [33].
It has been observed that the longer the pandemic period lasted, the more frequent the depressive syndromes became, especially in the female groups [34]. Pandemic environmental factors are expected to affect student well-being and may have uncertain effects in the future [35]. Students' psychological well-being was also influenced by their social assistance, as shown in the previous study [36].
Satisfaction with well-being depends on many factors; it was found that the levels of satisfaction during the pandemic were very different when comparing the results of a survey of students in nine countries [37]. Postgraduate students indicated being satisfied with their living standards more often than undergraduates, and the weaker the effect of the Coronavirus pandemic, the more evident the students' well-being [37]. These differences can be explained by individual genetic differences that determine a more or less positive human condition, in addition to influential cultural norms [38].
It has been revealed that the psychological well-being of undergraduate students can affect their further professional success and future position in society, so it is very important to take care of students during pandemics [39].
Decreases in the level of psychological well-being in university and college students due to exposure to COVID-19 have been reported by other researchers [40,41].
Perceived life changes associated with the limited social contact due to COVID-19 resulted in increased psychosocial distress, anxiety, depressive symptoms, and lower levels of psychological well-being [42,43].
Competencies are very important for students' perceived psychological well-being, but during the pandemic lockdown, there was a transformation of studies from face-to-face to online. This required students to master new technologies for their studies, which contributed to a decline in their level of psychological well-being [28,44].
In this study, female students rated three of the six components of the psychological well-being construct higher than males, and male students rated the other three components of the construct higher (p < 0.05) than female students. However, the overall level was higher for female students, but not significantly higher than that of men. Other researchers also point out that summarizing the results of studies in 166 countries found that females only valued psychological well-being slightly higher than males [45].
The previous study showed that properly chosen coping strategies and positive emotions positively influence psychological well-being [46]. The results of this study confirmed the results of the above-mentioned study, showing that psychological well-being is related to emotions and coping strategies, as correlations of varying strength were revealed between the components of the constructs of psychological well-being, emotions, and coping strategies.
This study discloses the coping strategies used by female students and male students, providing an important complement to the pandemic knowledge, as other researchers point to the fight against stress and depressive symptoms and add to the lack of knowledge of coping strategies that are perceived to be effective in global pandemics [47]. The results of this study showed that female students rated emotion-focused coping strategies higher than male students and it was more difficult for female students to regulate both positive and negative emotions. Male students rated problem-focused coping strategies higher than female students, and it was easier for them to regulate their emotions. The results of this study revealed that female students rated the coping strategies of venting, using informational support, acceptance, and humor in descending order, while the lowest scores were given to the coping strategies of denial, substance use, self-distraction, positive reframing, and self-blame. Male students evaluated coping strategies in a different order. The top scores were active coping, use of informational support, planning, positive reframing, and venting. Thus, only the informational support coping strategy was among the top five strategies used by both female and male students.
The previous study showed that the most common and least frequently used coping strategies, without disaggregating them by gender, were self-acceptance, planning, and emotional support strategies for the most common, and substance use, denial, behavioral disengagement coping strategies for the least common [48]. Among the problem-focused coping strategies, females and males more often used the active coping strategy, and among emotion-focused coping strategies, they most often used the venting strategy. Those using the active coping strategy were found to have higher levels of psychological well-being [49].
In this study, females and males gave the denial coping strategy the lowest score, and thus found it the least appropriate, with scores of 3.46 and 3.65, respectively. Those using the denial coping strategy may experience signs of depression [50].
In this study, male students rated problem-focused coping strategies highest, and female students rated emotion-focused coping strategies highest. The avoidant coping strategy received the lowest estimates from both groups. It has been observed that the choice of coping strategy is related to the assessment of the situation. Previous studies showed that problem-focused strategies are more commonly used in situations where factors can be controlled, and emotion-focused strategies are used when factors are difficult to manage [51].
This study found that postgraduate female and male students found it more difficult to regulate negative emotions than positive ones. Other authors suggest that the predominance of negative emotions may have been related to the restrictions imposed during the Coronavirus pandemic, including online studies, and difficulties regulating emotions can negatively affect psychological well-being [51][52][53].
According to the results of this study, despite the Coronavirus pandemic environment, postgraduate students felt positive emotions more often and more intensely than negative emotions. However, negative emotions lasted longer than positive ones, especially for males. All components of the emotions and coping constructs showed stronger or weaker relationships with psychological well-being, but the correlation coefficients were of different strengths for females and males. Statistically significant relationships were found with the emotion inspired and the regulation of enthusiasm for female students, and for male students, there was a statistically significant relationship between the emotion happy and the coping strategy of positive reframing.
The results of this study partially confirmed the hypothesis. Female students rated their psychological well-being as being higher than that of male students, but not significantly. Female students used the emotion-focused coping strategy more often than males. Negative emotions were rated higher by female than male students, and female students also found it more difficult to regulate negative emotions.
Limitations of This Study
One of the limitations of this study was that it was a cross-sectional study, and it was not possible to compare the data from this study with data from the pre-pandemic period. Although the lockdown had already been lifted, some restrictions remained, so the number of subjects was limited and the results of the study cannot be generalized to all postgraduate students, let alone to all students nationwide. Another limitation of the survey results is the nature of the data collection instruments, as the respondents themselves completed the questionnaire and scales. The study needed to be completed as soon as possible after the lockdown, as most people tend to forget about negative influences quickly and have a more positive view of the past. A third limitation of the study is the small sample size.
The strength of this study, from the researchers' perspective, lies in the fact that all components of the constructs of psychological well-being, emotions, and coping strategies were evaluated separately from a gender perspective.
Future Research
Future studies could examine how people's perceptions of the effects of the Coronavirus pandemic change over time, especially as new waves of COVID-19 strains are expected. Students will have gained experience in online study and living in a restricted social network. It would also be appropriate to extend the study to university and college students.
Conclusions
The effects of the Coronavirus pandemic on female and male postgraduate students differed according to gender. Assessments of the overall level of psychological well-being differed insignificantly between female and male students, but assessments of the components of this construct significantly differed in terms of gender. Three components (autonomy, personal growth, and purpose in life) were rated higher by female students, while the other three (environmental mastery, positive relations, and self-acceptance) were rated higher by male students. During the pandemic, male students were significantly more likely to use (overestimate) all problem-focused coping strategies (active coping, use of informational support, positive reframing, and planning) than female students. However, female students were significantly less likely to use (underestimate) avoidant-focused coping strategies. Evaluations of emotion-focused coping strategies differed insignificantly in terms of gender.
Female students found it much harder to regulate negative emotions than male students. Female students' psychological well-being is more related to emotions than male students'. Male students' psychological well-being is significantly associated with the coping strategy of positive reframing.
Funding: This research received no external funding.
Institutional Review Board Statement:
The study was conducted following the Declaration of Helsinki and approved by the Ethics Committee of the Lithuanian Sports University (approval number SMTEK-118).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets collected and analyzed during the current study are available from the corresponding author on reasonable request. All survey data are password protected.
Conflicts of Interest:
The author declares no conflict of interest.
Employability of Graduates of Public and Private Management Education Institutes: A Case Study of Two Institutes in Sri Lanka
The rationale for carrying out this research project lies in the findings of a survey regarding the mismatch between qualifications and the demand for employment in the job market in Sri Lanka. This study aims to identify the degree of employability of graduates of public and private sector higher education institutes which offer Management Degree programmes. A combination of quantitative and qualitative methods has been applied to elicit data. Primary data were collected through a questionnaire survey and interviews with 121 selected graduates who had graduated from two selected education institutes, extracting the views and experiences of graduates who use Facebook and Google+ sites by applying the 'snowball sampling' method. The findings suggest that both institutes have paid attention to developing employability skills in their students, but the support for developing enterprise skills and interpersonal skills was seen as inadequate to fulfil the requirements of the job market.
Introduction
The demand for higher education has increased considerably, and about 12,000 students go abroad each year to pursue higher education as a result of the lack of placements in the state universities (ICEF Monitor 2013). Although approximately 300,000 students sit for the General Certificate of Education (Advanced Level) examination, only 27,600 received admission to universities (UGC 2016). This is not a trend that favours the economic development of Sri Lanka because this expenditure does not justify the benefits of educating students who go abroad (Nanayakkara 2010). Therefore, the government has invited the private sector to invest in education; about 100,000 students seek further studies annually through the private sector education system, and the state does not have the required funding to support this expansion (Dissanayake 2014). However, it is questionable whether these private institutes are adequately equipped to meet the expected quality standards of the country. Further, public universities are plagued by strikes, agitations, clashes and bloodshed and the regular closing of faculties and campuses (Nanayakkara 2010), and these educational institutes are often criticized for not accommodating the volume and variety of students' demands, high unit costs arising from unproductive overheads, inflexible curricula and teaching methods, and the lack of research output (Vidanapathirana, 2000).
All the higher education providers in the private sector have established affiliations with different foreign universities, and they follow their own quality assurance systems and standards. However, it is necessary to emphasise that there is no state body to supervise the private higher education institutes in Sri Lanka. Because of the need to fulfil the rapidly growing demand for high-quality higher education, both types of institutes need to upgrade their standards for creating employable graduates. Therefore, this study focuses on analysing the employability of graduates of public and private higher education institutes which offer Management Degree programmes.
Research Problem
The state universities in Sri Lanka operate under the regulations of the University Grants Commission (UGC). Therefore, these universities have to follow the academic quality standards designed by the UGC, and all academic work must be aligned to these standards. The private higher education institutes are affiliated with universities of foreign countries. Therefore, these institutes basically follow the academic quality standards given by the mother universities. Many of these affiliated universities are from the United Kingdom or Australia. Thus, the education provided by these institutes can differ as a result of the quality and nature of the education maintained by the source university. It is surprising that there is no government authority to monitor the practices of private higher education in Sri Lanka. However, both sectors produce graduates, and it is important to study the level of employability of these graduates as they come from two different environments. A study of the employability of both groups of graduates might help both sectors to identify the strengths and weaknesses of the education they provide.
Methods
The selection of the public and private institutes was based on convenience sampling, considering the availability of time and finance. The data were gathered through a survey based on the snowball sampling method, where questionnaires were distributed using Facebook and Google+ to graduates of the selected institutes. Updated databases were not available in these two institutes, which led to the selection of snowball sampling to recruit participants. Further, online interviews were conducted via Skype to study employment transition, promotions and challenges after graduation. A total of 64 graduates from the public institute and 57 graduates from the private institute participated in this study. The response rate turned out to be 99%. All graduates possessed business management degrees with specializations in finance, accounting, marketing and human resource management. All of them had graduated between 2008 and 2010. They possessed more than four years of working experience when the data were gathered. The questionnaire method was identified as a convenient method for data gathering. In order to identify qualitative factors related to employability, the researcher conducted interviews with selected graduates. Employability is influenced by dependent variables such as updated theoretical knowledge, soft skills, job-specific skills and technical skills, and by independent variables such as government actions and policies, income and social status, the economic growth of the country, and employers' expectations and attitudes towards graduates.
Conceptual Framework
Higher education involves factors such as academic knowledge, interpersonal skills development, exposure to extracurricular activities, economic growth, labour efficiency, job demand, the legal framework and political influences. These factors can be categorized as dependent and independent factors according to their content and nature of influence. This is illustrated in Figure 01. Offers and services from universities can be treated as dependent factors, while external factors like government decisions, economic growth, social level and employers' expectations are independent factors in university activities. Efficiency in higher education depends on all of these factors, along with the availability and utilization of finances. Economic growth can be expected to rise through the development of a country's higher education system. At the same time, there is a positive connection between the skilled labour market and economic growth (Chandrasiri, 2008). Therefore, the aforementioned foundation must be strong in delivering and developing skills. However, supporting the development of higher education in the public and private sectors requires identifying the common factors affecting the development of the university system. This will support the delivery of a balanced mix of academic discipline and practical skills in both public and private sector higher education in Sri Lanka.
Theoretical aspects of employability
Many authors have described employability as the personal aptitude to carry out work. This definition mainly focuses on the actual employability of people. Feyter et al. (2001) defined employability as "the number of tasks a worker can be assigned to or the amount of assistance needed in the job". Peck and Theodore (2000) defined employability as 'all individual factors that influence the future positioning in a given segment of the labour market'. Employability has economic and social consequences at the macro and micro levels, as the government is required to allocate a sufficient proportion of finance to education and to creating employment. In this case, more demand would be created for white-collar jobs and there would be less demand for blue-collar jobs. This would create a gap in the fulfilment of blue-collar job markets. Thus, the authorities have a responsibility to balance both sides in the development of employability. In this, education has a responsibility to supply and fulfil the requirements of employability.
Basically, employability is reflected in three theoretical perspectives, namely the Human Capital Theory, the Actor Theory and the Career Anchor Theory. Knowledge and skills are in great part the product of investment and, combined with other human investments, predominantly account for the productive superiority of the technically advanced countries (Schultz, 1961). The actor theory implies that individual and collective actors are predetermined re-producers of the socially constructed environment. This approach presumes that neither the economy as a driving force, institutional norm systems, nor political power structures define the identity of individuals, but these forces exert an important influence on individuals' reflexive and subjective ways of creating their identity (Silverman, 1970). Eight themes in the career anchor, namely functional competence, general managerial competence, independence, security, entrepreneurial creativity, sense of service, pure challenge and lifestyle, influence an individual's capability (Schein 1978). The human capital theory, actor theory and career anchor theory all argue that education changes an individual and prepares him/her with the skills required for the job market. However, social, economic and political conditions should provide the foundation for converting education into an investment. The Signalling theory argues that investment in education requires the provision of a sufficient return through employment.
Importance of academic and practical knowledge in employability
Education is a major measure of the development of a country. It also reflects the wealth and prosperity of a country. The main objective of university education is to produce graduates with soft and hard skills for different careers, expecting them to take part in the process of growth in the country. Universities facilitate meeting the intellectual needs of a community as regards both academic knowledge and professional training (Ariyawansa, 2008). Higher levels of education becoming more important than lower levels of education supports the notion that economic activities are becoming more knowledge-intensive over time, so that the return to knowledge-based skills is rising (Aturupane, 2012). Employability focuses on a 'rational' approach, as there is a range of factors that mediate employment, such as the type of higher education institute, mode of study, student location and mobility, subject of study, previous work experience, age, ethnicity, gender and social class (Harvey, 2001). Employability can be defined as the propensity of students to obtain a job. However, most explicit and implicit definitions elaborate this core notion in diverse ways: 1. Job type: it implies getting a graduate-level job. This may be referred to as 'fulfilling work', or as a job that 'requires graduate skills and abilities', or as a 'career-oriented' job. 2. Timing: employability is signalled by getting a job within a specified time after graduating. Attributes of recruitment: does employability signify an ability to demonstrate desired attributes at the point of recruitment? 3. Further learning: one view of employability holds that 'the degree is not the end of learning' and values graduates who are ready for further development, while other views place more weight on achievement at graduation, in addition to recognizing the importance of 'willingness to learn and continue learning'. 4. Employability skills: understood as the possession of basic 'core skills', an extended set of generic attributes, or attributes that a type of employer (discipline-linked, sector-related, company-type) expects from an employee (Flanders, 1995).
Effective employability is a combination of sufficient improvement in knowledge of the subject field, relevant experience, and the development of positive attitudes and disciplines. These should be instilled in individuals by the education institutes.
Economic growth and employability
The higher education sector is in a position to supply more skilled labour and thereby promote economic growth (Chandrasiri, 2008). A collaborative approach to higher education and an efficient labour market will lead a country to economic growth. In reality, higher education can be regarded as a high-level or specialised form of human capital that contributes significantly to economic growth. It is rightly regarded as the 'engine of development in the new world economy' (Castells 1994:15).
Social aspects of employability
Students entering a university may be immature in experience; they are exposed to a lot of freedom and independence without being prepared for the responsibility. This is coupled with a very brief orientation of only one week (sometimes there is no orientation in private colleges), which in reality is missed by many; consequently, for many students it takes time to understand the system, especially those from rural schools who are coming to the city for the first time (Bunoti, 2011). The higher education environment should facilitate a proper orientation for students. At the same time, institutes should pay attention to preparing their students for the demands of the future job market. Therefore, higher education institutes must work in collaboration with the respective industries.
Role of higher education institutes
People are unemployed because of the employment mismatch, and there are four main parties involved in this process, namely employers, candidates (graduates), the state (government) and institutions (universities). Most graduates do not have the required competencies, knowledge, skills and experience. Employers are the second party, and in their view, graduates fail to fulfil requirements and core competencies. The third party is the educational institute, and this system is criticized for not accommodating the volume and variety of students' demands, the high unit cost arising from unproductive overheads, inflexible curricula and teaching methods, and the lack of research output. The fourth party is the government, which should also be involved in finding a solution to the problem (Vidanapathirana, 2000). The objectives of university education directly expect a "leading role from graduates" at different scales in the country's development. Therefore, for development, countries rely highly on their valuable human resources, particularly the fresh intellectuals who are known as "university graduates". Hence, it can be argued that one of the universities' main obligations is to produce talented and competent graduates suitable for the development process of the country (Ariyawansa, 2008). Although there are a few established private higher education institutes, they are not labelled as 'universities', and there is little evidence that the education provided by them meets adequate standards.
Employers' perspectives
Employers reported that work-related experience is an important consideration in recruitment (Weligamage & Siengthai, 2003). Sri Lankan universities have already taken action on this issue, and most of the study programmes have included an internship component in their curricula. This programme is running successfully, and all stakeholders involved in this process benefit from it. However, Sri Lanka further requires the development of programmes such as enterprise training, leadership development, career development and interpersonal skill development (Weligamage, 2009). The purpose of having career guidance services was to improve the links between universities and industry, and thereby enhance the employability of university graduates (Chandrasiri, 2008).
One of the responsibilities of public and private higher education providers is to produce employable graduates with the intention of balancing the supply to the job market. The various definitions and approaches discussed under the employability of graduates focus on the development of enterprise skills alongside academic knowledge in relation to the demands of the job market. The required employable skills can vary and are categorized in different job segments. Developing the employment skills of a graduate is a vital need and is recognized all over the world. Therefore, the university system plays an important role in producing suitable and employable graduates to meet the requirements of the economy. There can be a system to update and upgrade degree programmes according to trends in global higher education and the emerging requirements of the job markets. Universities are places for developing new theories and new knowledge through research and development. Therefore, universities must recognize emerging trends in employment and adjust their degree programmes accordingly.
Results and Findings
The analysis was based on empirical evidence focusing on a total of 64 graduates from the public institute and 57 graduates from the private institute.
The majority of graduates stated that they were satisfied with the knowledge acquired from the public university, while dissatisfaction was shown by a relatively large proportion of graduates from the private institute (Figure 2). In addition, one-fifth of public university graduates were strongly dissatisfied with the inadequacy of the knowledge imparted. Most of the graduates were not satisfied with the job-specific skills provided by the institutes (Table 1). Based on the mean value, over 50% of graduates from both institutes agreed that the acquired knowledge had supported them in securing employment (Table 1). The majority of graduates are employed in fields related to their degree specialization. This indicates that graduates have considered their specialized field when applying for and selecting employment. About 35% of public university graduates are employed in the public sector; public sector jobs are perceived as more secure than other jobs. They also provide higher benefits, such as a pension after retirement, and require a lower work effort. Sometimes, they also carry more prestige (Rama, 2003). English language fluency was the main challenge that graduates of the public university faced (Table 02). Graduates of the private institute mentioned that the institute had not provided enough opportunities to develop practical skills through avenues such as industrial training and internships. According to Figure 3, there is a positive correlation between satisfaction with the acquired knowledge and the practical skills provided by the Management Degree programme of the Public University. It could be said that satisfaction with acquired knowledge increases with the provision of practical skills. Thus, it can be concluded that the impact of higher education must be present in both theoretical and practical knowledge development domains. In other words, these two variables are closely interrelated and an increase in one will make a considerable change in the other. Therefore, these two individual variables require equal attention in order to produce employable graduates.
According to Figure 4, there is a positive correlation between satisfaction with the knowledge and the practical skills provided by the Management Degree programme of the Private Institute. It could be said that satisfaction is positively driven by the extent of practical skills in private education. It can be identified that the satisfaction in both variables is caused by the increase in practical skills. As a result, it can be concluded that private education considers developing the students' academic knowledge together with job-specific skills. Therefore, these two individual variables require simultaneous development as well as equal attention in order to produce employable graduates. The Ministry of Higher Education had been surveying the employability of graduates annually until 2013. This survey produces data on the employability of recent graduates by discipline and university. Figure 5 shows the employability of graduates by discipline; management graduates have an employability rate of 66% across all the state universities. The data was collected on the day of convocation, which may be 6-12 months after the completion of their degrees. Therefore, it also reflects the waiting time to receive a job.
According to Table 3, there were 20 graduates who participated in the interview from the Public University and 12 of them were female graduates. Currently, 11 out of the 20 interviewees are working in public sector organizations, while 8 females worked in that sector. However, there were no graduates from the Private Institute who work for the public sector. There are 7 graduates who work in the private sector and a further 3 of them run their own companies. Internship is compulsory in the curriculum of the Management stream in the Public University. Therefore, all graduates had gone through an internship programme. Some of them had converted the same internship to permanent employment after completing the required probation period (Table 4). Others received employment while they were on internship. One of the interviewees stated that she had to wait for three months after completing her studies to secure employment. Four interviewees out of the 30 are working for their own companies. The private university graduates were not required to wait long to receive jobs as they obtain jobs through personal contacts. As Table 5 shows, three interviewees of the Private Institute started their own businesses just after their graduation, receiving investment from their parents or family members. Only one interviewee of the Public University is running his own business, which was financed through a bank loan. He had started this business as a micro enterprise and has now developed it into a small/medium scale enterprise.
Incentives for Employment -Public sector and private sector companies
The sample consists mainly of those who graduated in 2008 and had received a chance to get public sector jobs in the late 2009-2010 years.During the interview, it was found that there were incentives in both sectors which graduates looked for.
Table 6 presents the ranking order for each incentive, and it is very clear that two different views have emerged in these two sectors. Salary, career growth, qualification requirements, relevance to the field of study and family wellbeing are the most important incentives for private sector employees. However, job security, pension, family wellbeing, travelling distance, time saving and freedom are the main concerns of public sector employees. One participant from the public university mentioned that he had to shift his employment from the private sector to the public sector as there are many benefits provided for public sector workers, including the pension. It was also very easy to work close to home rather than be lodged in Colombo away from home. Figure 6 presents the Spearman rank correlation of the preference for selecting public sector employment. This is a positive correlation based on the incentives listed in Table 6. The graduates of the public university have more preference for selecting private sector employment.
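The Spearman rank correlations reported in Figures 6 and 7 can be reproduced from ranked incentive data with standard statistical tools. A minimal sketch is given below; the incentive labels and rank values are invented placeholders, not the survey data of this study.

```python
# Illustrative sketch: Spearman rank correlation between two rankings of
# job incentives. The labels and ranks below are invented placeholders,
# not the data reported in the paper.
from scipy.stats import spearmanr

incentives = ["salary", "career growth", "job security", "pension",
              "family wellbeing", "travelling distance"]
rank_public = [5, 6, 1, 2, 3, 4]    # hypothetical ranking by public-sector graduates
rank_private = [1, 2, 5, 6, 3, 4]   # hypothetical ranking by private-sector graduates

rho, p_value = spearmanr(rank_public, rank_private)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```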
The results of identifying the preference for working in the private sector are presented in Figure 7. This indicates a negative correlation with the ranked incentives presented in Table 6. These graduates expect a high salary and quick career growth by working in the private sector. One of the participants from the public university said that although she was entitled to a government offer, she declined it as she wanted to continue in the private sector considering the salary package and the relevance of the degree for further education. While the majority of graduates stated their satisfaction with the knowledge acquired from the Public University, dissatisfaction was stated by a greater number of graduates from the Private Institute. Most of the graduates were not satisfied with the practical and job-specific skills provided by both institutes. Many graduates from both institutes, however, agreed that the acquired knowledge had supported them in their employment.
Conclusion
This study highlights the major factors affecting the employability of graduates of public and private higher education, especially in the Management stream. Based on the results of this research relating to two institutes, it was observed that academic knowledge and soft, practical and technical skill development are the major factors that prepare an undergraduate for future employment. These results also indicate that providing academic knowledge alone is not sufficient for effective employability of graduates. Findings have shown the dissatisfaction of graduates towards their academic programmes as inadequate in academic and practical skills development. It appears that higher education institutes must have a proper combination of academic knowledge and the practical skills development expected by employers. As Weligamage and Siengthai mentioned in 2003, knowledge, skills, and talent will be crucial factors for growth in the future, while innovation and willingness to change will be driving forces in higher education. Therefore, these institutes have a major responsibility to improve their academic standards.
Graduates are the future leaders of the country and they have to be ready for the modern changes in industry. There should be a system set up for undergraduates to engage in industrial activities during their time of study and to help create relationships and networks with industries. Therefore, the nexus between universities and corporate entities needs nourishing. The study shows that public university graduates prefer to work for the public sector. However, this preference may be used to locate them in their home towns to develop those areas and facilitate them to strengthen opportunities for local businesses. The objectives of university education directly emphasize a "leading role from graduates" in different scales for the country's development (Ariyawansa, 2012). Public universities have to respond to the changing preferences and interests of their students regarding working for the private sector. The findings of this study may have significant influence on planning and strategising higher education in Sri Lanka; they will help both sectors to understand how they need to develop and upgrade themselves in order to produce employable graduates.
Implications for Future Research
This study provides an example where there is some convincing evidence to investigate further the requirements for developing the higher education sector of Sri Lanka. The results show the factors affecting the employability of graduates in the Management stream based on two private and public institutes. Further research may be justified to investigate the impact of applying these factors in developing employability skills in both undergraduate programmes. This study highlights only the employability of management graduates; therefore, it can be expanded further to investigate the employability of other streams in higher education. The results of this study can enhance the development of academic standards in both public and private higher education institutes in Sri Lanka.
Figure 2. Data distribution showing degree of satisfaction with the acquired knowledge
Figure 6. Spearman rank correlation of incentives in working in the public sector - Public University graduates
Figure 7. Spearman rank correlation of incentives in working in the private sector - Public University graduates
Table 1. Analysis on the theoretical knowledge provided by the university
Table 2. Data distribution of challenges faced in the first job
Table 4. Waiting time to get employment
Table 5. Data analysis on self-employment (I = Interviewee)
Table 6. Ranking of incentives in working in the private/public sector
Image reconstruction of fluorescent molecular tomography based on the tree structured Schur complement decomposition
Background The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to improve the ill-posedness of the inverse problem. Methods The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function to tackle the ill-posed nature of the inverse problem. Results Simulation results demonstrate that the strategy of the tree structured Schur complement decomposition obviously outperforms the previous methods, such as the conventional Conjugate-Gradient (CG) and the Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly improve ill-posedness of the inverse problem. Conclusions The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
Background
Near-infrared (NIR) light can travel several centimeters through biological tissue, and hence has the potential to qualify the molecular information by fluorochromes in tissue [1]. Recently, there has been increasing interest in the molecularly-based medical imaging method, such as fluorescent molecular tomography (FMT) [2][3][4], in which the injected fluorophore may accumulate in diseased tissue. During the imaging process, the tissue surface is illuminated with excitation light. Then, the fluorophores are excited to emit the light, which is detected as fluorescence [5]. The process of fluorescent light generation and transportation through tissues can be described by a forward model, so that the surface measurements can be predicted on the basis of a guess of the system parameters and the given source positions. To reconstruct an image, it is necessary to calculate the internal optical and fluorescent properties with the given measured data and sources [6].
One of the major challenges in the reconstruction of FMT is its high computational complexity resulting from extremely large-scale matrix manipulations. Generally, iterative solution approaches, such as the CG method [7] and the Gauss-Newton (GN) method [8], are more efficient than direct solution approaches. Additionally, iterative methods based on a reduced system can be more efficient than those based on the global system. One such system is the Schur complement system, which was first used by Haynsworth [9]. The condition number of the Schur complement of a matrix is never greater than that of the given matrix, and hence the convergence properties of the iterative solution of linear systems can be significantly improved [7,10]. In this paper, we propose to adapt this idea for the FMT reconstruction. The most important innovation of our method lies in its tree-structured level-by-level decomposition strategy, where the decompositions at each level are performed in two ways. This strategy is quite different from that in [10], where only one component of the global solution is derived in the Schur complement system. The advantages of our method are obvious because a further improvement in the reconstruction accuracy and speed can be achieved with level-by-level Schur complement decomposition. Another contribution of this paper is that we propose a modified spatially variant regularization method incorporating the objective function to tackle the ill-posed nature of the inverse problem.
Forward Model and Finite Element Formulation
FMT acquisitions are obtained through a two-step image formation model [11]. In the first step, sources at several locations are used to illuminate the tissue. This step, in the frequency domain, is driven by the diffusion equation [12]

−∇·(D_x(r)∇Φ_x(r,ω)) + k_x(r)Φ_x(r,ω) = S_x(r,ω)  (1)

where the subscript x denotes the excitation wavelength; ∇ is the gradient operator; S_x (W/cm³) is the excitation light source; Φ_x (W/cm²), D_x (cm), and k_x (cm⁻¹) represent the photon fluence, the diffusion coefficient, and the decay coefficient, respectively; Ω denotes the bounded domain of reconstruction.
In the second step, the fluorophores are excited to emit the fluorescence. The second step can be modelled by a second diffusion equation

−∇·(D_m(r)∇Φ_m(r,ω)) + k_m(r)Φ_m(r,ω) = S_m(r,ω)  (2)

where the subscript m indicates the emission wavelength and ω (rad/s) denotes the modulation frequency of the source. S_m is the emission light source, driven by the excitation fluence through S_m = Φ_x qμ_ax,mf / (1 − iωτ). The diffusion coefficient D_x,m (cm) and the decay coefficient k_x,m (cm⁻¹) are defined, respectively, as

D_x,m = 1 / [3(μ_ax,mi + μ_ax,mf + μ′_sx,m)],  k_x,m = μ_ax,mi + μ_ax,mf + iω/c,

where μ_ax,mi (cm⁻¹) denotes the absorption coefficient due to endogenous chromophores; μ_ax,mf (cm⁻¹) represents the absorption coefficient due to exogenous fluorophores; μ′_sx,m (cm⁻¹) is the reduced scattering coefficient; q is the quantum efficiency of the fluorophore; τ (s) is the lifetime of fluorescence; and finally, c (cm/s) is the speed of light in the medium.
Here, Robin-type boundary conditions are implemented on the boundary ∂Ω of domain Ω to solve the above diffusion equations:

Φ_x,m + 2b_x,m D_x,m (n·∇Φ_x,m) = 0 on ∂Ω,

where n is a vector normal to the boundary ∂Ω and b_x,m is the Robin boundary coefficient.
To solve the forward problem within the finite element method (FEM) framework, the domain Ω is divided into P elements joined at N vertex nodes. The solution Φ_x,m is approximated by the piecewise linear function Φ_x,m(r) ≈ Σ_{i=1..N} Φ_x,m,i ψ_i(r), with ψ_i being basis functions [13]. Hence, equations (1) and (2) can be rewritten as the linear systems A_x,m Φ_x,m = b_x,m. The elements of the finite element matrix A_x,m can be obtained from the formula

(A_x,m)_ij = ∫_Ωh (D_x,m ∇ψ_i·∇ψ_j + k_x,m ψ_i ψ_j) dΩ + ∫_Γh (1/(2b_x,m)) ψ_i ψ_j dΓ  (11)

with Ω_h and Γ_h being the bounded domain and its boundary, respectively.
Inverse Process of FMT
The inverse process of FMT is to estimate the spatial distribution of the optical or fluorescent properties of the tissues from measurements [14]. In the discrete case, the reconstruction problem can be defined as the optimization of the objective function

E(x) = ||y − G(x)||²  (12)

where G is the forward operator, ||·|| is the L2-norm, and x and y are the calculated optical or fluorescent properties of the tissues and the detector readings, respectively.
Suppose that the objective function E attains its extremum at x + Δx. Expanding the gradient of the objective function E′ about x in a Taylor series and keeping terms up to first order leads to

E′(x + Δx) ≈ E′(x) + E″(x)Δx = 0.  (13)

Equation (13) can be further written as [15]

J^T Δy = (J^T J + H)Δx  (14)

where T denotes the transpose and Δy = y − G(x) is the residual between the measurements and the predicted data. The Jacobian matrix J is a measure of the rate of change in measurement with respect to the optical parameters. It describes the influence of a voxel on a detector reading. H is the Hessian matrix, whose entries are the second-order partial derivatives of the function with respect to all unknown parameters, describing the local curvature of the function with respect to many variables [16].
Introducing the Tikhonov regularization term to tackle the ill-posedness of the inverse problem and ignoring the Hessian matrix, the solution to the linearized reconstruction problem can be described as follows:

Δx = (J^T J + λI)^(−1) J^T Δy  (15)

where λ is a regularization parameter, which can be determined by the Morozov discrepancy principle [17], and I is an identity matrix.
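As a point of reference, the Tikhonov-regularized update of equation (15) amounts to solving a damped normal-equation system. The following minimal sketch is illustrative only; the Jacobian, residual, and sizes are random stand-ins for the quantities produced by the forward model, not values from the paper.

```python
# Illustrative sketch of the Tikhonov-regularized update of equation (15):
# delta_x = (J^T J + lambda * I)^(-1) J^T delta_y.
# J and delta_y are random stand-ins for the Jacobian and the data residual.
import numpy as np

rng = np.random.default_rng(0)
M, N = 120, 300                       # measurements, mesh nodes (example sizes)
J = rng.normal(size=(M, N))           # Jacobian of the forward operator
delta_y = rng.normal(size=M)          # residual between measured and predicted data
lam = 1e-2                            # regularization parameter

lhs = J.T @ J + lam * np.eye(N)
rhs = J.T @ delta_y
delta_x = np.linalg.solve(lhs, rhs)   # update of the fluorescent parameters
```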
Adaptive Regularization Scheme
The problem of image reconstruction for FMT is ill-posed [18]. The Tikhonov regularization technique, as mentioned above, is one of the major methods to reduce the ill-posedness of the problem [19]. However, one prominent difficulty of this technique lies in the determination of the regularization parameter. A generally unwanted characteristic of NIR imaging is that the resolution and contrast of the reconstructed images degrade with increasing distance from the sources and the detectors [20]. Considering the fact that the value of the regularization parameter has an important effect on the contrast and resolution of the resultant images, one strategy to address this problem is to use a spatially variant regularization parameter. Meanwhile, it can be inferred that the objective function is related to the regularization parameters [15]. During the process of minimizing the objective function, decreasing λ will speed up the convergence if the value of the objective function is decreasing; otherwise, increasing λ can enlarge the searching area (trust region). On the basis of these considerations, we propose a modified regularization method adaptive to both the spatial variations and the objective function.
Suppose that the number of measurements and the number of vertex nodes are M and N, respectively. Thus, for the matrices in equation (15), J ∈ R^(M×N), Δy ∈ R^M, Δx ∈ R^N, and I ∈ R^(N×N). To construct a spatially variant regularization framework, the inverse term (J^T J + λI)^(−1) in equation (15) is replaced with (J^T J + λ)^(−1), where λ = diag(λ_1, λ_2, ..., λ_N), which results in the following equation:

Δx = (J^T J + λ)^(−1) J^T Δy  (17)

with Δx_i (i = 1, 2, ..., N) being the components of the vector Δx. It can be easily seen that each node p_i (i = 1, 2,...,N) in the reconstructed domain is regularized by a corresponding regularization parameter λ_i (i = 1, 2,...,N). Obviously, the above-mentioned Tikhonov regularization can be regarded as a special case of equation (17) when λ_1 = λ_2 = ... = λ_N = λ.
It was pointed out in [21] that the resolution and contrast of the images decrease with an increment of the regularization parameters and vice versa. Therefore, the adaptive regularization parameter λ_i is defined in equation (18) so as to compensate for the decrease of resolution and contrast with increasing distance from the sources and detectors, where r_i is the position of node p_i, r_s and r_m respectively denote the positions of the source and detector closest to the node p_i, and c_1 and c_2 are two positive parameters determined empirically in our paper.
To make the regularization parameter adaptive to the objective function as defined in equation (12), we propose to incorporate it into the regularization as in equation (19). In equation (19), the arctan function is used to guarantee a relatively small fluctuation range of the regularization parameters and to avoid excessively large values. Obviously, regularization parameters determined from equation (19) relate to the objective function in a manner similar to that pointed out before. In such a way, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function, which accelerates the convergence.
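Because the exact expressions of equations (18) and (19) are not reproduced above, the sketch below only illustrates the two ingredients the text describes: a per-node parameter that grows with distance from the nearest source and detector, scaled by an arctan of the current objective value. The functional forms, the helper name, and the coordinate layout are assumptions chosen for illustration, not the paper's formulas.

```python
# Illustrative sketch of a spatially variant, objective-adaptive regularization
# vector. The exact formulas of equations (18)-(19) are not reproduced in the
# text, so the forms below are assumptions: lambda_i grows with the distance of
# node i from the nearest source/detector, and the arctan factor bounds its range.
import numpy as np

def adaptive_lambda(nodes, sources, detectors, objective_value, c1=0.2, c2=2.0):
    """nodes, sources, detectors: arrays of 2D node/source/detector coordinates."""
    lam = np.empty(len(nodes))
    for i, r in enumerate(nodes):
        d_src = np.min(np.linalg.norm(sources - r, axis=1))    # nearest source
        d_det = np.min(np.linalg.norm(detectors - r, axis=1))  # nearest detector
        spatial = c1 * (1.0 + c2 * (d_src + d_det))            # assumed spatial law
        lam[i] = spatial * (2.0 / np.pi) * np.arctan(objective_value)
    return lam
```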
Reconstruction Based on the Schur Complement System
As has been pointed out previously, the iterative methods based on the Schur complement system can be more efficient to solve large-scale problems. Hence, we propose to reconstruct the tomographic image of FMT with level-by-level decomposition in the Schur complement system.
For convenience of discussion, equation (16) can be further rewritten in a more compact form as

kΔx = b  (20)

where k = J^T J + λ and b = J^T Δy.
To solve the inverse problem of FMT in the Schur complement system, the solution space R^n is firstly decomposed into two subspaces U and V with dimensions m and n−m, respectively. Let [Γ Ψ] be an orthonormal basis of the solution space R^n. The basis of the m-dimensional coarse subspace U is formed by the columns of Γ ∈ R^(n×m), and the columns of Ψ ∈ R^(n×(n−m)) form the basis of the (n−m)-dimensional subspace V.
Therefore, the solution to equation (20) can be expressed with the bases of the two subspaces as

Δx = Γu + Ψv  (21)

where u and v are the projections of Δx on the subspaces U and V, respectively. Because both the condition number and the scale of the system can be reduced after Schur complement decomposition, we propose to further decompose both projections u and v level by level with the Schur complement decomposition along two paths in a tree structure, and then solve the subsystems in the Schur complement systems. Our approach is different from that proposed in [10], where only the projection v is solved in the Schur complement system. The level-by-level Schur complement decomposition is schematically illustrated in Figure 1. We derive the iterative system in the following discussions.
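As a concrete illustration of the subspace splitting in equation (21), the sketch below builds an orthonormal basis from discrete cosine vectors (the oscillatory basis adopted later in the simulations) and projects a vector onto the two subspaces. The sizes n and m are arbitrary illustrative choices, not values from the paper.

```python
# Illustrative sketch: build an orthonormal cosine basis of R^n, split it into
# a coarse part (Gamma, first m columns) and its complement (Psi), and project
# a vector onto the two subspaces. n and m are arbitrary illustrative choices.
import numpy as np
from scipy.fft import idct

n, m = 300, 40
B = idct(np.eye(n), axis=0, norm="ortho")    # orthonormal cosine basis vectors as columns
Gamma, Psi = B[:, :m], B[:, m:]

x = np.random.default_rng(1).normal(size=n)
u = Gamma.T @ x                               # coarse-subspace coefficients
v = Psi.T @ x                                 # complementary-subspace coefficients
assert np.allclose(Gamma @ u + Psi @ v, x)    # x = Gamma u + Psi v, cf. equation (21)
```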
Suppose that the subsystem at the ith level is as follows:

S_(i,j) Δx_(i,j) = b_(i,j)  (22)

where S_(i,j) is the Schur complement matrix, with the subscript (i, j) denoting the jth (j = 0, 1,..., 2^i) term at the ith (i = 0, 1,..., L) level in the tree structure illustrated in Figure 1. In particular, S_(0,0) is the global matrix k as defined in equation (20). To solve this system in the Schur complement system, equation (22) will be further decomposed at the (i+1)th level. Thus, the solution Δx_(i,j) is firstly expressed with the bases of the two subspaces as

Δx_(i,j) = Γ_(i,j) Δx_(i+1,2j−1) + Ψ_(i,j) Δx_(i+1,2j)  (23)

where Δx_(i+1,2j−1) and Δx_(i+1,2j) are the projections of Δx_(i,j) on the subspaces formed by the columns of Γ_(i,j) and Ψ_(i,j), respectively.
Substituting equation (23) into equation (22) yields

S_(i,j) [Γ_(i,j) Δx_(i+1,2j−1) + Ψ_(i,j) Δx_(i+1,2j)] = b_(i,j).  (24)
Multiplying both sides of equation (24) from the left by [Γ_(i,j) Ψ_(i,j)]^T, we can obtain

[Γ_(i,j) Ψ_(i,j)]^T S_(i,j) [Γ_(i,j) Ψ_(i,j)] [Δx_(i+1,2j−1); Δx_(i+1,2j)] = [Γ_(i,j) Ψ_(i,j)]^T b_(i,j).  (25)

Thus, equation (25) can be further rewritten into a two-by-two block system

[S_(i,j)11 S_(i,j)12; S_(i,j)21 S_(i,j)22] [Δx_(i+1,2j−1); Δx_(i+1,2j)] = [b_(i,j)1; b_(i,j)2]  (26)

where S_(i,j)11 = Γ_(i,j)^T S_(i,j) Γ_(i,j), S_(i,j)12 = Γ_(i,j)^T S_(i,j) Ψ_(i,j), S_(i,j)21 = Ψ_(i,j)^T S_(i,j) Γ_(i,j), and S_(i,j)22 = Ψ_(i,j)^T S_(i,j) Ψ_(i,j), while the two components on the right-hand side (RHS) of equation (26) are b_(i,j)1 = Γ_(i,j)^T b_(i,j) and b_(i,j)2 = Ψ_(i,j)^T b_(i,j). From equation (26), it can be seen that S_(i,j)11 and S_(i,j)22 correspond to the equations for the unknowns Δx_(i+1,2j−1) and Δx_(i+1,2j), respectively, while S_(i,j)12 and S_(i,j)21 define the coupling between these two sets, which will be eliminated in the following discussions.
Applying block Gaussian elimination to equation (26) leads to [22] a block upper-triangular system whose lower-right block

S_(i+1,2j) = S_(i,j)22 − S_(i,j)21 S_(i,j)11^(−1) S_(i,j)12

is called the Schur complement with respect to S_(i,j)11 [7]. With the corresponding right-hand side b_(i+1,2j) = b_(i,j)2 − S_(i,j)21 S_(i,j)11^(−1) b_(i,j)1, the unknown Δx_(i+1,2j) satisfies

S_(i+1,2j) Δx_(i+1,2j) = b_(i+1,2j).  (29)

It can be found that the condition number of the matrix S_(i+1,2j) is smaller than that of the matrix S_(i,j) [9]. Hence, solving the inverse problem in the Schur complement system at the (i+1)th level will be more efficient than solving it at the ith level. We herein solve equation (29) using the biconjugate gradient method [23]. Its advantage is that it does not square the condition number of the original equations [24]. Basically, the biconjugate gradient method can be used to solve large-scale systems with the fastest speed among all the generalized conjugate gradient methods in many cases [25]. The algorithm for solving equation (29) can be summarized as follows in Algorithm 1.
Input an initial guess Δx
End for
After the derivation of Δx_(i+1,2j) from equation (29) with Algorithm 1, the next task is to obtain the other component Δx_(i+1,2j−1) for the synthesis of the solution Δx_(i,j). Here, Δx_(i+1,2j−1) is also solved in the Schur complement system due to its low condition number.
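Since the listing of Algorithm 1 is only partially reproduced above, the sketch below illustrates the computations it stands for: forming the Schur complement subsystems at one level and solving each with SciPy's biconjugate gradient routine. Treating the coarse component through the complementary Schur complement is one reading of the decoupled systems (29) and (31); the function name and all matrices are generic stand-ins, not quantities from the FMT model.

```python
# Illustrative sketch of one level of the two-way Schur complement split, with
# each decoupled subsystem solved by the biconjugate gradient method.
import numpy as np
from scipy.sparse.linalg import bicg

def schur_level_solve(S, b, Gamma, Psi):
    S11, S12 = Gamma.T @ S @ Gamma, Gamma.T @ S @ Psi
    S21, S22 = Psi.T @ S @ Gamma, Psi.T @ S @ Psi
    b1, b2 = Gamma.T @ b, Psi.T @ b

    S11_inv, S22_inv = np.linalg.inv(S11), np.linalg.inv(S22)
    S_fine = S22 - S21 @ S11_inv @ S12             # Schur complement w.r.t. S11
    S_coarse = S11 - S12 @ S22_inv @ S21           # Schur complement w.r.t. S22
    x_fine, _ = bicg(S_fine, b2 - S21 @ S11_inv @ b1)      # cf. equation (29)
    x_coarse, _ = bicg(S_coarse, b1 - S12 @ S22_inv @ b2)  # cf. equation (31), as read here
    return Gamma @ x_coarse + Psi @ x_fine         # recombination via equation (23)
```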
From equation (30), we can obtain

S_(i+1,2j−1) Δx_(i+1,2j−1) = b_(i+1,2j−1).  (31)

Thus, the solution Δx_(i+1,2j−1) can be obtained in the same manner as in Algorithm 1; the only difference is that Δx_(i+1,2j), S_(i+1,2j), and b_(i+1,2j) should be replaced with Δx_(i+1,2j−1), S_(i+1,2j−1), and b_(i+1,2j−1), respectively. Solving equation (31) is computationally efficient because of the reduced condition number in the Schur complement system [7]. Moreover, such a strategy of deriving both Δx_(i+1,2j−1) and Δx_(i+1,2j) in the Schur complement system can be implemented in a parallel manner, since equations (29) and (31) are decoupled. Therefore the subsystem at the ith level as in equation (22) can be decomposed into the two linear subsystems at the (i+1)th level, i.e., the Schur complement systems of equations (29) and (31). After obtaining Δx_(i+1,2j−1) and Δx_(i+1,2j), they are then substituted into equation (23) to yield the solution Δx_(i,j) at the ith level. The whole reconstruction algorithm is summarized as follows Algorithm 2
1. Set x_0 to an initial guess;
2. x ← x_0, calculate b and k at x in equation (20) with the adaptive regularization scheme as in equation (19);
3. The global system of equation (20) is decomposed with the Schur complement system level by level, in the same manner as the decomposition of equation (22) into equations (29) and (31), to obtain the subsystem S_(i,j) Δx_(i,j) = b_(i,j) at the ith level for i = 1,..., L and j = 1,..., 2^i; the subspaces at the ith level are formed by the columns of Γ_(i,j) and Ψ_(i,j), respectively;
As mentioned before, the Schur complement system has a smaller condition number than that of the system from which it is constructed [7]. As a result, iterative methods based on the Schur complement systems can be more efficient than the methods based on the global matrix as in equation (20) due to its reduced scale and the smaller condition number. Therefore, the proposed algorithm can be expected to be more efficient than the conventional ones, as the results demonstrated in the next section.
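Viewed as a whole, the tree-structured strategy is a recursion: at each level the current system is split into two Schur complement subsystems, each of which may be split again at the next level, and the partial solutions are recombined through equation (23). The sketch below illustrates that control flow only; the identity-based split, the fixed recursion depth, and the direct solves at the leaves are simplifying assumptions, not the authors' implementation (which uses problem-specific bases and the biconjugate gradient method).

```python
# Simplified illustration of the tree-structured, level-by-level recursion.
# At each level the system S x = b is split into two decoupled Schur complement
# subsystems; below the chosen depth the subsystems are solved directly.
import numpy as np

def split_basis(n):
    m = n // 2
    I = np.eye(n)
    return I[:, :m], I[:, m:]            # assumed coarse/fine split of the basis

def tree_solve(S, b, level=0, max_level=2):
    n = S.shape[0]
    if level == max_level or n <= 4:
        return np.linalg.solve(S, b)     # solve small subsystems directly
    Gamma, Psi = split_basis(n)
    S11, S12 = Gamma.T @ S @ Gamma, Gamma.T @ S @ Psi
    S21, S22 = Psi.T @ S @ Gamma, Psi.T @ S @ Psi
    b1, b2 = Gamma.T @ b, Psi.T @ b
    S11_inv, S22_inv = np.linalg.inv(S11), np.linalg.inv(S22)
    x_fine = tree_solve(S22 - S21 @ S11_inv @ S12, b2 - S21 @ S11_inv @ b1,
                        level + 1, max_level)
    x_coarse = tree_solve(S11 - S12 @ S22_inv @ S21, b1 - S12 @ S22_inv @ b2,
                          level + 1, max_level)
    return Gamma @ x_coarse + Psi @ x_fine   # recombination, cf. equation (23)
```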
Results and Discussion
In this work, assuming that the scattering coefficients are known, we focus on the reconstruction of the absorption coefficient μ_axf. Two phantoms as illustrated in Figure 2 are used to evaluate the proposed algorithm. Figure 2(a) contains one object, and Figure 2(b) contains two objects of different shapes. Table 1 and Table 2 outline the optical and fluorescent parameters in different regions of the simulated phantoms corresponding to Figures 2(a) and 2(b), respectively. Four sources and thirty detectors are equally distributed around the circumference of the simulated phantom. The simulated forward data are obtained from equations (1) and (2), in which Gaussian noise with a signal-to-noise ratio of 10 dB is added to evaluate the noise robustness of the algorithms. The parameters c_1 and c_2 in equation (19) are set to 0.2 and 2, respectively. The initial guesses for the solutions Δx_(i+1,2j) and Δx_(i+1,2j−1) of equations (29) and (31) are set to 0. The initial value of x_0 is set to 5 mm⁻¹. The subspace created from the right singular vectors of the singular value decomposition (SVD) is optimal. Since the SVD is computationally expensive, it is expected that a subspace close to the SVD subspace will do almost as well. Thus, the choice of an oscillatory basis can be a basis created by sine or cosine functions with increasing frequency [26]. Here the discrete cosine basis is employed in the simulations. To reliably evaluate the performance of different methods for the inverse problem, the best way is to use an independent forward model, different from the one employed in the inverse problem, to generate the synthetic data [27]. Therefore, in our case, a finer mesh as shown in Figure 3 with 169 nodes and 294 triangular elements is used to generate the forward simulated data.
Table 2. Optical and fluorescent properties of the two-object phantom
It is well known that the most significant superiority of the anatomical imaging modality lies in its high spatial resolution. Hence, it will be helpful to improve the image quality and accelerate the reconstruction process if we use the anatomical image as prior information for mesh generation. The reconstructed domain is firstly uniformly discretized according to the Delaunay triangulation scheme, after which the uniform mesh is refined only for the areas with large variations of the pixel values. To simulate this idea, we employ the images shown in Figures 4(a) and 4(b) with a resolution of 100 × 100 pixels as the prior images corresponding to Figures 2(a) and 2(b), respectively. The meshes generated for the inverse problem of FMT are shown in Figure 5, including the mesh with 122 nodes and 212 triangular elements (Figure 5(a)). The reconstruction accuracy is evaluated with the mean square error, MSE = (1/N) Σ_{i=1..N} (μ_axf,i^calc − μ_axf,i^actual)², where N is the total number of nodes in the domain. The superscript calc denotes the values obtained using the reconstruction algorithms, and actual denotes the actual distribution of μ_axf which is used to generate the synthetic image data set. Table 3 lists the performance of the reconstruction algorithms in terms of MSE. It can be seen that the adaptive regularization scheme can significantly improve the quality of the reconstructed images and achieve a smaller MSE in either case. Figure 8 shows the reconstructed images of μ_axf for the one-object phantom using the different algorithms after 1, 15, and 30 iterations, respectively. After 30 iterations, the reconstructed image from the proposed algorithm has a relatively higher contrast than those obtained from the other two algorithms.
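The MSE figure of merit used in the comparisons can be computed directly from the reconstructed and true coefficient maps. A minimal sketch follows; the arrays are placeholders for the nodal values of μ_axf, not the simulation results.

```python
# Illustrative sketch of the mean square error used to compare reconstructions:
# MSE = (1/N) * sum_i (mu_calc_i - mu_actual_i)^2 over all N mesh nodes.
import numpy as np

def mse(mu_calc, mu_actual):
    mu_calc = np.asarray(mu_calc, dtype=float)     # reconstructed nodal values of mu_axf
    mu_actual = np.asarray(mu_actual, dtype=float) # true nodal values of mu_axf
    return np.mean((mu_calc - mu_actual) ** 2)
```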
Figure 9 depicts the reconstructed images of μ_axf for the two-object phantom using the different algorithms, from which it can be seen that the proposed method can reconstruct the images more accurately than the other two methods even after the first iteration. According to the third column of Figure 9, the reconstructed image quality based on our algorithm is significantly improved as compared with that based on the other two methods. We investigated how the MSE changed against the number of iterations for the different algorithms. Figure 10 shows a fast convergence of our algorithm with a lower MSE than the other two algorithms. In addition, the CG method converges more slowly than the Schur CG method and our method, which means that solving the inverse problem based on the Schur complement system is superior to solving it based on the global system. The computation time of the different algorithms was further investigated in our work to evaluate the convergence rate. Table 4 lists the computation time after 30 iterations for the different algorithms. From this table, it can be seen that the time needed for our algorithm after 30 iterations is less than that of the Schur CG method. Although the former is a little longer than the time needed for the CG method, our algorithm needs fewer than 5 iterations to achieve the precision reached by the CG method after 30 iterations. As a result, the CG method needs many more iterations to achieve a given reconstruction precision than our method. Therefore, compared with the other two methods, the proposed algorithm is more efficient and stable. To further validate the proposed algorithm for 3D reconstruction, a phantom as illustrated in Figure 11 is used for simulations. Within this phantom, a small cylindrical object is suspended. In Figure 11, the dashed curves represent the planes of measurements. Four sources and sixteen measurements are used for each plane in the simulations. The mesh for reconstructing the 3D image is shown in Figure 12, which contains 858 nodes and 3208 tetrahedral elements. Figures 13 and 14 depict the reconstructed 2D cross sections of the 3D phantom shown in Figure 11 using the Schur CG method and the proposed algorithm, respectively. Table 5 lists the performance of the above two methods for a quantitative comparison. From this table, we can conclude that our proposed algorithm can also speed up the reconstruction process and achieve high accuracy for the 3D case.
Figure 11. Simulated phantom for 3D reconstruction. The phantom of radius 10 mm and height 40 mm with a uniform background of μ_axf = 0.005 mm⁻¹ is positioned at x = 10 mm, y = 0 mm and z = 20 mm. The small cylindrical anomaly has a radius of 2 mm and height 6 mm with μ_axf = 0.01 mm⁻¹. The anomaly is positioned at x = 15 mm, y = 0 mm and z = 20 mm. The dashed curves represent the measurement planes, at z = 15 mm, z = 20 mm, z = 25 mm.
Conclusion
In this paper, we developed a novel image reconstruction method for FMT based on the tree structured Schur complement decomposition in combination with an adaptive regularization scheme. The proposed approach decomposes the global inverse problem level by level with the Schur complement decomposition, and the resultant subsystems are solved with the biconjugate gradient method. The spatially variant regularization parameter is determined adaptively according to the objective function. Simulation results demonstrate that the proposed method outperforms the previous methods, such as the CG and the Schur CG methods, in both reconstruction accuracy and speed.
Figure 13. Reconstructed images using the Schur CG method: 2D cross sections through the reconstructed 3D volume. The right-hand side corresponds to the top of the cylinder (z = 40 mm), and the left corresponds to the bottom of the cylinder (z = 0 mm), with each slice representing a 10 mm increment.
Figure 14. Reconstructed images using the proposed algorithm: 2D cross sections through the reconstructed 3D volume. The right-hand side corresponds to the top of the cylinder (z = 40 mm), and the left corresponds to the bottom of the cylinder (z = 0 mm), with each slice representing a 10 mm increment.
Transcriptome Analysis of the Chicken Follicular Theca Cells with miR-135a-5p Suppressed
As a class of transcription regulators, numerous miRNAs have been verified to participate in regulating ovarian follicular development in chickens (Gallus gallus). Previously we showed that gga-miR-135a-5p has significant differential expression between high- and low-yield chicken ovaries, and the abundance of gga-miR-135a-5p is significantly higher in follicular theca cells than in granulosa cells. However, the exact role of gga-miR-135a-5p in chicken follicular theca cells is unclear. In this study, primary chicken follicular theca cells were isolated and then transfected with gga-miR-135a-5p inhibitor. Transcriptome sequencing was performed in chicken follicular theca cells with or without transfection. Differentially expressed genes (DEGs) were analyzed using bioinformatics. A dual-luciferase reporter assay was used to verify the target relationship between gga-miR-135a-5p and predicted targets within the DEGs. Compared with the normal chicken follicular theca cells, 953 up-regulated and 1060 down-regulated genes were detected in cells with gga-miR-135a-5p inhibited. The up-regulated genes were significantly enriched in Gene Ontology terms and pathways involved in cell proliferation and differentiation. In chicken follicular theca cells, Krüppel-like factor 4 (KLF4), ATPase phospholipid transporting 8A1 (ATP8A1), and Complexin-1 (CPLX1) were significantly up-regulated when the expression of gga-miR-135a-5p was inhibited. In addition, KLF4, ATP8A1, and CPLX1 were confirmed as targets of gga-miR-135a-5p using a dual-luciferase assay in vitro. The results suggest that gga-miR-135a-5p may be involved in proliferation and differentiation in chicken ovarian follicular theca cells by targeting KLF4, ATP8A1, and CPLX1.
Keywords: Gallus gallus; ovarian theca cells; gga-miR-135a-5p; transcriptome sequencing
MicroRNAs (miRNAs) are a class of small noncoding RNAs of about 18-24 nucleotides in length (Lee et al. 1993; Bartel 2004) that function as regulators of post-transcriptional gene expression by targeting sequence-specific sites in the 3'-untranslated region (3'-UTR) of mRNA (Grosshans and Slack 2002; Ambros and Chen 2007; Krol et al. 2010). Studies indicate that miRNAs play key roles in ovarian follicular development and function, including the formation of primordial follicles, follicular recruitment and selection, follicular atresia, oocyte-cumulus cell interaction, granulosa or theca cell function, and luteinization (Hawkins and Matzuk 2010; Hasuwa et al. 2013; Kang et al. 2013; Maalouf et al. 2015; Zhang et al. 2019). Mouse miR-145 and miR-181a (Yan et al. 2012; Zhang et al. 2013), the bovine let-7 families (Salilew-Wondim et al. 2014), buffalo miR-210 (Shukla et al. 2018), porcine miR-26b (Lin et al. 2012) and chicken miR-107 (Miao et al. 2016) were all validated to be involved in granulosa cell proliferation, apoptosis and other cell functions. In ovarian theca cells, studies have shown that miRNAs also play an important role in cell function. In bovine, miR-640 and miR-526b* (Sohel et al. 2013), and bta-miR-335 (Gebremedhn et al. 2015) were proved to be expressed more abundantly in theca cells. Several predicted miRNA-target interactions in theca cells, miR-155/miR-222-ETS1, miR-199a-5p-JAG1, miR-155-MSH2 and miR-199a-5p/miR-150/miR-378-VEGFA, were putatively involved in follicular atresia (Donadeu et al. 2017). Another study showed that the abundance of MIR-221 was 66.6-fold greater (P < 0.001) in TCs than in GCs in bovine large follicles, and thecal MIR-221 expression is increased by FGF9 (Robinson et al. 2018). In sheep, northern analyses showed that the expression of miR-199a-3p, miR-125b, miR-145, miR-31, miR-503, miR-21 and miR-142-3p in theca cells was higher than that in granulosa cells (McBride et al. 2012). In women, two miRNAs, miR-92a and miR-92b, were validated to be significantly downregulated in theca cells and might be involved in the pathogenesis of PCOS (Lin et al. 2015). In addition, miR-26a-5p was verified to facilitate theca cell proliferation in chicken ovarian follicles by targeting TNRC6A (Kang et al. 2017; Wu et al. 2019). MiR-135a was proved to be overexpressed in GCs from PCOS patients; one study showed that miR-135a repressed ESR2 expression in GCs, which further inhibited CDKN1A expression, promoted GC proliferation and repressed GC apoptosis (Song et al. 2019). Furthermore, another finding indicates that miR-135a promotes apoptosis and the DNA damage response in GCs in PCOS, likely via VEGFC signaling (Wei et al. 2020). To our knowledge, the function of gga-miR-135a-5p in chicken follicular theca cells has not been reported. Our previous study showed that gga-miR-135a-5p was differentially expressed, with a fold-change of 8.93, in high- compared with low-yield ovaries of a Chinese indigenous chicken breed, and it was expressed at significantly higher levels in follicular theca cells than in granulosa cells (unpublished). Therefore, the overall results indicated that miR-135a-5p may play an important role in chicken follicular theca cells.
In this study, we first transfected the gga-miR-135a-5p inhibitor into chicken follicular theca cells. RNA sequencing was performed for transcriptome analysis using the Illumina HiSeq sequencing platform. A dual-luciferase reporter assay was used to verify the regulatory relationship between miR-135a-5p and the predicted differentially expressed gene (DEG) targets. We present evidence that gga-miR-135a-5p is involved in the biological function of ovarian follicular development. This study provides a scientific basis for a mechanism of gga-miR-135a-5p regulation in the follicular development of poultry.
Ethics statement
All experimental procedures were approved by the Animal Care Committee of the Academy of Agricultural Sciences, Shandong Province, Ji'nan, China. The care and use of experimental animals were carried out in accordance with the Directory Proposals on the Ethical Treatment of Experimental Animals, established by the Ministry of Science and Technology (Beijing, China).
Birds and tissue harvest
Three single-comb white Leghorn hens were selected randomly from Shandong Poultry Breeding and Engineering Technology Research Center to be used in this study. All birds were reared in an environmentally controlled house. Fresh water and feed were provided according to the Feeding Standard established by the Ministry of Agriculture (Beijing, China). At 40 weeks of age, the F1-F5 follicles were removed carefully and placed in pre-cooled phosphate-buffered saline (PBS) for the next step of theca cell culture.
Theca cell culture and transfection
The F1-F5 follicle theca layers were separated according to Kang et al. (Kang et al. 2017). The isolated theca layers were minced into 1 mm³ pieces and digested with collagenase II (w/v, 0.2%, Gibco, Grand Island, New York, USA) at 37° for 30 min. The supernatant was removed, and the cell precipitate was digested with collagenase II (w/v, 0.2%, Gibco) at 37° for 30 min again. The dispersed theca cells were filtered with a sterilized 200-mesh filter and then centrifuged at 1800 rpm for 10 min. The cell precipitates were washed two times with cell culture medium containing M199 (HyClone, Logan, Utah, USA) supplemented with 10% (v/v) fetal bovine serum (Gibco) and 1% (v/v) penicillin-streptomycin solution (Solarbio, Beijing, China). The cells were then seeded in 24-well plates at a density of 2×10^5 per well and cultured at 37° in an atmosphere of 95% air and 5% CO2. The number of viable cells (>90%) was estimated using Trypan blue.
RNA extraction, cDNA library construction and sequencing
A total of five samples, including two groups of normal chicken follicular theca cells, T07 and T08 (NG), and three groups transfected with gga-miR-135a-5p inhibitor, T10, T11, and T12 (TG), were used for sequencing. The total RNA of each sample was extracted with Trizol (Aidlab, Beijing, China) according to the manufacturer's instructions. RNA integrity and concentration were checked using an Agilent 2100 Bioanalyzer (Agilent Technologies, Inc., Santa Clara, CA, USA). The mRNA was isolated using the NEBNext Poly(A) mRNA Magnetic Isolation Module (E7490, NEB, Ipswich, MA, USA). The cDNA library was constructed following the instructions of the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, E7530) and NEBNext Multiplex Oligos for Illumina (NEB, E7500). In brief, the enriched mRNA was fragmented into approximately 200 nt RNA inserts, which were used to synthesize the first-strand cDNA and the second-strand cDNA. End-repair/dA-tailing and adaptor ligation were performed on the double-stranded cDNA. Suitable fragments were isolated by Agencourt AMPure XP beads (Beckman Coulter, Inc.) and enriched by PCR amplification. Finally, the constructed cDNA libraries were sequenced on a flow cell using an Illumina HiSeq sequencing platform.
Figure 1. The relative expression level of gga-miR-135a-5p in NG, TG and NC. ** P < 0.01. NG: follicular theca cells; TG: inhibitor transfected cells; NC: negative control.
Figure 2. The expression level of the top 22 upregulated genes with the most significant differential expression greater than fourfold between cells with normal and inhibited expression of gga-miR-135a-5p.
Transcriptome analysis using reference genome-based reads mapping
Low-quality reads, such as adapter-only reads, reads with unknown nucleotides > 5%, or reads with Q20 < 20% (percentage of sequences with sequencing error rates < 1%), were removed using a Perl script. The clean reads filtered from the raw reads were mapped to the chicken genome (Gallus gallus, Galgal 4.75) using Tophat2 (Kim et al. 2013) software. The aligned records from the aligners in BAM/SAM format were further examined to remove potential duplicate molecules. Gene expression levels were estimated using fragments per kilobase of exon per million fragments mapped (FPKM) values by Cufflinks software (Trapnell et al. 2010).
Identification of DEGs
DESeq and Q-values were employed to evaluate differential gene expression between cells expressing gga-miR-135a-5p (NG) and those with gga-miR-135a-5p inhibited (TG). After that, gene abundance differences between those samples were calculated based on the ratio of the FPKM values. The false discovery rate (FDR) control method was used to identify the threshold of the P-value in multiple tests in order to compute the significance of the differences. Here, only genes with an absolute value of log2 fold change ≥ 1 and an FDR significance score < 0.01 were used for subsequent analysis.
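The thresholds above (|log2 fold change| ≥ 1 and FDR < 0.01) translate directly into a table filter. A minimal sketch follows; the column names are assumptions about the differential-expression output format and may differ from the actual DESeq tables.

```python
# Illustrative sketch of the DEG filter: keep genes with |log2 fold change| >= 1
# and FDR < 0.01, then label them as up- or down-regulated in the inhibitor group.
# Column names ("log2FoldChange", "FDR") are assumptions about the table format.
import pandas as pd

def filter_degs(de_table: pd.DataFrame) -> pd.DataFrame:
    degs = de_table[(de_table["log2FoldChange"].abs() >= 1)
                    & (de_table["FDR"] < 0.01)].copy()
    degs["direction"] = degs["log2FoldChange"].apply(
        lambda fc: "up" if fc > 0 else "down")
    return degs
```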
Functional annotation
The Database for Annotation, Visualization, and Integrated Discovery (DAVID v6.7) was used to annotate the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways of the DEGs. The GO includes biological process, molecular function, and cellular component categories. An online software analysis tool (http://www.lc-bio.cn/overview/12?tools=GO_BarPlot) was used to plot the GO functional classification of the unigenes with a GO term hit to view the distribution of gene functions. Finally, an online software analysis tool (http://www.lc-bio.cn/overview/14?tools=KEGG_BarPlot) was used to map the enriched pathways associated with the DEGs.
Quantitative real-time PCR validation
To confirm the differential expression results, we conducted quantitative RT-PCR in a LightCycler 96 Real-Time PCR system (Roche, Switzerland) using a PrimeScript RT reagent Kit with gDNA Eraser (Takara, Japan) and TB Green Premix Ex Taq II (Tli RNaseH Plus, Takara, Japan) following the manufacturer's directions. A total of 12 genes were used in qPCR to determine the abundance of mRNAs. β-actin (Sangon Biotech, China) was used for normalization of the expression data. The relative mRNA expression level was calculated using the 2^(−ΔΔCT) method. All the primers for qRT-PCR are shown in Table 1. Three independent replications for each sample were used and data are presented as means ± SD.
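The relative expression values reported from qPCR follow the 2^(−ΔΔCT) calculation. A minimal sketch is given below; the Ct values are placeholders, and β-actin is taken as the normalization gene as described above.

```python
# Illustrative sketch of the 2^(-ΔΔCT) method for relative mRNA expression.
# ct_*: mean threshold-cycle values; the reference gene normalizes loading.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example with placeholder Ct values (not study data):
fold_change = relative_expression(24.1, 17.8, 26.0, 17.9)
```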
Dual-luciferase report assay
The 3'-UTR sequences of KLF4, ATP8A1, and CPLX1 harboring the gga-miR-135a-5p binding sites were amplified with the primers ATP8A1_WT, CPLX1_WT, and KLF4_WT (Table 2). The PCR products were cloned into the pmiR-RB-REPORT (Ribibio, China) vector to construct the wild-type plasmids, designated ATP8A1_WT, CPLX1_WT and KLF4_WT. The gga-miR-135a-5p binding sites were mutated in the WT vectors to construct the mutant luciferase reporter vectors designated ATP8A1_mut, CPLX1_mut, and KLF4_mut. Then, 293T cells were seeded into 24-well plates and cotransfected with mimics or non-target control at a concentration of 50 nmol/L and 250 ng of wild-type or mutant luciferase reporter plasmids. After transfection for 48 hr, luciferase activities were measured using the Dual-Glo Luciferase Assay System (Promega, USA).
Figure 3. Volcano plot of differentially expressed genes in cells with normal and inhibited expression of gga-miR-135a-5p. The X-axis represents log2 (FC) and the Y-axis represents -log10 (FDR). The green dots indicate down-regulated genes, the black dots indicate genes with no significant differences, and the red dots indicate up-regulated genes.
Data availability
The raw sequence data reported in this paper have been deposited in the Genome Sequence Archive (Genomics, Proteomics & Bioinformatics 2017) in Beijing Institute of Genomics (BIG) Data Center (Nucleic Acids Res 2019), Chinese Academy of Sciences, under accession number CRA001745 that are publicly accessible at http://bigd.big.ac.cn/gsa. Supplemental material available at figshare: https://doi.org/10.25387/g3.12899993.
Chicken follicle theca cells transfected successfully with gga-miR-135a-5P inhibitor
The expression levels of gga-miR-135a-5p in chicken follicular theca cells, cells transfected with the inhibitor, and NC were detected by qRT-PCR. As shown in Figure 1, compared with the control and normal chicken follicular theca cells, the relative expression level of gga-miR-135a-5p decreased significantly in the inhibitor group (P < 0.01).
RNA expression and differential analysis
Based on the filtering criteria of gene abundance differences with an absolute value of log2 fold change ≥ 1 and an FDR significance score < 0.01, 2013 genes were found to be expressed with a significant difference between cells with gga-miR-135a-5p expression (T07 and T08) and cells with gga-miR-135a-5p inhibited (T10, T11, and T12); 953 genes were significantly up-regulated, and 1060 genes were significantly down-regulated in cells with gga-miR-135a-5p inhibited (Additional file: Table S1). Only 22 known up-regulated genes showed differences greater than log2 fold change ≥ 4 between the groups (Figure 2). The volcano plot of DEGs in the different groups is shown in Figure 3. Hierarchical clustering analysis of DEGs was performed, and the resulting heatmap is shown in Figure 4.
Functional annotation of DEGs
The GO enrichment analysis for up-regulated DEGs showed that a total of 125 terms were enriched in biological processes, including cell proliferation, cell differentiation, cell division, regulation of transcription from RNA polymerase II promoters, etc. Among these, 39 terms were preferentially enriched in cell components, such as nucleus, cytoplasm, nucleoplasm, and plasma membrane and 34 terms were enriched in molecular functions including: ATP binding, DNA binding, and transcriptional activator activity ( Figure 5).
The KEGG analysis showed that 18 terms were enriched (Figure 6). The up-regulated DEGs were preferentially enriched in pathways associated with cellular functions such as the cell cycle (cell division), cytokine-cytokine receptor interaction (cell growth, differentiation, and cell death), the TGF-beta signaling pathway (cell proliferation, apoptosis, differentiation, and migration), the Wnt signaling pathway (cell-fate specification, progenitor-cell proliferation, and the control of asymmetric cell division), and the p53 signaling pathway (cell differentiation).
KLF4, ATP8A1, and CPLX1 are target genes of gga-miR-135a-5p
To further reveal the regulatory mechanism of gga-miR-135a-5p in chicken follicular theca cells, two online algorithms (TargetScan and Pictar) were used to identify the target genes of miR-135a-5p; the up-regulated DEGs KLF4, CPLX1, and ATP8A1 appeared in both databases. Although the RNA-seq data indicated that the expression levels of KLF4, CPLX1, and ATP8A1 were significantly up-regulated with the gga-miR-135a-5p inhibitor, to confirm their relationship in vitro, we cotransfected the wild-type or mutant luciferase reporter vectors of each gene and the corresponding mimics or a non-target control into 293T cells. The dual-luciferase reporter assay results showed that miR-135a-5p decreased the activity of luciferase with wild-type KLF4, ATP8A1 and CPLX1 (Figure 8). After mutation of the predicted target sites, the reporter fluorescence of the mutant vectors recovered. These results suggest that miR-135a-5p may directly regulate gene expression by binding to sites in the 3'-UTR of the three target genes, KLF4, ATP8A1, and CPLX1.
DISCUSSION
The function of miR-135a in human disease has been widely studied (Tang et al. 2014; Zhang et al. 2017). It has been found to play an important role in several diseases including epithelial ovarian cancer (Tang et al. 2014), diabetes (Agarwal et al. 2013), malignant glioma, sepsis (Zheng et al. 2017), and endometriosis lesions (Petracco et al. 2019). In addition, miR-135a-5p was found to be critical for exercise-induced adult neurogenesis (Pons-Espinal et al. 2019). By controlling NCX1 expression, miR-135a modulates cardiomyocyte automaticity, Ca2+ extrusion, and arrhythmogenic Ca2+ loading/spontaneous Ca2+ release events to contribute to proarrhythmic remodeling after complete atrioventricular block (Duong et al. 2017). As an important regulatory factor, miR-135a is involved in the regulation of 3T3-L1 preadipocyte differentiation and adipogenesis through the activation of canonical Wnt/b-catenin signaling by directly targeting Apc (Chen et al. 2014). In chicken, lncRNA-gga-miR-135a-mRNA interactions may promote the adipogenic differentiation of chicken preadipocytes (Chen et al. 2019). Our study provides evidence of gga-miR-135a-5p involvement in chicken follicular theca cell cytopoiesis.
We performed transfection with the gga-miR-135a-5p inhibitor to suppress its expression in chicken follicular theca cells, and then performed transcriptome sequencing. Compared with the normal follicular theca cell group, 2013 differentially expressed genes, composed of 953 up-regulated genes and 1060 down-regulated genes, were identified. Bioinformatic analyses showed that the up-regulated DEGs were enriched in the TGF-b signaling pathway (gga04350), p53 signaling pathway (gga04115), and Wnt signaling pathway (gga04310), which are known to be involved in follicular development (Xu et al. 2018; Du et al. 2018). The up-regulated DEGs BMP2, BMP4, TGFb2, TGFb3, and BAMBI are enriched in the TGF-b signaling pathway and have a promotive function in granulosa cell proliferation, follicle survival, and prevention of premature luteinization and/or atresia (Schmid et al. 1994; Souza et al. 2002; Knight and Glister 2006). THBS1, CCNB2, and CDK1 are enriched in the p53 signaling pathway and have been found to play a role in granulosa cell proliferation in beef cattle (Dias et al. 2013), humans (Tremblay and Sirard 2017) and ovine species (Talebi et al. 2018). The Wnt family has been implicated in follicular development, and its components among the up-regulated DEGs, WNT2B, AXIN2, SFRP4, WNT6, NFATC2, WNT5B, and BAMBI, have also been found to play important roles in ovarian follicle development (Li et al. 2014; Hatzirodos et al. 2014; Gupta et al. 2014; Drake et al. 2003; Chen et al. 2012). Thus, gga-miR-135a-5p may regulate follicular theca cell development through the important signaling pathways mentioned above.
Bioinformatics analysis showed that the KLF4, CPLX1, and ATP8A1 DEGs were the target genes of gga-miR-135a-5p. The expression of KLF4, CPLX1, and ATP8A1 was significantly up-regulated in the gga-miR-135a-5p inhibitor group, and the direct binding relationships between them were further validated by a dual-luciferase reporter assay.
Krüppel-like factor 4 (KLF4) belongs to the KLF family of transcription factors, and exerts important biological effects on cellular proliferation, differentiation, and apoptosis (Ghaleb et al. 2005; Natesampillai et al. 2008; Black et al. 2001) in various cell types. Combined LH and IGF-I stimulation increased KLF4 mRNA expression in porcine ovarian granulosa cells (Natesampillai et al. 2008). An H2O2-induced in vitro model and a 3-nitropropionic acid (NP)-induced in vivo model of mouse ovarian oxidative stress showed that miR-145 protects granulosa cells against oxidative stress-induced apoptosis by targeting KLF4 (Xu et al. 2017). The regulatory function of KLF4 on rat preovulatory granulosa cells has also been confirmed by Hyeonhae and Jaesook (Choi and Roh 2019). They found that KLF4 increases the susceptibility of preovulatory granulosa cells to apoptosis by down-regulating Bcl-2, and promotes an LH-induced cell cycle exit. Interestingly, KLF4, a directly targeted gene of gga-miR-135a-5p, was significantly enriched in the negative regulation of cell proliferation (GO:0008285) and regulation of cell differentiation (GO:0045595), which indicates that gga-miR-135a-5p may up-regulate the expression of KLF4 to regulate the proliferation and differentiation of chicken follicular theca cells.
Figure 7. Relative expression of 12 DEGs between cells with normal (NG) and inhibited (TG) expression of gga-miR-135a-5p.
CPLX1 belongs to a highly conserved complexin protein family and encodes a neuronal protein (Fernandez and Dittman 2009). Studies have shown that Cplx1(−/−) mice have profound ataxia that limits their ability to perform coordinated motor tasks, and have pronounced deficits in social behaviors (Drew et al. 2007). CPLX1 variants have been implicated in patients with intellectual disability (ID), developmental delay, and myoclonic epilepsy (Brose 2008; Kielar et al. 2012; Redler et al. 2017). In chicken, data on CPLX1 are scarce; it has only been found to be correlated with earlobe color in Rhode Island Red chickens (Nie et al. 2016). Our results suggest that CPLX1 is not only strongly linked with neurodevelopmental functions, namely the neuronal cell body (GO:0043025), regulation of neurotransmitter secretion (GO:0046928), and synaptic growth at the neuromuscular junction (GO:0051124), as previously reported, but is also involved in the biological process of transmembrane transport (GO:0055085). We speculate that gga-miR-135a-5p may participate in the regulation of the transmembrane transport function of chicken follicular theca cells by targeting CPLX1.
ATP8A1 is a member of the P4-ATPase subfamily. In mammalian cells, ATP8A1 has been implicated in the translocation of phospholipids (Daleke and Lyles 2000). In Chinese hamster ovary cells, the phospholipid flippase complex of ATP8A1 and CDC50A was found to play a major role in cell migration (Kato et al. 2013). ATP8A1 was also shown to play a role in regulating the growth and mobility of non-small-cell lung cancer cells (Dong et al. 2016). In this study, consistent with previous reports, ATP8A1 was enriched in processes related to phospholipid transport, such as phospholipid-translocating ATPase activity (GO:0004012), phospholipid transport (GO:0015914), and aminophospholipid transport (GO:0015917). It was also enriched in biological processes associated with cell development, such as positive regulation of multicellular organism growth (GO:0040018) and negative regulation of cell proliferation (GO:0008285). Interestingly, ATP8A1 was a direct target of gga-miR-135a-5p, which implies a regulatory mechanism whereby gga-miR-135a-5p acts in chicken follicular theca cells by down-regulating ATP8A1.
In conclusion, gga-miR-135a-5p can directly target the 3′-UTR of the KLF4, CPLX1, and ATP8A1 genes to inhibit their expression in chicken follicular theca cells. The data show that gga-miR-135a-5p may play an important role in regulating chicken ovarian follicular theca cell development. Our findings fill a gap in the knowledge of gga-miR-135a-5p regulation in chicken follicular theca cells. The exact regulation of apoptosis or proliferation by gga-miR-135a-5p in chicken follicular theca cells, and the pathways involved, will be the focus of our future work.
Advancing High-Throughput Phenotyping of Wheat in Early Selection Cycles
Enhancing plant breeding to ensure global food security requires new technologies. For wheat phenotyping, only limited seeds and resources are available in early selection cycles. This forces breeders to use small plots with single or multiple row plots in order to include the maximum number of genotypes/lines for their assessment. High-throughput phenotyping through remote sensing may meet the requirements for the phenotyping of thousands of genotypes grown in small plots in early selection cycles. Therefore, the aim of this study was to compare the performance of an unmanned aerial vehicle (UAV) for assessing the grain yield of wheat genotypes in different row numbers per plot in the early selection cycles with ground-based spectral sensing. A field experiment consisting of 32 wheat genotypes with four plot designs (1, 2, 3, and 12 rows per plot) was conducted. Near infrared (NIR)-based spectral indices showed significant correlations (p < 0.01) with the grain yield at flowering to grain filling, regardless of row numbers, indicating the potential of spectral indices as indirect selection traits for the wheat grain yield. Compared with terrestrial sensing, aerial-based sensing from UAV showed consistently higher levels of association with the grain yield, indicating that an increased precision may be obtained and is expected to increase the efficiency of high-throughput phenotyping in large-scale plant breeding programs. Our results suggest that high-throughput sensing from UAV may become a convenient and efficient tool for breeders to promote a more efficient selection of improved genotypes in early selection cycles. Such new information may support the calibration of genomic information by providing additional information on other complex traits, which can be ascertained by spectral sensing.
Introduction
As breeding crops with a high yield and superior adaptability is vital to ensuring global food security, new technologies will enhance plant breeding to meet these challenges [1][2][3]. In contrast to recent progress in DNA marker assays and sequencing technologies that enable the high-throughput genotyping of many individual plants at a relatively low cost, phenotyping large numbers of genotypes and mapping populations in field trials is still laborious and expensive [4]. Therefore, the current bottleneck in plant breeding research is phenotyping.
High-throughput phenotyping through the application of remote or proximal sensing that is currently a new frontier offers a rapid and non-destructive approach to plant phenotyping. Numerous studies have shown that the grain yield of wheat genotypes/lines can reliably be assessed by spectral sensing [5][6][7][8]. These studies have also demonstrated that high-throughput phenotyping from ground-based sensing could not only contribute to savings in time and costs, but also allow for more objective information and even re-assessments in later selection cycles because the objective digital data collection can be permanently stored. More importantly, the availability of unmanned aerial vehicles (UAV) has rapidly increased in recent years [9][10][11]. The aerial platforms have an advantage over ground-based sensing platforms in generating surface maps in real time and measuring plant parameters from large numbers of plots at a time [11][12][13]. Using high-resolution and low-altitude UAVs can overcome further limitations of ground-based sensing platforms, such as the non-simultaneous measurement of different plots, trafficability, row, and plot geometries requiring specific sensor configurations and vibrations resulting from uneven field surfaces [14]. However, there is still a lack of data comparing ground-and aerial-based sensing employed to phenotype wheat genotypes with high-throughput sensing.
For wheat phenotyping, limited seeds and resources in early selection cycles force breeders to use small plots with single or multiple rows in order to include the maximum number of genotypes/lines for their assessment [15,16]. Additionally, due to large panels and technical obstacles, it is still difficult to determine the grain yield; thus, promising lines are generally scored based on visual assessments alone. An efficient method for more objectively assessing a large number of lines in early selection cycles, via the indirect detection of yield or yield-related traits, could enhance the selection efficiency and save time and costs. Therefore, to meet the requirements of early selection cycles in wheat breeding, it is necessary to evaluate high-throughput phenotyping methods applied to estimate the grain yield or yield-related traits in small plots. However, while much work to date has focused on evaluations in large plots, few reports have addressed the capacity of high-throughput phenotyping to improve selection efficiency in early generations in smaller plots with one or a few rows [16]. Although some field studies on spectral sensing employed to estimate the grain yield of wheat genotypes have been conducted in plots with different row numbers [5][6][7]17], a comparison of the performance of different spectral sensing approaches as an indirect selection tool for the grain yield of wheat genotypes in plots with single- and multi-row designs has not yet been reported. Different row numbers per plot may lead to different soil coverage, affecting spectral sensing. Therefore, a comparison of varying row designs is required to evaluate the performance of spectral sensing in breeding nurseries with different row numbers. Our previous study compared proximal spectral sensing in field trials with single-, two-, three-, and four-row designs [17], but only a single wheat variety was tested. Furthermore, there is the possibility to reduce the bare soil coverage with UAV imagery by separating soil from plant pixels, which tends to be more important when using spectral sensing for single-row plots that may also have greater spacing between plots. Therefore, we hypothesized that aerial-based sensing from UAV could be more suitable for plots with fewer rows or smaller plots.
In the present study, the objectives were to compare the performance of ground-based hyperspectral and aerial-based multispectral sensing from UAV for phenotyping the grain yield of 32 wheat genotypes/lines in 1-, 2-, and 3-row plots simulating the trial design for early selection cycles, and to compare the results with those of a 12-row plot design that is commonly used to evaluate the yield performance in breeding and agronomical evaluations in Western Europe. To the best of our knowledge, this study is the first to report on the use of both ground-and aerial-based spectral sensing (UAV) and to compare their performance for phenotyping the wheat grain yield in single-row plots with multi-row plots. The expected results may contribute to the further application of the spectral sensing technique in the program of early selection cycles in breeding.
Plant Material, Experimental Design, and Grain Yield Determination
Field experiments were conducted at the Dürnast Research Station of the Technical University of Munich in Germany (48°23′60″ N, 11°41′60″ E). The soil is a homogeneous Cambisol with a silty-clay loam texture. Annual precipitation is approximately 800 mm and the average temperature is 8 °C.
A randomized block design was used to test the phenotypic variation of 32 winter wheat cultivars (Triticum aestivum L.), including a panel of 26 modern wheat cultivars from Germany and six from Eastern Europe, with four plot designs consisting of 1, 2, 3, and 12 rows per plot and four replications. The space between plots was 50 cm and the space between rows was 12.5 cm, in order to achieve a high canopy coverage [17]. The length of the plots was increased to 10 m and subdivided into different plot lengths in order to evaluate the influence of plot length on spectral measurements. Since there was no difference in the spectral measurements for plot lengths from 2 to 10 m, these results are not presented in this paper. The plots were oriented from East to West.
At plant maturity, the grain yield was determined by the mechanical harvesting of 1-, 2-, 3-, and 12-row plots.
Spectral Reflectance Measurements Obtained by Ground-Based Hyperspectral Sensing and Aerial-Based Multispectral Sensing
In order to compare ground-and aerial-based sensing, a hand-held hyperspectral sensor and an unmanned aerial vehicle (UAV) carrying a multispectral camera were used for data acquisition at different BBCH scale (Biologische Bundesanstalt, Bundessortenamt und Chemische Industrie) growth stages [18], i.e., BBCH 49 (booting; 28.05.2018), BBCH 65 (anthesis; 10.06.2018), and BBCH 85 (grain filling; 26.06.2018). These stages were chosen due to being most indicative for the assessment of the winter wheat yield [19].
Ground-based hyperspectral sensing of the crop canopy was conducted through measurements of the reflected radiation. For measuring crop canopies, the primary focus was on electromagnetic radiation within the visible (VIS, approximately 400-700 nm) and near-infrared (NIR, approximately 700-1100 nm) spectral range. A passive hand-held reflectance sensor (tec5, Oberursel, Germany) enabling hyperspectral readings was used. The bi-directional radiometer has a spectral detection range of 400 to 1100 nm and a bandwidth of 3.3 nm [20]. On clear sunny days at solar noon, canopy reflectance was measured at 0.5 m above the canopy with a 22° circular field of view (FOV), resulting in a scanning area of about 0.13 m². The sensor was always positioned amid the row variants.
For aerial-based multispectral sensing, the fixed-wing aircraft senseFly ("eBee", SenseFly, Lausanne, Switzerland), equipped with a multispectral and sunshine sensor (Sequoia camera) (Parrot, Paris, France), was flown in an East-West direction. Since the Sequoia and the sunshine sensor are integrated in a fixed structure, and imagery captured with the Parrot Sequoia camera is automatically recognized by the software, they are positioned very accurately and the angles do not change at any time during the flight. The Sequoia multispectral camera takes photos in four spectral bands, i.e., green (550 nm, ~40 nm bandwidth), red (660 nm, ~40 nm bandwidth), and two NIR regions (NIR-1: 735 nm, ~10 nm bandwidth; NIR-2: 790 nm, ~40 nm bandwidth) of the electromagnetic spectrum. The focal length is 3.98 mm; the image size is 1280 × 960 pixels; and the field of view is 61.9° horizontally, 48.5° vertically, and 73.7° diagonally. As the radiometric calibration target, a white balance card was used to enable the Pix4D software to calibrate and correct the images' reflectance. Flights were conducted at 50 m above ground level, resulting in ground sampling distances of about 5 cm/pixel. Mission planning was done with eMotion 3 for the Sequoia camera. All flights were planned for 80% overlap along flight corridors and carried out concomitantly with the terrestrial sensor measurements at solar noon. Global shutters were used. The Pix4DMapper was used to process the multispectral UAV data. Plot-level means of the green, red, NIR-1, and NIR-2 measurements from the UAV were created in ArcGIS Desktop version 10.5 (ESRI, Munich, Germany). To precisely extract the canopy coverage of individual plots, shape files containing hand-segmented single polygons per plot were used, with the polygon width optimized to cover the most indicative sections of the row variants. The width of the polygons was 15, 30, 40, and 110 cm for the 1-, 2-, 3-, and 12-row plots, respectively, to maximize the canopy coverage while minimizing bare-soil cover. For all flights, the GeoTIFFs with the green, red, NIR-1, and NIR-2 orthomosaics from Pix4D were combined with the plot polygon shape file. Green, red, NIR-1, and NIR-2 means for each plot were generated using the zonal-statistics function in ArcGIS.
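The plot-level extraction just described (one polygon per plot, zonal mean of each band) can be reproduced outside ArcGIS. The sketch below shows one possible way with the rasterstats package; the file names are placeholders, and single-band GeoTIFF orthomosaics per band are assumed.

```python
# Illustrative plot-level band extraction (zonal statistics).
# File names are placeholders; one single-band GeoTIFF per spectral band
# and a shapefile with one polygon per plot are assumed.
from rasterstats import zonal_stats

def plot_band_means(band_orthomosaic, plot_polygons):
    """Return the mean reflectance of one band for every plot polygon."""
    stats = zonal_stats(plot_polygons, band_orthomosaic, stats=["mean"])
    return [s["mean"] for s in stats]

nir1_means = plot_band_means("nir1_orthomosaic.tif", "plot_polygons.shp")
red_means = plot_band_means("red_orthomosaic.tif", "plot_polygons.shp")
```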
Calculation of Spectral Indices
Canopy spectral reflectance acquired from ground- and aerial-based spectral sensing was used to calculate vegetation indices, i.e., the NIR-based indices water band index (WBI) and NIR:NIR, and the visible- and NIR-based indices red normalized difference vegetation index (NDVI), NIR:Red, and NIR:Green (Table 1), which are reportedly highly correlated with the plant biomass and grain yield [19,[21][22][23].
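As a concrete illustration of how such indices are derived from band reflectances, the sketch below computes NDVI and the simple band ratios from per-plot mean reflectance values. Since Table 1 with the exact definitions is not reproduced here, the WBI is written in its commonly used R900/R970 form, which is an assumption.

```python
# Illustrative computation of vegetation indices from plot-level mean
# reflectances. Band assignments are assumptions; Table 1 holds the exact
# definitions used in the study.
import numpy as np

def spectral_indices(red, green, nir1, nir2, r900=None, r970=None):
    red, green, nir1, nir2 = map(np.asarray, (red, green, nir1, nir2))
    indices = {
        "NDVI": (nir2 - red) / (nir2 + red),
        "NIR:Red": nir2 / red,
        "NIR:Green": nir2 / green,
        "NIR:NIR": nir2 / nir1,
    }
    if r900 is not None and r970 is not None:
        # Water band index, commonly defined as R900/R970 (assumed form).
        indices["WBI"] = np.asarray(r900) / np.asarray(r970)
    return indices

idx = spectral_indices(red=[0.05, 0.06], green=[0.08, 0.09],
                       nir1=[0.42, 0.45], nir2=[0.48, 0.50])
print(idx["NDVI"])
```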
Statistical Analysis
The lme4 and sommer packages of the R environment (www.R-project.org) were used for the analysis of variance (ANOVA) to differentiate among row treatments. Phenotypic and genetic correlation coefficients were estimated to reveal the association between spectral indices and the grain yield.
The genetic correlation coefficients between spectral indices and the grain yield were calculated by the following formula [25]:

r_g = Cov(X, Y) / sqrt(Var(X) × Var(Y)),

where Var and Cov refer to the components of variance and covariance, respectively, and X and Y are the two variables. Broad-sense heritabilities (h²) were calculated on a mean basis according to Holland et al. [26]. Broad-sense heritability is the proportion of the phenotypic variance that is explained by the genetic variance, and was estimated as follows:

h² = σ²g / (σ²g + σ²e / n),

where σ²g and σ²e are the genotypic and residual variance components, respectively, and n is the number of replicate blocks.
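For readers who want to reproduce the heritability calculation, the snippet below gives a minimal variance-component estimate from a balanced one-way ANOVA over genotypes with replicate blocks. It is a simplified illustration with simulated numbers, not the exact lme4/sommer mixed models fitted in the study.

```python
# Minimal sketch of broad-sense heritability on an entry-mean basis from
# ANOVA mean squares (simplified; the study fitted lme4/sommer models in R).
import numpy as np

def broad_sense_heritability(values):
    """values: 2-D array, rows = genotypes, columns = replicate blocks."""
    values = np.asarray(values, dtype=float)
    g, n = values.shape
    grand_mean = values.mean()
    genotype_means = values.mean(axis=1)
    ms_genotype = n * np.sum((genotype_means - grand_mean) ** 2) / (g - 1)
    ms_error = np.sum((values - genotype_means[:, None]) ** 2) / (g * (n - 1))
    var_g = max((ms_genotype - ms_error) / n, 0.0)   # genotypic variance
    var_e = ms_error                                  # residual variance
    return var_g / (var_g + var_e / n)

rng = np.random.default_rng(0)
yields = rng.normal(6.0, 0.5, size=(32, 1)) + rng.normal(0.0, 0.3, size=(32, 4))
print(broad_sense_heritability(yields))
```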
Genotypic Variation in Plots with 1, 2, 3, and 12 Rows
Significant genotypic variation in the grain yield among the 32 wheat genotypes was found in all row variants (Table 2). The highest mean grain yield per row was obtained from single-row plots, and the lowest yield was obtained from 12-row plots. The results showed that there was a significant effect of row variants on the grain yield. Heritability of the grain yield increased with an increasing number of rows per plot and ranged from 0.82 to 0.92. The results also demonstrated that plots with fewer rows showed higher standard deviations (SD).

Table 2. Minimum, maximum, mean ± SD, and heritability (h²) of the grain yield in plots with 1, 2, 3, and 12 rows. Mean comparison of plot treatments from Tukey's HSD test indicated significant differences at p < 0.001.
Phenotypic and Genetic Correlations Between Spectral Indices and the Grain Yield in Different Row Variants
The NIR-based indices NIR:NIR and WBI showed strong and significant phenotypic correlations with the grain yield, regardless of the number of rows and the growth stage (Table 3). At grain filling, significant correlations of NIR:Red, NIR:Green, and NDVI with the grain yield were found for almost all row variants. Compared with ground-based sensing, spectral indices derived from aerial-based sensing consistently showed higher levels of phenotypic association with the grain yield.
Genetic relationships between spectral indices and the grain yield were similar to the phenotypic relationships; however, the coefficients of genetic correlation were sometimes higher than those of the phenotypic correlations (Table 4).

Table 4. Genetic correlation between the grain yield and spectral indices from ground- and aerial-based sensing in different row variants at BBCH scale stages 49, 65, and 85.
Heritability of Spectral Indices in Different Row Variants
A moderate to high level of broad-sense heritability was observed for most spectral indices. The heritability of spectral reflectance indices generally increased with the growth stage for a given row variant and with an increasing number of rows at any given growth stage ( Table 5). The highest heritability for a given index was found at grain filling (BBCH 85) in 12-row plots.
For example, at grain filling, the heritability of indices in 12-row plots ranged from 0.65 to 0.96 for ground-based sensing, and from 0.79 to 0.96 for aerial-based sensing.
Compared with ground-based sensing, the values of heritability for the same index from aerial-based sensing were higher in most cases (Table 5).
Phenotypic and Genetic Correlations Between the Grain Yield and Spectral Indices, as Obtained from Ground-Based Hyperspectral and Aerial-Based Multi-Spectral Sensing
The evaluation and selection of moderate-and high-yielding wheat genotypes using spectral indices derived from ground-based sensing have been successfully applied under different environmental conditions in previous studies in plots with increased row numbers [5,6,8,27]. In the present study, the best performing spectral indices from ground-based hyperspectral sensing for predicting the grain yield at phenotypic and genetic levels were the NIR-based indices NIR:NIR and WBI for all variants with different numbers of rows per plot (Tables 3 and 4). Although the indices NIR:Red, NIR:Green, and NDVI could not distinguish among genotypes at BBCH 49 and BBCH 65, they significantly correlated with the grain yield at BBCH 85. Overall, these results agree with the findings presented by other authors studying wheat under well-watered and drought stress conditions [5][6][7][8]19].
This is the first report on a comparison of correlations between spectral indices and the wheat grain yield for plots with different numbers of rows. As single-or two-row plots share a higher fraction of soil coverage than plots with higher row numbers (3 or 12), this study aimed to assess whether differences in canopy/soil coverage representing mixed-pixel situations interfere with spectral sensing. A previous study [17] showed that the one-row design only covered approximately 34% of the field of view of a hand-held spectrometer, whereas two-row plots covered 80%, when the sensor was positioned at 100 cm above the plant canopy. To reduce the effect of bare soil between plots with one or two rows per plot, Barmeier and Schmidhalter [17] suggested optimizing spectral sensing by reducing the sensor-canopy distance, with the sensor always being positioned amid rows. Therefore, the sensor-canopy distance was reduced from 100 to 50 cm in this study. By decreasing the sensor-canopy distance for a hand-held spectral proximal sensor, the results of this study showed that differences in grain yield among genotypes could not only be distinguished in multi-row plot designs, but in single-row plots as well, especially by NIR-based indices (Tables 3 and 4), thus suggesting that it is possible to use spectral sensing for the high-throughput phenotyping of wheat genotypes in early selection cycles. Reliable evaluation in smaller plots for the drivers of yield, especially those novel and rare alleles commonly lost when targeting the grain yield alone in breeder's nurseries, will enable novel phenotypes to be recycled through subsequent crossing and population development [16]. The new information may strongly support genomic selection efficiency, as well as the calibration of genomic information, by providing additional information on other complex traits, such as drought tolerance [8,28], salinity tolerance [29,30], and nitrogen use efficiency [19], which can be ascertained by spectral sensing.
In aerial-based multispectral sensing, significant correlations between spectral indices and the yield were generally higher than those obtained from ground-based sensing, especially at booting and anthesis (Tables 3 and 4), suggesting that increased precision may be obtained from UAV imagery. This is in agreement with recent reports [12,31,32]. The relatively higher precision of measurements by UAVs can be associated with several major factors: (i) Non-vegetation pixels can be better removed from imagery obtained by UAV. This could be more pronounced in plots with fewer rows. Recent studies have demonstrated the possibility to improve the segmentation of plant-soil pixels, e.g., using Support Vector Machine (SVM) classification or Convolutional Neural Networks [27,33,34]; (ii) aerial-based sensing has an advantage over ground-based sensing platforms in generating surface maps in real time and measuring plant parameters from a large number of plots at a time, typically associated with the time required to make ground-based measurements in large trials [12,13]; (iii) using high-resolution and low-altitude UAVs can overcome further limitations of ground-based sensing platforms, such as the non-simultaneous measurement of different plots, trafficability, row, and plot geometries requiring specific sensor configurations, and vibrations resulting from uneven field surfaces [12,28]. Given that the operation of UAV image acquisition is less labor-intensive, and owing to improved segmentation procedures and a higher precision than non-imaging proximal sensing, aerial-based multispectral sensing via UAV is expected to increase the efficiency of high-throughput phenotyping in large-scale plant breeding programs [10,12].
Heritability of Spectral Indices from Ground-and Aerial-Based Sensing
High heritability and strong phenotypic and genetic correlations between indirect traits and the grain yield are desirable [25]. However, in previous studies on wheat genotypes, heritability values of spectral indices have been inconsistent [5][6][7][8]. Falconer [25] proposed that using an alternative indirect selection trait for the grain yield is only appropriate if the heritability of the indirect trait is higher than that of the grain yield itself. Therefore, the authors [5][6][7] concluded that, even with low h 2 values, spectral indices can still be valuable as indirect selection traits to breeders because such values were still higher than those of the grain yield in most cases. The results of this study showed that compared with h 2 values of the grain yield, higher h 2 values of spectral indices were obtained, particularly from aerial-based sensing at grain filling in plots with 1, 2, and 12 rows (Tables 2 and 5), thus confirming that aerial-based sensing delivers a higher precision for high-throughput phenotyping.
Conclusions
Our study demonstrated that NIR-based spectral indices showed strong and significant phenotypic and genetic correlations with the grain yield, regardless of the row variant per plot and growth stage, indicating a high potential of NIR-based indices as indirect selection traits for the wheat grain yield. Compared with ground-based sensing, spectral indices from aerial-based sensing by UAV consistently showed a higher association with the grain yield, indicating that an increased precision may be obtained. This is expected to increase the efficiency of high-throughput phenotyping in large-scale plant breeding programs and to allow breeders a more objective selection of improved genotypes in early selection cycles, thereby reducing costs by performing fewer directed samplings.
Knowledge-Aided Non-Homogeneity Detector for Airborne MIMO Radar STAP
The target detection performance decreases in airborne multiple-input multiple-output (MIMO) radar space-time adaptive processing (STAP) when the training samples contaminated by interference-targets (outliers) signals are used to estimate the covariance matrix. To address this problem, a knowledge-aided (KA) generalized inner product non-homogeneity detector (GIP NHD) is proposed for MIMO-STAP. Firstly, the clutter subspace knowledge is constructed by the system parameters of MIMO radar STAP. Secondly, the clutter basis vectors are utilized to compose the clutter covariance matrix offline. Then, the GIP NHD is integrated to realize the effective training samples selection, which eliminates the effect of the outliers in training samples on target detection. Simulation results demonstrate that in non-homogeneous clutter environment, the proposed KA-GIP NHD can eliminate the outliers more effectively and improve the target detection performance of MIMO radar STAP compared with the conventional GIP NHD, which is more valuable for practical engineering application.
To realize GMTI in a strong clutter environment based on the STAP technique [13][14][15], the clutter covariance matrix has to be estimated from the training sample set. Besides, all the training samples should obey the independent and identically distributed (IID) assumption, and the required sample number should be at least double the system degrees of freedom (DOFs) according to the well-known Reed-Mallett-Brennan (RMB) rule [16]. In addition, STAP in airborne MIMO radar becomes even more challenging in complexity and convergence because of the extra DOFs created by the orthogonal waveforms [8][9][10][11][12]. In practice, it is difficult to acquire enough IID samples to ensure the performance of MIMO radar STAP. Moreover, non-homogeneity of the training samples will lead to estimation errors in the clutter covariance matrix and severely degrade the performance of MIMO-STAP [11], [17], [18]. In particular, when interference-target (outlier) signals, a classic cause of a non-homogeneous clutter environment, are present in the training sample set, the specific phenomenon named target self-nulling will be generated [17][18][19][20][21][22].
The non-homogeneity detector (NHD) [18][19][20][21][22] is well known for its ability to improve the target detection performance of STAP in non-homogeneous environments. The generalized inner product (GIP) method [18][19][20][21][22] is a typical NHD for identifying outliers with non-homogeneity, provided that the clutter covariance matrix is estimated accurately. However, strong outliers may exist in the training sample set, which will result in severe performance degradation of the GIP NHD. Furthermore, if there is more than one strong outlier in the training sample set, the ability of the GIP NHD to determine and eliminate the weaker outliers will also deteriorate significantly [21], [22]. Recently, researchers have proposed knowledge-aided (KA) STAP methods [22][23][24][25][26][27][28] to improve the STAP performance and its robustness in practical applications. Prior knowledge, such as the system parameters, can thus be taken into consideration in the training sample selection to guarantee that all chosen samples satisfy the IID property. Reference [22] derived a robust NHD method to eliminate the outliers from the training sample set by utilizing a covariance matrix constructed from the prolate spheroidal wave functions (PSWF). The method can be seen as a KA technique, which is also applicable to the MIMO-STAP case with a slight modification. However, the computation and application of the PSWF are somewhat complicated, since they involve time-band-limited sampling theory and the computation of PSWF eigenvectors [8], [22].
In this paper, we propose a KA-GIP NHD for airborne MIMO radar STAP, which is very simple and robust in practice. The clutter subspace knowledge is first calculated from the MIMO-STAP system parameters, and then the clutter basis vectors are utilized to conveniently construct the corresponding clutter covariance matrix. Next, the GIP NHD is integrated with the KA clutter covariance matrix to obtain the GIP statistics. Therefore, by comparing the statistics with a preset threshold, the outliers that contaminate the training samples are eliminated. The effectiveness of the proposed KA-GIP NHD for MIMO-STAP is verified by simulation results.
The remainder of the paper is organized as follows.In Sec. 2, we establish the STAP signal model for side-looking airborne MIMO radar.In Sec. 3, a brief review of the conventional GIP NHD is given and the effect of outliers on the conventional GIP NHD is analyzed to formulate the problem.Then in Sec. 4, we present the construction of the knowledge-aided clutter subspace of MIMO radar, and propose the novel GIP NHD for MIMO-STAP based on the KA-clutter subspace.The simulation results are provided in Sec. 5 to show the performance advantage of the KA-GIP NHD for MIMO-STAP over the conventional GIP NHD.In the end, Section 6 summarizes the conclusion of the paper.
Notations.
The operations (·)^T, (·)^H, (·)^{-1}, and ⊗ denote the transpose, conjugate transpose, inverse, and Kronecker product, respectively. E[·] and trace(·) denote the expectation and the trace of a matrix. I_M denotes the M × M identity matrix. δ(·) stands for the unit impulse function, and ⌈a⌉ stands for the smallest integer larger than a.
MIMO Radar STAP Signal Model
Figure 1 presents a side-looking monostatic MIMO radar system equipped with collocated transmit and receive linear arrays, in which there are M transmit elements with uniform spacing d_T and N receive elements with uniform spacing d_R, with sparse coefficient α = d_T/d_R, where d_R = λ/2 and λ is the wavelength. A coherent processing interval (CPI) consists of K pulses with a constant radar pulse repetition interval (PRI) T. The radar platform travels at height H and at velocity V. The cone angle between the line-of-sight (LOS) and the velocity vector is ψ, while θ and φ are the azimuth angle and the elevation angle, respectively. The number of clutter patches uniformly distributed in a range cell is N_c. The whole space-time steering vector can be written as

v(f_s, f_d) = a(f_d) ⊗ b(f_s) ⊗ c(f_s),  (1)

where a(f_d) = [1, e^{j2πf_d}, …, e^{j2π(K−1)f_d}]^T, b(f_s) = [1, e^{j2παf_s}, …, e^{j2πα(M−1)f_s}]^T, and c(f_s) = [1, e^{j2πf_s}, …, e^{j2π(N−1)f_s}]^T are the temporal, transmit spatial, and receive spatial steering vectors, and f_s and f_d are the normalized spatial and Doppler frequencies. The space-time clutter-plus-noise data vector from the l-th range cell is denoted as

x_{c+n}(l) = Σ_{i=1}^{N_c} ξ_i v(f_{s,i}, f_{d,i}) + x_n(l),  (2)

where ξ_i is the reflection coefficient of the i-th clutter patch, and x_n(l) represents the additive Gaussian white noise.
The clutter covariance matrix can be expressed as [13], [27]

R_c = Σ_{i=1}^{N_c} σ²_{c,i} v(f_{s,i}, f_{d,i}) v^H(f_{s,i}, f_{d,i}),  (3)

where σ²_{c,i} is the variance of the i-th clutter patch, which is in direct proportion to its radar cross section (RCS). Assume the noise obeys a Gaussian distribution and is white in both the spatial and temporal domains; thus the noise covariance matrix can be written as

R_n = σ²_n I_{KMN},  (4)

where σ²_n is the variance of the additive white noise.
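To make the signal model concrete, the sketch below builds the space-time steering vectors and a clutter-plus-noise covariance of the form (3)-(4) for a side-looking geometry. The even split of the clutter power across patches, the unit noise power, and the clutter-ridge slope β = 1 are illustrative assumptions.

```python
# Illustrative construction of MIMO-STAP steering vectors and a clutter-plus-
# noise covariance matrix. Power split, noise level, and ridge slope are
# assumptions made for demonstration only.
import numpy as np

def steering_vector(fs, fd, M, N, K, alpha):
    """v(fs, fd) = a(fd) (x) b(fs) (x) c(fs)."""
    a = np.exp(2j * np.pi * fd * np.arange(K))           # temporal
    b = np.exp(2j * np.pi * alpha * fs * np.arange(M))   # transmit spatial
    c = np.exp(2j * np.pi * fs * np.arange(N))           # receive spatial
    return np.kron(a, np.kron(b, c))

def clutter_plus_noise_covariance(M, N, K, alpha, Nc=180, cnr_db=40.0, beta=1.0):
    D = K * M * N
    R = np.eye(D, dtype=complex)                         # noise, sigma_n^2 = 1
    sigma_c2 = 10.0 ** (cnr_db / 10.0) / Nc              # equal power per patch
    for theta in np.linspace(0.0, np.pi, Nc, endpoint=False):
        fs = 0.5 * np.cos(theta)                         # spatial frequency
        fd = beta * fs                                    # side-looking clutter ridge
        v = steering_vector(fs, fd, M, N, K, alpha)
        R += sigma_c2 * np.outer(v, v.conj())
    return R

R = clutter_plus_noise_covariance(M=5, N=8, K=10, alpha=8)
print(R.shape)   # (400, 400)
```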
The fully adaptive weight vector for MIMO radar STAP can be calculated by

w = μ R_{c+n}^{-1} v(f_{s,0}, f_{d,0}),  (5)

where μ is a normalization constant, R_{c+n} = R_c + R_n is the clutter-plus-noise covariance matrix, and v(f_{s,0}, f_{d,0}) is the space-time steering vector of the target.
Usually, R_{c+n} is unknown a priori in practice; thus it should be estimated from the training sample set adjacent to the range cell under test,

R̂ = (1/L) Σ_{l=1}^{L} x(l) x^H(l),  (6)

where x(l) denotes the training sample of the l-th range cell. By replacing R_{c+n} with R̂, (5) can be used to calculate the weight vector of a practical MIMO-STAP system. However, with the dimension expansion of the MIMO-STAP system, it is difficult to obtain sufficient IID samples in practice, since practical clutter circumstances are generally non-homogeneous. In particular, when there exist outliers in the training sample set [11], [17], [18], they will lead to estimation errors in the clutter covariance matrix and seriously degrade the performance of MIMO radar STAP. To solve the non-homogeneity problem, NHDs are utilized to remove outliers from the training sample set before implementing the STAP algorithms in the airborne MIMO radar system.
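A minimal sketch of the sample matrix inversion step described by (5) and (6) is given below. The diagonal loading term is added only for numerical robustness and is an assumption, not part of the paper's formulation.

```python
# Minimal sample-matrix-inversion (SMI) STAP sketch for equations (5)-(6).
# Diagonal loading is an assumption added for numerical robustness.
import numpy as np

def smi_weight(training_snapshots, target_steering, loading=1e-3):
    """training_snapshots: (L, D) complex array, one snapshot per row."""
    X = np.asarray(training_snapshots)
    L, D = X.shape
    R_hat = X.T @ X.conj() / L + loading * np.eye(D)   # (1/L) sum x x^H
    w = np.linalg.solve(R_hat, target_steering)        # R^{-1} v
    return w / (target_steering.conj() @ w)            # unit gain on target
```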
Conventional GIP NHD
The GIP method [18][19][20][21][22] is a representative NHD criterion. We assume that the original training sample set Ω consists of L range cell samples in the clutter region adjacent to the range cell under test, i.e., Ω = {x(l), l = 1, 2,…, L}; then we can define the GIP statistic as

GIP(l) = x^H(l) R̂_E^{-1} x(l),  (7)

where R̂_E is the test covariance matrix estimated by using the original training samples in set Ω. The GIP statistic can be explained as the inner product of the whitening filter output vector obtained by using R̂_E^{-1/2}, and its expectation for a homogeneous sample can be obtained as

E[GIP(l)] = trace(R_E^{-1} E[x(l) x^H(l)]) = D,  (8)

where D = KMN denotes the DOFs of the MIMO-STAP system. Therefore, it can be found from (8) that, when a non-homogeneous training sample cannot be effectively whitened by the whitening matrix R̂_E^{-1}, the GIP statistic in (7) will deviate from the expectation D. The non-homogeneous sample can then be easily identified by the level of deviation between the GIP statistic and the expectation D. Thus, we can eliminate the non-homogeneous training samples when estimating the clutter covariance matrix for MIMO radar STAP.
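A compact numerical sketch of this screening rule is given below: compute the GIP statistic of every training snapshot against a test covariance, and flag samples whose statistic deviates strongly from the expected value D = KMN. The specific relative threshold is an illustrative choice, not the one used in the paper.

```python
# Illustrative GIP non-homogeneity screening of training snapshots.
# The relative deviation threshold is an arbitrary illustrative choice.
import numpy as np

def gip_statistics(snapshots, R_test):
    """GIP(l) = x(l)^H R_test^{-1} x(l) for each row x(l) of snapshots."""
    R_inv = np.linalg.inv(R_test)
    return np.real(np.einsum("li,ij,lj->l", snapshots.conj(), R_inv, snapshots))

def select_homogeneous(snapshots, R_test, rel_tol=0.5):
    """Keep snapshots whose GIP statistic lies within rel_tol of D = KMN."""
    D = snapshots.shape[1]
    gip = gip_statistics(snapshots, R_test)
    keep = np.abs(gip - D) <= rel_tol * D
    return snapshots[keep], gip
```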
From the above analysis, we can see that the GIP statistic relies on the accurate estimation of the test covariance matrix R̂_E. However, different from the homogeneous data shown in (2), a non-homogeneous training sample can be expressed as

x_nh(l) = x(l) + Δx(l),  (9)

where Δx(l) is the additional term introduced by the non-homogeneous environment. Assume the training sample set Ω contains P non-homogeneous training samples; then the covariance matrix estimated by using the training samples in Ω can be written as

R̂_E = (1/L) [ Σ_{l∈Ω_h} x(l) x^H(l) + Σ_{l∈Ω_nh} (x(l) + Δx(l)) (x(l) + Δx(l))^H ],  (10)

where Ω_h and Ω_nh denote the homogeneous samples and the P contaminated samples, respectively. From (10), we can see that additional terms are produced if the training sample set Ω is contaminated by the outliers, which has a negative effect on the GIP statistic. This makes it difficult for the conventional GIP NHD to distinguish the non-homogeneous training samples and causes undetected errors. Therefore, the non-homogeneity of the training samples will also degrade the performance of the conventional GIP NHD in identifying the outliers for the range cell under test [21], [22].
Hence, in order to enhance the robustness of the test covariance and eliminate the negative effects of the outliers on the conventional GIP NHD, we consider utilizing the KA technique in this paper. The MIMO-STAP system parameters, such as the platform height and velocity, transmit array element number and spacing, receive array element number and spacing, temporal pulse number and interval, etc., are taken into account as a priori knowledge to construct the clutter covariance matrix offline. The clutter covariance matrix in the GIP statistic is then data-independent and contains only the homogeneous clutter information of the range cell under test. Accordingly, the estimation of R̂_E is no longer required, and the knowledge-aided GIP NHD can realize the outlier elimination efficiently.
KA Clutter Subspace Construction
Let C be the clutter subspace of MIMO radar STAP. From (3), C is spanned by the space-time steering vectors of the N_c clutter patches in a range cell, that is, C = span{v(f_{s,1}, f_{d,1}), …, v(f_{s,N_c}, f_{d,N_c})}. The clutter DOFs of MIMO radar, i.e., the rank of the clutter covariance matrix R_c, is given by [8]

r_C = ⌈N + α(M − 1) + β(K − 1)⌉.  (12)

The space-time vector of the i-th clutter patch can then be expressed as a linear combination (13) of the r_C column vectors of a matrix B of dimension KMN × r_C, whose element in the p-th row and i-th column is determined by the system parameters and the indices k = 1, 2,…, K, m = 1, 2,…, M, and n = 1, 2,…, N. According to (13), these column vectors span the clutter subspace of MIMO radar, that is, C = span{b_1, b_2, …, b_{r_C}}, and the vectors {b_i}, i = 1, 2,…, r_C, are orthogonal to each other. Therefore {b_i} can be regarded as the clutter basis vectors for MIMO radar, and the orthonormal basis vectors {b_{c,i}} for the clutter are derived by normalizing the basis vectors {b_i}.
It should be mentioned that, in cases where α and β are non-integers because of the parameter choices in reality, the clutter DOFs may not be an integer. Then the clutter rank should be set as the smallest integer larger than that value, that is, r_C = ⌈N + α(M − 1) + β(K − 1)⌉. In practice, B can be obtained through offline sampling and stored in memory. Thus, training data are no longer required to carry out the clutter subspace estimation. In this section, the given clutter subspace is applicable to all range cells and can be calculated from the known parameters of the transmit array element number M, receive array element number N, temporal pulse number K, and coefficients α and β according to (14).
GIP NHD Based on the KA Clutter Subspace of MIMO Radar
Based on the theory of the KA clutter subspace presented in the above section, the clutter covariance matrix of MIMO radar can be constructed as

R_KA = Σ_{i=1}^{r_C} σ²_{b,i} b_{c,i} b^H_{c,i} + R_n,  (15)

where σ²_{b,i} is the covariance corresponding to each b_{c,i}, chosen to maintain the same clutter-to-noise ratio (CNR) as the clutter covariance matrix R_c in (3) [13], [14], under the precondition of the same noise variance σ²_n. Consequently, we can calculate the proposed KA-GIP statistic by substituting (15) into (7):

GIP_KA(l) = x^H(l) R_KA^{-1} x(l).  (16)

It can be observed that the clutter covariance matrix in the proposed KA-GIP statistic contains only the homogeneous clutter information of the range cell under test. Furthermore, the estimated clutter covariance matrix R̂_E no longer has any influence on the KA-GIP statistic, because the KA clutter covariance matrix R_KA has replaced R̂_E. Hence, the non-homogeneous training samples can be identified accurately and eliminated from the estimation of the clutter covariance matrix for MIMO-STAP. Thus, the outliers can be eliminated from the training sample set more effectively by the proposed KA-GIP NHD, and the target detection performance of MIMO-STAP will be improved.
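The sketch below illustrates the knowledge-aided variant: build a data-independent covariance from a set of clutter basis vectors, as in (15), and screen training snapshots against it via (16). Purely for illustration, the basis here would be taken from an eigendecomposition of a model covariance; the paper instead constructs the basis matrix B directly from the system parameters.

```python
# Illustrative knowledge-aided GIP screening (equations (15)-(16)).
# The basis is obtained here from an eigendecomposition of a model covariance
# for demonstration; the paper builds B directly from the system parameters.
import numpy as np

def ka_covariance(basis_vectors, clutter_powers, noise_power=1.0):
    """R_KA = sum_i sigma_b_i^2 b_i b_i^H + sigma_n^2 I."""
    D = basis_vectors.shape[0]
    R = noise_power * np.eye(D, dtype=complex)
    for b, p in zip(basis_vectors.T, clutter_powers):
        R += p * np.outer(b, b.conj())
    return R

def ka_gip(snapshots, R_ka):
    """KA-GIP statistic x^H R_KA^{-1} x for each training snapshot."""
    R_inv = np.linalg.inv(R_ka)
    return np.real(np.einsum("li,ij,lj->l", snapshots.conj(), R_inv, snapshots))

# Example basis (illustration only): dominant eigenvectors of a model covariance R
# w, V = np.linalg.eigh(R); basis, powers = V[:, -r_C:], w[-r_C:]
# R_ka = ka_covariance(basis, powers)
```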
It is worth noting that these clutter basis vectors can be calculated offline from the MIMO-STAP system parameters (M, N, K, α, β) and stored in memory. Compared with the PSWF-based clutter subspace knowledge in [22], the construction of the data-independent clutter covariance matrix proposed in this paper is more convenient, though the two methods achieve approximately the same knowledge-aided effect. Besides, if the covariance matrix were constructed directly by (3) [13], [27], although the discrete representation can approximate the continuous distribution of the real ground clutter, the number and angles of the clutter patches are not known exactly. This means that the parameter N_c and the patch angles are undetermined, and the clutter patches would have to be divided and located in the modeling process. The construction of the clutter matrix from the columns of B therefore has a lower computational complexity than (3). Hence, for a practical MIMO-STAP system, the proposed KA-GIP NHD is very simple to execute. The computational complexity of the proposed KA-GIP NHD is analyzed as follows. Firstly, in (15), the construction of the KA clutter covariance matrix R_KA can be realized through r_C multiplications of a KMN × 1 vector b_{c,i} by its conjugate transpose b^H_{c,i}; the computational cost of this step is O(r_C K²M²N²). Then, equation (16) involves the inversion of the matrix R_KA, with a computational complexity of O(K³M³N³). For the GIP statistics, the computational burden of x^H(l) R_KA^{-1} x(l) over L samples is O(LK²M²N² + LKMN). Taking all these factors together, the total computational complexity of the proposed KA-GIP NHD procedure is O(K³M³N³ + (r_C + L)K²M²N² + LKMN). Figure 2 illustrates the corresponding procedure of MIMO radar STAP with this newly developed NHD. It should also be noted that the spatial data of MIMO-STAP are equivalent to the MN × 1 virtual array output after matched filtering [8]. The spatial data correspond to the "Virtual Element" blocks in Fig. 2, i.e., the data of the virtual array elements, while the temporal data of MIMO-STAP belong to each "Pulse" shown in Fig. 2. Afterwards, the spatial data and the temporal data in each range cell are integrated as a whole when processed via the KA-GIP NHD procedure, as illustrated by the principle described above. In the final target detection step of Fig. 2, CFAR refers to "constant false alarm rate". CFAR detection is a common adaptive algorithm used in radar systems to detect target returns against a background of noise, clutter, and interference. According to the changing strength of the noise, clutter, and interference, the threshold level is adjusted adaptively to maintain a constant probability of false alarm. Thus, target detection can be efficiently realized by comparing the MIMO-STAP output with the adaptive threshold. This portion belongs to the processing step that follows our proposed NHD.
Simulations
From Fig. 3 we can see that the proposed KA subspace basis vectors possess almost all of the clutter power, and they can therefore be utilized to approximate the actual clutter subspace of MIMO radar. Compared with the eigenvalue decomposition (EVD) of the clutter covariance matrix, no samples are needed to obtain the KA clutter subspace basis vectors, and these basis vectors can easily be updated with the corresponding MIMO radar system parameters. The eigenvalues of the PSWF and the clutter power distribution on each PSWF basis vector are shown in Fig. 4 and Fig. 5, respectively. From the simulation results, it is proved that the PSWF-GIP method in [22] can also be applied to MIMO radar STAP. However, the EVD computation for obtaining the PSWF eigenvectors has to be implemented when using the PSWF-GIP method, with a computational complexity of O(K³M³N³). Thus, the KA-GIP NHD proposed in this paper is more convenient for application compared with the PSWF-GIP method, although approximately the same KA effect can be attained by both methods.
Experiment 2. Effectiveness and robustness of outlier detection
The corresponding calculation results of the conventional GIP statistics and the proposed KA-GIP statistics are shown in Fig. 6a) and b), respectively.For each method, the average values are obtained through 100 Monte Carlo trials.It should be noted that the Monte Carlo simulations, as well as other simulation experiments in this paper, are performed by MATLAB software on a computer with CPU Core i7 2.6 GHz and 8 GB of RAM.Comparing Fig. 6a) to Fig. 6b), it can be seen that the noise level in the background of Fig. 6b) is obviously much lower, and the proposed KA-GIP NHD improves the average statistics significantly.The conventional GIP NHD only detects two strong outliers, while the KA-GIP NHD has detected all the outliers.It indicates that our proposed NHD is much more effective for MIMO radar STAP than the conventional one.
Experiment 3. Clutter suppression and target detection performance of MIMO radar STAP
The STAP performance is measured by the improvement factor (IF), where IF is defined as the ratio of the output signal-to-clutter-plus-noise ratio (SCNR) to the input SCNR. Figure 7 shows the IF of the fully adaptive MIMO-STAP after the outliers in the training sample set are eliminated by the KA-GIP NHD, compared with the case in which the conventional GIP NHD is applied to eliminate the outliers. It can be seen that another notch emerges at the outlier Doppler frequency when the conventional GIP NHD is used, which is the so-called target self-nulling phenomenon. On the contrary, after using the new NHD proposed in this paper to eliminate the outliers, the MIMO-STAP performance is improved significantly, because only IID samples are taken into the STAP weight vector calculation and the effect of outliers, as well as the self-nulling phenomenon, has been completely avoided. Suppose there exist two targets, located in the 130th range cell and the 170th range cell, respectively. The signal-to-noise ratio (SNR) of the target in the 130th range cell is assumed to be 0 dB, while the SNR of the other target in the 170th range cell is -10 dB. The corresponding MIMO-STAP outputs are shown in Fig. 8, after applying the conventional GIP and the KA-GIP NHDs to remove the outliers, respectively. It can be observed from Fig. 8 that with STAP using the conventional GIP NHD, the remaining clutter power in all range cells under test is still too strong, and the weaker target signal in the 170th range cell cannot be detected accurately because of the self-nulling effect. At the same time, the peak value of the relatively stronger target in the 130th range cell is also not very distinct. On the other hand, MIMO-STAP with the KA-GIP NHD can suppress the clutter effectively and detect both targets correctly in the 130th and 170th range cells.
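For completeness, a minimal way to evaluate the improvement factor for a given weight vector is sketched below. The normalization by tr(R) and v^H v is a common formulation of IF as output SCNR over input SCNR and is stated here as an assumption, since the paper does not reproduce the formula.

```python
# Illustrative improvement-factor computation for a STAP weight vector.
# IF = |w^H v|^2 tr(R) / ((w^H R w)(v^H v)) is a common formulation assumed
# here; the paper only defines IF as output SCNR over input SCNR.
import numpy as np

def improvement_factor_db(w, v, R):
    num = np.abs(w.conj() @ v) ** 2 * np.real(np.trace(R))
    den = np.real(w.conj() @ R @ w) * np.real(v.conj() @ v)
    return 10.0 * np.log10(num / den)
```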
Conclusion
In this paper, we investigate the non-homogeneity detection technique for airborne MIMO radar STAP and propose a knowledge-aided GIP NHD for MIMO-STAP. The KA covariance matrix is constructed to replace the estimated covariance matrix; it is determined only by the system parameters and is not affected by the outliers at all. Because of this, the outliers can be eliminated more effectively by the proposed NHD, and the target detection performance of MIMO radar STAP in non-homogeneous clutter environments is improved significantly compared with the conventional GIP NHD. Moreover, the proposed KA-GIP NHD is very convenient to implement in a practical MIMO-STAP system and has great value for engineering applications.
The real scenario can be more complicated and severe than the case considered in this paper, for example a very rough surface. Under such clutter conditions, most regular training samples do not obey the IID property, and it is difficult to sufficiently estimate the clutter characteristics. Thus, we should address more complicated clutter environments in our future research and carry out practical experiments to test our method. Moreover, how to integrate a priori knowledge with the real scenario and develop a more effective KA-GIP NHD for MIMO-STAP is also a valuable perspective that will be explored in future work.
The basic MIMO-STAP parameters for the simulation are: transmit array element number M = 5, receive array element number N = 8, temporal pulse number K = 10, radar wavelength λ = 0.23 m, transmit array element spacing d_T = 0.92 m, receive array element spacing d_R = 0.115 m, α = N, β = 1, pulse repetition interval T = 500 μs, platform velocity V = 115 m/s, platform height H = 8000 m, and clutter patch number N_c = 180. The CNR is 40 dB. We assume that the cone angle of each outlier is 90° and the normalized Doppler frequency is 0.25, which are the same as those of the target. A non-homogeneous clutter scenario is presented, and there are in total L = 500 range cells in the training set. It should be noted that the value of L is less than double the system DOFs, i.e., L < 2MNK. This implies that the sample number requirement of the RMB rule cannot be satisfied, and the setting of the training sample number in this simulation accords with a real non-homogeneous clutter scenario. In this simulation, four outliers are taken into consideration.
Figure 3 shows the clutter power distribution on each KA clutter subspace basis vector of MIMO radar, which can be expressed as b^H_{c,i} R_c b_{c,i}, i = 1, 2, …, r_C. As a comparison, Figure 3 also illustrates the eigenvalues of R_c. The estimated clutter rank is correctly N + α(M − 1) + β(K − 1).
Table 1 lists the corresponding outlier parameters.
Attributed with Language Attitude in Indonesian Language Instruction as The Efforts to Knit Nationalism in a Frame of Diversity
This research aims to 1) describe the use of text attributed with language attitude in Indonesian language instruction and 2) describe students' responses to the instruction. This was a descriptive study in which data were collected by observation and a questionnaire. The subjects were 4 lecturers of Indonesian language and 258 students of Universitas Pendidikan Ganesha. The results of this study indicate that 1) the text attributed with language attitude was applied in 4 main steps, namely (1) preparation of conditions, (2) reading of the text in groups, (3) intergroup discussions, and (4) inferences; 2) in terms of student interest, 55.91% of students were very interested, 38.76% were interested, and 5.33% were less interested. In terms of the text's ability to inspire a sense of nationalism, 63.70% of students felt the text was very capable of doing so, 32.43% felt the text was capable enough, and 3.88% felt the text was less able to inspire nationalism within the frame of diversity. Thus, the learning was able to attract students, and the text used was able to awaken nationalism within the frame of diversity. The implication is the availability of teaching materials to nurture nationalism. Keywords: text, language attitude, Indonesian language, nationalism, diversity
Introduction
Recently, discussions of nationalism within the framework of diversity have become widespread. This cannot be separated from the various problems related to the spirit of nationalism that have occurred. There has been a degradation of the spirit of unity [1,2]. Therefore, it is necessary to build multicultural awareness to foster the spirit of nationalism within a frame of diversity. Multiculturalism is a spirit that appreciates diversity, whether cultural, ethnic, religious, or other forms of diversity [3,4,1,5]. Multicultural awareness is an instrument of national therapy [6]. In addition, an understanding of nationalism, identity politics, and a sense of solidarity also needs to be enhanced to foster a spirit of nationality in order to overcome this nation's growing problems [7,8]. Diversity should be viewed as a gift of God's grace, arising from the long journey of the nation, that must be cherished [9].
Permendikbud No. 21 of 2015 on the Character Building Movement at School is one expression of awareness of national identity and nationalism. In addition, the slogan "Aku Indonesia, Aku Pancasila" is also a sign of the need for strategic efforts to foster the spirit of nationalism within the framework of diversity in the present day. One such strategic effort is the application of the citizen project model in Citizenship learning [10]. The implementation of this model has been shown to develop students' nationalist attitudes. A study of the film "Land of Heaven Said" as a medium for fostering students' nationalist attitudes has also been conducted [11]. The results of that study indicate the influence of film media as a medium for growing nationalist attitudes. It appears that fostering the spirit of nationalism also needs to be done through the review of the history of nationalism itself [12,8].
The fostering of the spirit of nationalism has also been carried out in Indonesian Language and Literature learning. Studies of novels and their implications for Indonesian Language and Literature learning have been conducted in an effort to foster nationalism [13,14]. Another step taken is the development of anti-corruption education materials integrated with Indonesian language learning, one of whose goals is to foster nationalism [15]. Until now, there has been no research on Indonesian language learning in higher education aimed at fostering nationalism. Therefore, this research tries to offer an approach to overcoming the problem of nationalism within the framework of diversity. The approach is a text charged with language attitudes. The content of language attitudes can be used as a means of fostering love for the Indonesian language in particular and for the Indonesian nation in general. Indonesian is the foundation of the nation [16]. It is the adhesive language of the nation.
Method
This research is a descriptive study. The data of this research were 1) the steps of using the text of language attitudes in Indonesian language learning and 2) the students' responses toward the learning. The subjects were 4 lecturers of Indonesian language subjects and 258 students of Universitas Pendidikan Ganesha. Two methods of data collection were used in this research, namely observation and a questionnaire. The observation was conducted to determine the learning steps applied by the lecturers, and the questionnaire was administered to determine the students' responses toward the learning process. Data analysis was done in three stages, namely data reduction, data presentation, and inference and verification.
The steps of Indonesian Language Learning by using the Text of Language Attitudes were applied in 4 steps
1) Preparation
This step aimed to stimulate students' interest and motivation in learning. At this stage, the lecturer explained the various problems faced by this nation related to nationalism within the frame of diversity. This stage is an important step considering that motivation is the overall driving force within learners that initiates learning activities, sustains them, and gives direction so that the learning objectives can be achieved [17,18]. Research results have shown that the motivation factor contributes 67% to reading culture [19]. Therefore, it is important to generate students' motivation before learning Indonesian by using the phenomenon text of language attitudes. Besides storytelling, another activity carried out by the lecturer at this step was showing pictures or videos related to nationalism problems. The use of historical stories and problems from everyday life was aimed at making the problems more easily understood by the students.
2) Reading text in groups
When the students were motivated to learn, the next stage was grouping. The groups were heterogeneous groups consisting of 3-5 students. Heterogeneous group formation is very appropriate for the college level and can improve learning outcomes [20,21,22]. The next stage was distributing the phenomenon text with language attitude content. The students were assigned to read the texts, discuss them in groups, and trace information related to the reading material. What the students did at this stage was to find sources and choose the best ones. A variety of information sources can be used, in the form of photographs, drawings, texts, diagrams, videos, etc., so it is necessary to sort out related sources that can be used to better understand the phenomenon text distributed by the lecturers [23]. The investigation of the information was divided into 5 stages, namely (1) determining the information needs, (2) accessing the needed information effectively and efficiently, (3) critically evaluating and sorting the information, (4) understanding the information, and (5) using it ethically and legally [24]. The results of the research showed that these steps had been carried out by the students under the guidance of the lecturers.
3) Intergroup discussion
At this stage, the students held an intergroup discussion, conveying their understanding of the text they had read, enriched by the other information they had gathered. The other groups listened and commented on each group's understanding. This intergroup discussion aims to re-evaluate the understanding that each group had built in the previous step.
4) Conclusion
This step was carried out jointly by students and lecturers. The lecturers gave students the opportunity to draw conclusions regarding the texts that had been distributed, and then helped the students summarize the texts in accordance with the results of the discussions.
Student Response
The students' response toward Indonesian Language Learning by using the Text of Language Attitudes can be seen from their interest, shown in Figure 1.
Figure 1. Student interest: very interested (55.91%), interested (38.76%), less interested (5.33%).
Figure 1 shows that students are interested in learning Indonesian by using texts of language attitudes. Four questions were used to determine the students' interest in following the learning process: (1) the attractiveness of the learning, (2) feelings during the learning, (3) motivation, and (4) the desire to learn by using texts of language attitudes. Indonesian language learning so far has run well, but it has not been able to attract students' interest because of the limited resources and learning media used by lecturers [25,26]. The presence of texts laden with language attitudes is expected to overcome this weakness of previous Indonesian language learning.
The ability of the Text of Language Attitudes in arousing the spirit of nationalism in the frame of diversity can be seen in Figure 2.
Figure 2. Text ability to awaken the spirit of nationalism.
Figure 2 shows that the text of language attitudes is capable of arousing the spirit of nationalism within a frame of diversity. The six indicators used to measure nationalism are (1) the position and role of the Indonesian language, (2) awareness of threats and challenges, (3) awareness of diversity, (4) awareness of one's own role in maintaining nationalism, (5) love of the motherland, and (6) a positive attitude toward language and nation. The results of this study reinforce the premise that education, language, and culture are important factors in fostering nationalism [27]. In addition, they support the view that nationalist attitudes can be formed through language learning [28]. Similar efforts have been made by Malaysia through 'Enforcing Bahasa Malaysia and Strengthening English' [29] and by the Taiwanese community through the 'Mother Language Education' curriculum [30]. Therefore, language learning is packaged using texts of language attitudes with the goal of developing positive language attitudes that can arouse nationalism within a frame of diversity. A strong and prominent national identity can overcome ethnic barriers to trust in diverse communities [31].
Conclusion
Indonesian Language Learning using the Text of Language Attitudes is conducted through four main stages: preparing conditions, reading in groups, intergroup discussion, and drawing conclusions. Students were interested and felt that the texts of language attitudes used in the learning process can foster the spirit of nationalism. The implication of this research is that such texts can be used as a medium for growing nationalism and uniting it within the frame of diversity.
Gas-Phase and Aqueous Photocatalytic Oxidation of Methylamine: The Reaction Pathways
Photocatalytic oxidation (PCO) of methylamine (MA) on titanium dioxide in aqueous and gaseous phases was studied. A simple batch glass reactor for aqueous PCO and an annular continuous flow reactor for the gas-phase PCO were used. Maximum aqueous PCO efficiency was achieved in alkaline media. Two mechanisms of aqueous PCO—decomposition to formate and ammonia, and oxidation of organic nitrogen directly to nitrite—lead ultimately to CO2, water, ammonia, and nitrate: formate and nitrite were observed as intermediates. A part of the ammonia formed in the reaction was oxidized to nitrite and nitrate. Volatile PCO products of MA included ammonia, nitrogen dioxide (NO2), nitrous oxide (N2O), carbon dioxide, and water. Thermal catalytic oxidation (TCO) resulted in the formation of ammonia, hydrogen cyanide, carbon monoxide, carbon dioxide, and water. The gas-phase PCO kinetics is described by the monomolecular Langmuir-Hinshelwood model. No deactivation of TiO2 catalyst was observed.
INTRODUCTION
Monomethylamine (MA) is one of the possible photocatalytic oxidation (PCO) products of an eco-toxicant causing great concern: unsymmetrical dimethylhydrazine ((CH3)2N−NH2, UDMH), used mostly as a component of rocket propellants, and also in chemical synthesis, as a stabilizing organic fuel additive, an absorbent for acid gases, and in photography [1]. The present research is part of a broader study targeting the disclosure of the reaction pathways in PCO of UDMH, the results of which will be presented later.
Conventional strategies for polluted water treatment have certain drawbacks. While the combination of nitrification and denitrification can in principle deal with nitrogen-containing waste streams, it is limited by having to cope with widely changing concentrations of contaminants and by the sensitivity of bacteria. Adsorption and reverse osmosis can be applied but, because of their low efficiency, a subsequent waste treatment has to follow [2]. In the gas phase, a common industrial practice is incineration, which for nitrogen-containing compounds may result in the formation of nitrogen oxides such as NO, NO2, and possibly N2O, contributing to the formation of photochemical smog, the greenhouse effect, and stratospheric ozone depletion. To mitigate these unwanted environmental effects, equipment modifications and selective non-catalytic and catalytic reduction processes have been required in order to reduce NOx emissions [3].
Methylamine decomposition under oxidative conditions has been reported by different researchers; an exhaustive review was published by Kantak et al. [3]. It has been reported that at 623 K, the greater part of MA is initially converted to NH3, with a small fraction forming NOx, which increases with increasing O2 concentration. At low temperatures, around 523 K, hydrogen cyanide (HCN) is not formed in appreciable quantities, due to early C−N bond scission. At higher temperatures (1160-1600 K), however, HCN has been observed as a product of MA thermal decomposition [4]. Higashihara et al. [5] suggested that MA thermal decomposition may be a combination of two pressure- and temperature-dependent processes: the unimolecular scission of the C−N bond and the stripping of four H atoms from the parent molecule to form HCN and H2.
Photocatalytic oxidation (PCO) over titanium dioxide may present a potential alternative to the air and polluted water treatment strategies mentioned above, as the PCO reaction proceeds effectively at ambient conditions, although higher temperatures may also be applied. Ameen et al. [6] reported that rapid gas-phase PCO of MA produced a minor amount of NOx, and their study of surface intermediates suggested the formation of nitrile bonds.
Aqueous phase
A 40 wt% aqueous solution of monomethylamine (Aldrich) was used in all tests. 2-M sulphuric acid and 4-M sodium hydroxide were used to adjust the solution to the required pH. Correcting the pH every second hour of treatment was sufficient to keep this parameter constant to within a decimal increment. The titanium dioxide powder used was Degussa P25.
A thermostatted 0.2-L batch glass reactor (inner diameter 100 mm, aperture 40 m² m⁻³), equipped with a magnetic stirrer, was used for the PCO of MA. All experiments were compared with reference samples kept under identical conditions except for the UV radiation. A 365-nm 15-W low-pressure mercury UV lamp (Philips TLD) was positioned horizontally over the reactor. The UV irradiance was about 0.37 mW cm⁻², measured with a UVX radiometer at a distance corresponding to the level of the reactor's surface. Preliminary experiments showed that the 365-nm near-UV light by itself did not exhibit any activity in decreasing the MA concentration in the aqueous phase.
The adsorption experiments with MA were carried out at concentrations of 100 and 200 mg L −1 .A fixed amount of titanium dioxide of 1 g L −1 was placed in the flasks with magnetic stirrers for 24 hours at 298 K.The measurements of MA adsorption on TiO 2 were carried out using total organic carbon (TOC) measurements in liquid phase before and after adsorption by means of the TOC analyzer Shimadzu TOC 5050A.The adsorption experiments were carried out at pH from 2 to 11.7.
A HACH DR/2000 spectrophotometer was used for chemical oxygen demand (COD) and ammonia analysis.The analyses were carried out in agreement with the standard procedure [7].A Dionex DX-120 ion exchange chromatograph was used for the analysis of anions as intermediate and end products-nitrate, nitrite, and formate concentrations.The species were identified by comparing their retention peaks with those of standard anion solutions.
Gas phase
Gas-phase PCO of MA over UV-illuminated TiO 2 was studied using an annular photocatalytic reactor having an inner diameter of 32 mm.The reactor's length was 165 mm and the total volume of the empty space of the reactor was 55 mL.The annular gap between the wall of the lamp and the inner wall of the reactor was 3.5 mm.A 365-nm 15-W low-pressure mercury UV-lamp (Sylvania, UK) was positioned coaxially in the reactor; the diameter of the lamp was 25 mm.The preliminary experiments carried out with no TiO 2 catalyst applied showed that the 365-nm light did not exhibit an activity in decreasing MA concentration in gaseous phase.
The inner wall of the reactor was coated with TiO 2 (Degussa P25) by rinsing with a TiO 2 aqueous suspension, repeated 25 times, and each rinse was followed by drying.The reactor was assembled with the lamp after the catalyst had been attached to the reactor's wall.Approximately 0.3 g of TiO 2 coated about 197 cm 2 of the reactor (1.5 mg cm −2 ).The irradiance of the UV-lamp was measured with a UVX radiometer at a distance of 3.5 mm from the lamp and averaged about 3.8 mW cm −2 .The reactor was used in a continuous flow mode.
In the experiments, an evacuated gas cylinder was first filled with the desired amount of gaseous MA (Aldrich) through an injection port, and then filled with synthetic air (20% O2, 80% N2). The air stream containing MA was blended with diluent gas to deliver the desired volatile organic compound concentration to the reactor. The temperature in the reactor during the PCO reactions was maintained at 353 and 373 K. It was adjusted with heating tape wrapped around the reactor, controlled by an Omega CN 9000A temperature regulator with a K-type thermocouple. The temperature deviations did not exceed ±1 K.
The gas flow rate was 3.03 L min −1 , which made the contact time equal to 1.1 second.This contact time was sufficient to reliably register the difference between MA concentrations in the inlet and outlet streams, keeping that difference within 30 to 60% limits.The reaction products were analyzed by a Perkin Elmer 2000 FT-IR spectrometer equipped with a Sirocco 10.6-m gas cell.Inlet concentrations of MA varied from 90 to 560 ppmv (4.02 • 10 −3 to 2.50 • 10 −2 mol L −1 ).No humidity was introduced to the air stream.
The experiments on thermal catalytic decomposition of MA were conducted in the dark at 573 K.At lower temperatures, no measurable decrease in MA concentration was observed.The total volume of the reactor was increased to 0.105 L, which at a gas flow rate of 3.03 L min −1 made the contact time as long as 2.1 seconds.
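As a quick consistency check, the quoted contact times follow directly from the reactor volumes and the gas flow rate given above (a minimal sketch):

```python
def contact_time_s(reactor_volume_l, flow_rate_l_per_min):
    """Residence (contact) time in seconds for a continuous-flow reactor."""
    return reactor_volume_l / (flow_rate_l_per_min / 60.0)

print(contact_time_s(0.055, 3.03))   # PCO reactor (55 mL), ~1.1 s
print(contact_time_s(0.105, 3.03))   # enlarged reactor for TCO (0.105 L), ~2.1 s
```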
Aqueous PCO of MA
The aqueous PCO efficiency for MA was studied as a function of pH. Total organic carbon (TOC) appeared to be a viable parameter since the only detected carbon-containing byproduct, formic acid, observably reacts faster than the parent compound, thus making MA ultimately mineralized in PCO reactions. The PCO efficiency E was defined as the decrease in TOC divided by the amount of energy reaching the surface of the treated sample:

E = (Δc · V) / (I · s · t),   (1)

where E = PCO process efficiency, mg W⁻¹ h⁻¹; Δc = TOC decrease, mgC L⁻¹; V = the volume of the sample to be treated, L; I = irradiation intensity, mW cm⁻²; s = irradiated surface area of the solution, cm²; t = treatment time, h.
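A minimal sketch of this efficiency calculation (the TOC decrease used below is illustrative; the reactor volume, irradiance, surface area, and 24-hour treatment time follow the experimental section and Figure 1):

```python
def pco_efficiency(delta_toc_mg_per_l, volume_l, irradiance_mw_per_cm2,
                   surface_cm2, time_h):
    """PCO efficiency E in mg W^-1 h^-1.

    delta_toc_mg_per_l    : TOC decrease, mgC/L
    volume_l              : treated sample volume, L
    irradiance_mw_per_cm2 : UV irradiance at the solution surface, mW/cm^2
    surface_cm2           : irradiated surface area, cm^2
    time_h                : treatment time, h
    """
    energy_w_h = irradiance_mw_per_cm2 * surface_cm2 * time_h / 1000.0  # mW*h -> W*h
    removed_mg = delta_toc_mg_per_l * volume_l
    return removed_mg / energy_w_h

# 0.2-L reactor, 100-mm inner diameter (~78.5 cm^2), 0.37 mW/cm^2, 24-h treatment;
# a 20 mgC/L TOC decrease is an illustrative placeholder value.
print(pco_efficiency(delta_toc_mg_per_l=20.0, volume_l=0.2,
                     irradiance_mw_per_cm2=0.37, surface_cm2=78.5, time_h=24))
```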
The results of PCO of MA solutions are presented in Figure 1. The lowest efficiency was observed within the pH range from 2 to 7: only a minor decrease in the MA concentration was observed in low-pH PCO. A drastic increase in the PCO efficiency was observed with pH increasing from about 7 to 11.7. At pH 11.9-12.5, the MA volatility was high enough that no MA was found in the reference samples after about eight hours of treatment: the target compound had simply volatilized from the solution. This circumstance made proper TOC measurements in the PCO-treated samples impossible: the residual TOC had no reference with which it could be compared. The low efficiency in acidic media may be explained by poor adsorption of protonated MA molecules on the protonated TiO2 surface. The pKa value for MA is 10.62 at 298 K, that is, MA is a strong base. This value shows that the amount of protonated MA ions exceeds the amount of non-protonated molecules, especially under acidic conditions. Positively charged protonated MA molecules are electrostatically repelled by the positively charged TiO2 surface (the isoelectric point pHzpc for TiO2 is 6.3), resulting in poor PCO. The PCO efficiency showed an increase with increasing pH from 7 to 11.7, which could probably be explained by the closer attraction between the negatively charged TiO2 surface and the protonated MA molecules. The electrostatic attraction, however, does not imply adsorption of MA on the TiO2 surface. Therefore, the increase in the PCO efficiency of MA at alkaline pH may be explained by the oxidation of MA with the OH radicals generated from OH⁻ ions.
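As a side check of this acid-base argument, the protonated fraction of MA at a given pH follows from the Henderson-Hasselbalch relation with the quoted pKa of 10.62 (a minimal sketch; the pH values are those discussed above):

```python
def protonated_fraction(pH, pKa=10.62):
    """Fraction of methylamine present as CH3NH3+ at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (2, 7, 11.7):
    print(pH, round(protonated_fraction(pH), 3))
# At pH 2-7 essentially all MA is protonated; near pH 11.7 most of it is neutral.
```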
The adsorption experiments for MA showed negligible adsorption over the pH range 2 to 11.7; within the experimental error interval, the concentration of MA in the solution did not change as a result of addition of the photocatalyst.This leads to the conclusion that the adsorption of MA on TiO 2 did not play a crucial role in the PCO also under alkaline conditions, where PCO showed a higher efficiency.
In the pH range from 2 to 7, no anionic intermediates were observed among the products of aqueous PCO of MA by anion-exchange chromatography, nor was ammonia found in this pH range. Nitrite, nitrate, ammonia, and formate were identified as PCO products of MA at pH above 7.0. A low concentration of nitrate was observed at pH 9, indicating that only slight oxidation of MA occurs at this pH. The amounts of nitrite and nitrate increased with increasing pH. The maximum PCO efficiency was observed at pH 11.7, which may therefore be considered the optimum pH for the PCO of MA.
Figure 2 shows the formation of nitrite and nitrate ions in an alkaline solution during aqueous PCO of MA.In the first 7 hours of PCO of MA, both nitrite and nitrate ion concentration increased.After that, the nitrate ion concentration increased in contrast to the decrease in the nitrite ion concentration, which can be explained by conversion of nitrite to nitrate.
The formation of formate and ammonia was observed in an alkaline medium.The higher the pH, the more formate was observed in the treated samples.
The formate concentration was about 8.5 times lower than the initial concentration of MA, which indicates that formate was not accumulated due to its high reactivity on UV-irradiated TiO 2 surface.
At the end of the experiment, the formate concentration decreased.The greatest amount of ammonia was observed at pH 11.The curve of the ammonia concentration versus treatment time reached a maximum after somewhere between 4 and 20 hours of treatment, decreasing afterwards most probably due to ammonia oxidation to nitrite, as pointed out also by other authors [8][9][10][11].
Figure 3 illustrates the relationship between ammonia and formate in the treated samples at pH 11.Both formate concentration and ammonia concentration reached a maximum followed by a gradual decrease; it is supposed that the formate is oxidized to CO 2 and H 2 O by PCO.
It is interesting to note the low concentration of formate compared to the residual MA concentration; the TOC analysis showed slow mineralization of MA, thus confirming a faster decomposition of formate than MA (see Figure 3).The profile of the TOC degradation shows no accumulation of organic byproducts.The inflection point of the formate concentration is noticeably lower than that of MA, which also confirms that the degradation of MA is slower than the degradation of formate.
The fact that the concentration of ammonia is lower than that of formate indicates possible ammonia oxidation and direct oxidation of organic nitrogen to nitrite.The pH tending to decrease in the experiments also indicates possible free ammonia volatilization, mineralization of formate, and consumption of OH-ions in PCO reactions.
The degradation of ammonia to nitrite is enhanced at high pH range, which is consistent with previous findings [12][13][14].When the pH increased from 10.3 to 11.4, the amount of nitrite ion formed increased from 0.7 mg L −1 at the maximum concentration point at 8 hours of the treatment to 9 mg L −1 at the same time; the pH was varied at 0.2 increments and the concentration of nitrite increased consistently.Later on in the experiments, the concentration of nitrite decreased at all pH values (see also Figure 2).
Figure 2 indicates that nitrite seems to be oxidized easily to nitrate, although the conversion of ammonia or MA, or both, to nitrite seems to proceed fast: the concentration of nitrite exceeds the concentration of ammonia at the same time of treatment (see Figures 2 and 3).
The general trends of oxidation of MA are summarized by scheme (2):

CH3NH2 → HCOO⁻ + NH3,  HCOO⁻ → CO2 + H2O,  NH3 → NO2⁻ → NO3⁻;
CH3NH2 → NO2⁻ → NO3⁻ (parallel direct oxidation of organic nitrogen).   (2)
Gas-phase PCO of MA
Since the products of the thermal catalytic oxidation (TCO) of MA have been established previously [3][4][5], the experiments in the dark at elevated temperature 573 K were carried out to verify the reliability of the measurements.The TCO of MA proceeded along two reaction pathways: dehydrogenation of the MA molecule with hydrogen cyanide formed as a product through the methanimine stage, and C−N bond scission forming ammonia and the carbon mineralization products.The volatile products leaving the reactor included ammonia, hydrogen cyanide, carbon dioxide, and water, which is consistent with the findings of Kantak et al. [3].
The PCO volatile products, visible in the infrared spectra, included ammonia, nitrogen dioxide, nitrous oxide, carbon dioxide, and water.One of the infrared spectra of the PCO products of MA is shown in Figure 4.The MA-ammonia balance showed a fraction of a percent discrepancy, that is, nitrogen oxides appeared in small amounts.
Formation of nitrous oxide (N2O) was described by Pérez-Ramírez et al. [15] as a result of PCO of ammonia in the presence of nitrogen oxide. Nitrogen oxide (NO), presumably formed in the PCO reaction from ammonia, was partially oxidized further to nitrogen dioxide and partially reacted with residual ammonia, forming nitrous oxide (N2O). Nitrogen oxide (NO) was not seen among the reaction products because of its high reactivity.
PCO and TCO kinetics
As previously established by Kim and Hong [16], the complex mechanisms of photocatalytic reactions are difficult to describe over an extended reaction time with a simple model. Therefore, the kinetic modeling is usually restricted to the analysis of the initial rate of photocatalytic degradation. This can be obtained from a minimum detectable conversion of the reactant at a minimum contact time. The Langmuir-Hinshelwood (L-H) model of monomolecular reaction kinetics, the reciprocal form of which is presented by (4), has been widely used for gas-phase photocatalytic reactions:

1/r0 = 1/k + 1/(k · K · C0),   (4)

where r0 is the initial reaction rate (mol m⁻³ s⁻¹), C0 is the inlet concentration of the reactant (mol m⁻³), k is the reaction rate constant (mol m⁻³ s⁻¹), and K is the Langmuir adsorption coefficient (m³ mol⁻¹).
Figure 5 shows the conversion of MA as a result of PCO versus inlet reactant concentration: the conversion varies inversely with the inlet concentration at the tested temperatures, that is, the process is obviously not of the first order and may fit to the L-H description.
The initial rate of MA PCO was observed to be consistent with the L-H kinetic model, which is in agreement with the findings of Ameen et al. [6]. A linear plot of 1/r0 versus 1/C0 (see (4)) is shown in Figure 6, from which the k and K values were obtained. Reaction rate constants and the Langmuir adsorption coefficients for PCO of MA at the tested temperatures are given in Table 1.
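A minimal sketch of how such a fit can be carried out (the concentration and rate arrays below are placeholders, not data from the paper): a linear least-squares fit of 1/r0 against 1/C0 gives the intercept 1/k and the slope 1/(k·K).

```python
import numpy as np

# Placeholder data: inlet concentrations C0 (mol m^-3) and initial rates r0 (mol m^-3 s^-1)
C0 = np.array([4.0e-3, 8.0e-3, 1.5e-2, 2.5e-2])
r0 = np.array([1.1e-3, 1.7e-3, 2.2e-3, 2.6e-3])

# Reciprocal Langmuir-Hinshelwood form: 1/r0 = 1/k + (1/(k*K)) * (1/C0)
slope, intercept = np.polyfit(1.0 / C0, 1.0 / r0, 1)
k = 1.0 / intercept       # reaction rate constant
K = intercept / slope     # Langmuir adsorption coefficient
print(f"k = {k:.3e} mol m^-3 s^-1, K = {K:.3e} m^3 mol^-1")
```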
One can see the reaction rate constant increasing and the adsorption coefficient decreasing with increasing temperature, making the apparent reaction rate practically independent of the temperature within the tested range.In contrast to PCO, TCO kinetic behavior indicated the first order process: the conversion degree varied proportionally with the inlet concentration of MA at 573 K (see Figure 5).
No indication of photocatalyst deactivation was observed in multiple (more than 100) experimental runs of 2 hours each, which is in agreement with the observations reported by Ameen et al. [6]. The negligible deactivation of the photocatalyst may be explained by the phenomenon reported by Kolinko et al. [17]: nitric acid, formed from the negligible amount of transformed nitrogen, enhances the adsorption of alkaline MA on the acidified TiO2 surface, while the organic nitrates and ammonium nitrate present there are themselves photocatalytically oxidized with no additional HNO3 formation. The slow (or, after the first formation of nitric acid, zero) accumulation of nitrates thus keeps the catalyst deactivation rate correspondingly slow or zero.
Aqueous PCO
The present research showed the aqueous PCO of MA on a titanium dioxide catalyst to be successful only in alkaline media. The optimum pH was 11.7, according to the PCO efficiency parameter. The two pathways of PCO of MA (decomposition to formate and ammonia, and oxidation of organic nitrogen directly to nitrite) lead ultimately to carbon dioxide, water, ammonia, and nitrate. Formate and nitrite were observed as the intermediates. Formate decomposes further to carbon dioxide and water; part of the ammonia reacts on the TiO2 surface to produce nitrite, which ultimately leads to nitrate. Nitrite is also formed directly from the MA molecule in a parallel reaction. The predominant role of OH radicals in PCO of MA is presumed.
Gas-phase PCO
When considering the gas phase, methylamine is easily oxidized photocatalytically on UV-irradiated TiO 2 .The volatile PCO products of MA in the gas phase included ammonia, nitrogen dioxide, nitrous oxide, carbon dioxide, and water; the TCO products of MA at 573 K included ammonia, hydrogen cyanide, carbon dioxide, and water.The PCO reaction kinetics followed the Langmuir-Hinshelwood description.The photocatalyst demonstrated stable activity at temperature and concentration ranges tested in the experiments.
In both reactions, aqueous and gaseous phases, oxidized nitrogen species (nitrogen oxides in the gas phase, nitrite and nitrate anions in water) are formed through intermediate products, although ammonia prevails as the end product.No toxic cyanide is formed in the PCO reactions.The gas-phase reaction utilizes the radiation energy more effectively.
Figure 1: Dependence of PCO efficiency on pH for a synthetic solution of MA (the efficiency was calculated for 24-hour treatment of MA solutions).
Table 1: Reaction rate constants and Langmuir adsorption coefficients for PCO of MA.
A Composite Model for the 750 GeV Diphoton Excess
We study a simple model in which the recently reported 750 GeV diphoton excess arises from a composite pseudo Nambu-Goldstone boson---hidden pion---produced by gluon fusion and decaying into two photons. The model only introduces an extra hidden gauge group at the TeV scale with a vectorlike quark in the bifundamental representation of the hidden and standard model gauge groups. We calculate the masses of all the hidden pions and analyze their experimental signatures and constraints. We find that two colored hidden pions must be near the current experimental limits, and hence are probed in the near future. We study physics of would-be stable particles---the composite states that do not decay purely by the hidden and standard model gauge dynamics---in detail, including constraints from cosmology. We discuss possible theoretical structures above the TeV scale, e.g. conformal dynamics and supersymmetry, and their phenomenological implications. We also discuss an extension of the minimal model in which there is an extra hidden quark that is singlet under the standard model and has a mass smaller than the hidden dynamical scale. This provides two standard model singlet hidden pions that can both be viewed as diphoton/diboson resonances produced by gluon fusion. We discuss several scenarios in which these (and other) resonances can be used to explain various excesses seen in the LHC data.
I. INTRODUCTION
If the recently announced diphoton excess at ≃ 750 GeV [1,2] remains as a true signal, it indicates a long-awaited discovery of new physics beyond the standard model. In a recent paper [3], we have proposed that this excess may result from a composite spin-0 particle decaying into a two-photon final state. Among the possibilities discussed, here we concentrate on the case in which the particle is a composite pseudo Nambu-Goldstone boson associated with new strong dynamics at the TeV scale, which is singly produced by gluon fusion and decays into two photons. For works that have appeared around the same time and discussed similar models to those in [3], see [4]; other related works include [5]. A class of theories involving similar dynamics with vectorlike matter charged under both hidden and standard model gauge groups was studied in [6,7]. The possibility of obtaining standard model dibosons from a composite scalar particle was utilized in a different context in [8,9].
In this paper, we study a simple model presented in Ref. [3] which introduces an extra gauge group G H = SU (N ) at the TeV scale, in addition to the standard model gauge group G SM = SU (3) C × SU (2) L × U (1) Y , with extra matter-hidden quarks-in the vectorlike bifundamental representation of G H and SU (5) ⊃ G SM . Here, SU (5) is used only as a simple mnemonic; it does not mean that the three factors of G SM are unified at the TeV scale. In this model, the diphoton resonance is one of the pseudo Nambu-Goldstone bosons-hidden pions-which is neutral under G SM and has the mass of ≃ 750 GeV. We consider this particular model here because it is theoretically simple and leads to predictions that can be tested in the near future. The only free parameters of the model, beyond the size N of G H , are the dynamical scale of G H and two (in general complex) masses for the hidden quarks, out of which two numbers are fixed by the mass and diphoton rate of the resonance. We calculate all the masses of the hidden pions, which for a given N depend only on a single free parameter: the ratio of the absolute values of the two hidden quark masses. We find that two colored hidden pions must be near the current experimental limits (unless N is unreasonably large); they respectively lead to narrow dijet, Z-jet, and γ-jet resonances below ∼ 1.6 TeV and to heavy stable charged and neutral hadrons (or leptoquark-type resonances) below ∼ 1.2 TeV. We also discuss other, higher resonances in the model, which are expected to be in the multi-TeV region, as well as the effect of possible CP violation in the G H sector on the standard model physics. Furthermore, we investigate what the structure of the model above the TeV region can be. This includes the possibility of (a part of) the theory being conformal and/or supersymmetric. This affects collider signatures and cosmological implications of one of the colored hidden pions: the one that does not decay through G H or standard model gauge dynamics. We also perform detailed analyses of cosmology of hidden baryons.
We then discuss an extension of the minimal model which has an extra hidden quark that is singlet under the standard model gauge group. One of the salient features of this model is that there are two diphoton (diboson) resonances in the hidden pion sector, which can both be produced by gluon fusion and decay into two electroweak gauge bosons. We consider several scenarios associated with these and other related resonances. Representative ones are: (i) the two resonances correspond to two diphoton "excesses" seen in the ATLAS data at ≃ 750 GeV and ≃ 1.6 TeV [1] (although the latter is much weaker than the former); (ii) the two resonances are both near ≃ 750 GeV with a mass difference of 10s of GeV, explaining a slight preference to a wide width in the ATLAS data; (iii) the two resonances are at ≃ 750 GeV and ≃ 2 TeV, responsible for the 750 GeV diphoton excess [1,2] and the 2 TeV diboson excess [10], respectively. We calculate the masses of the hidden pions in the model and discuss their phenomenology. We find that the masses of the hidden pions can be larger than the case without the singlet hidden quark; in particular, the leptoquark type hidden pion can be as heavy as ∼ 1.5 TeV, depending on scenarios. We discuss physics of hidden pions and hidden baryons that decay only through interactions beyond the G H and standard model gauge dynamics. We find that cosmological constraints on this model are weaker than those in the model without the extra hidden quark. The organization of this paper is as follows. In Section II, we consider the minimal model and its phenomenology at the TeV scale. We calculate all the hidden pion masses and discuss their signatures and current constraints. We also discuss particles with higher masses, in particular the hidden η ′ meson and spin-1 resonances. In Section III, we study physics above the TeV scale, especially its implications for collider physics and cosmology. The hidden pion that is stable under the G H and G SM gauge dynamics as well as low-lying hidden baryons are studied in detail. We discuss the possibility that the G H sector is conformal and/or that the theory is supersymmetric above the TeV scale.
In Section IV, we study an extension of the model in which there is an extra hidden quark that is singlet under G SM and has a mass smaller than Λ. We discuss possible signals of two G SM -singlet hidden pions which can be viewed as diboson resonances produced by gluon fusion. Section V is devoted to final discussion. In the Appendix, we analyze the effect of possible CP violation in the G H sector on the standard model physics.
II. MODEL AT THE TEV SCALE
The model at the TeV scale is given by a hidden gauge group G_H = SU(N), with the dynamical scale (the mass scale of generic low-lying resonances) Λ, and hidden quarks charged under both G_H and the standard model gauge groups as shown in Table I. The hidden quarks have mass terms

L ⊃ m_D Ψ_D Ψ̄_D + m_L Ψ_L Ψ̄_L + h.c.,   (1)

where we take m_D,L > 0, which does not lead to a loss of generality if we keep all the phases in the other part of the theory. These masses are assumed to be sufficiently smaller than the dynamical scale, m_D,L ≪ Λ, so that Ψ_D,L and Ψ̄_D,L can be regarded as light quarks from the point of view of the G_H dynamics. Note that the charge assignment of the hidden quarks is such that they form a vectorlike fermion in the bifundamental representation of G_H and SU(5) ⊃ G_SM. The model therefore preserves gauge coupling unification at the level of the standard model; this is significant especially given the possible threshold corrections around the TeV and unification scales (see, e.g., [11]). The unification of the couplings becomes even better if we introduce supersymmetry near the TeV scale (see Section III).
A. Hidden Pion for the 750 GeV Diphoton Excess
The strong G_H dynamics makes the hidden quarks condense,

⟨Ψ_D Ψ̄_D⟩ ≃ ⟨Ψ_L Ψ̄_L⟩ ≡ c ≠ 0.   (2)

These condensations do not break the standard model gauge groups, since the hidden quark quantum numbers under these gauge groups are vectorlike with respect to G_H [12]. The spectrum below Λ then consists of hidden pions, arising from spontaneous breaking of the approximate SU(5)_A axial flavor symmetry:

ψ (8, 1)_0,  χ (3, 2)_{-5/6},  ϕ (1, 3)_0,  φ (1, 1)_0,   (3)

where ψ, ϕ, and φ are real scalars while χ is a complex scalar, and the quantum numbers represent those under G_SM = SU(3)_C × SU(2)_L × U(1)_Y. The masses of these particles are given by [13]

m_ψ² = 2 m_D c/f² + ∆_C,   (4)
m_χ² = (m_D + m_L) c/f² + ∆_C + ∆_L + ∆_Y,   (5)
m_ϕ² = 2 m_L c/f² + ∆_L,   (6)
m_φ² = (2/5)(2 m_D + 3 m_L) c/f².   (7)

Here, f is the decay constant, and ∆_{C,L,Y} are contributions from standard model gauge loops, of order

∆_C ≈ (g_3²/16π²) Λ²,  ∆_L ≈ (g_2²/16π²) Λ²,  ∆_Y ≈ (g_1²/16π²) Λ²,   (8)

where g_3, g_2, and g_1 are the gauge couplings of SU(3)_C, SU(2)_L, and U(1)_Y, respectively, with g_1 in the SU(5) normalization. Using naive dimensional analysis [14], we can estimate the quark bilinear condensate and the decay constant as

c ≈ (N/16π²) Λ³,  f ≈ (√N/4π) Λ,   (9)

where we have assumed N ≳ 5, i.e. the number of colors is not much smaller than the number of flavors in the G_H gauge theory. For N ≪ 5, we might instead have c ≈ (5/16π²) Λ³ and f ≈ (√5/4π) Λ, but below we use Eq. (9) even in this case because the resulting differences are insignificant for our results.
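To see the numbers this implies, here is a minimal numerical sketch based on the naive dimensional analysis estimates quoted above and a standard GMOR-type relation m_π² ≈ 2 m_q c/f²; the GMOR form and the 90 GeV hidden quark mass used below are illustrative assumptions, not values taken from the paper.

```python
import math

def nda_estimates(Lambda_TeV, N=5):
    """Naive dimensional analysis estimates quoted in the text:
    condensate c ~ (N/16 pi^2) Lambda^3 and decay constant f ~ (sqrt(N)/4 pi) Lambda."""
    c = N / (16 * math.pi**2) * Lambda_TeV**3      # TeV^3
    f = math.sqrt(N) / (4 * math.pi) * Lambda_TeV  # TeV
    return c, f

def pion_mass(m_quark_TeV, Lambda_TeV, N=5):
    """GMOR-type estimate m_pi^2 ~ 2 m_q c / f^2 (assumed chiral-Lagrangian relation);
    gauge-loop contributions are neglected here."""
    c, f = nda_estimates(Lambda_TeV, N)
    return math.sqrt(2 * m_quark_TeV * c / f**2)

c, f = nda_estimates(3.2, N=5)
print(f"c ~ {c:.2f} TeV^3, f ~ {f:.2f} TeV")
print(f"m_pi ~ {pion_mass(0.09, 3.2):.2f} TeV for a 90 GeV hidden quark mass")
```

With Λ of about 3.2 TeV this gives a hidden pion mass of roughly 750 GeV for hidden quark masses of order 100 GeV, consistent with the requirement m_D,L ≪ Λ stated above.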
The couplings of the hidden pions with the standard model gauge fields are determined by chiral anomalies and are given in Eq. (10), where a, b, c = 1, · · · , 8 and α = 1, 2, 3 are SU(3)_C and SU(2)_L adjoint indices, respectively, and with t^a being half of the Gell-Mann matrices. We assume that the φ particle produced by gluon fusion and decaying to a diphoton is responsible for the observed excess [3], so we take m_φ ≃ 750 GeV (Eq. (11)). The decay of φ occurs through the interactions in Eq. (10) and leads to standard model gauge bosons. The diphoton rate at √s = 13 TeV is given by Eq. (12). Here, we have used the NNPDF 3.0 parton distribution function [15] and determined the overall normalization (the QCD K factor) such that it reproduces the production cross section of a standard model-like Higgs boson of mass 750 GeV at √s = 14 TeV [16]. Since the observed excess corresponds to σ(pp → φ → γγ) ≃ (6 ± 2) fb [17], this gives the value of f in Eq. (13). With this value of f, the upper limits from searches in the 8 TeV data [18] are evaded. The ratios of branching fractions of the various φ decay modes are given in Eq. (14), where e and θ_W are the electromagnetic coupling and the Weinberg angle, respectively, and we have ignored the phase space factors. These values are consistent with the constraints from searches of high-mass diboson and dijet resonances in the 8 TeV data [3,19]. Observing these decay modes in the 13 TeV run with the predicted rates would provide an important test of the model.
B. Other Hidden Pions
The identification of φ as the 750 GeV diphoton resonance leads, through Eq. (7), to a relation (Eq. (15)) fixing a combination of m_D and m_L, where we have used Eqs. (9,13). This, however, leaves the ratio r ≡ m_D/m_L undetermined. With Λ = 3.2 TeV × √(N/5), which is motivated by Eqs. (9,13), the masses of the other hidden pions are determined by Eqs. (4-6) in terms of r (Eqs. (16-18)). Here, in the second terms we have used Eq. (8) with unit coefficients, but we do not expect that using the true coefficients (which are not known in general) makes a significant difference. In the left panel of Fig. 1, we plot these masses for N = 5 as functions of r. In the right panel, we plot the maximal values of the ψ and χ masses, m_ψ|_{r→∞} and m_χ|_{r→∞}, as functions of N. We find that these particles are at m_ψ ≲ 1.6 TeV and m_χ ≲ 1.2 TeV (Eq. (19)) unless N is very large, N ≥ 10. We note that if m_D and m_L are unified at a conventional unification scale (around 10^14 - 10^17 GeV), then their ratio at the TeV scale is in the range r ≃ 1.5 - 3, with the precise value depending on the structure of the theory above the TeV scale. The ψ particle is created dominantly through single production by gluon fusion, whose cross section is plotted in the left panel of Fig. 2 for σ(pp → φ → γγ) = 6 fb. (The dependencies on f and N cancel for a fixed value of σ(pp → φ → γγ).) Once produced, ψ decays via interactions in Eq. (10) to gγ, gZ, and gg with the branching fractions given in Eq. (20), with B_ψ→gg = 1 − B_ψ→gZ − B_ψ→gγ. The lower bound on the mass of ψ from the LHC data so far [20,21] is about 1 - 1.4 TeV for σ(pp → φ → γγ) = 4 - 8 fb. For N < 5, this requires r to be not significantly smaller than 1. The χ particle is produced only in pairs because of the conservation of "D number" and "L number" at the renormalizable level, under which Ψ_D and Ψ_L transform as (1, 0) and (0, 1), respectively. The production cross section is plotted in the right panel of Fig. 2, ignoring possible form factors which may become important when the hierarchy between m_χ and Λ is not significant due to small N, e.g. N = 3. The signal of χ depends on its lifetime, which is determined by the strength of nonrenormalizable interactions between the hidden quarks and the standard model particles violating D and L numbers. (This issue will be discussed in Section III A.) Consider first the case in which χ is stable at collider timescales. In this case, a produced χ particle picks up a light quark of the standard model, becoming a heavy fermionic "hadron." Since the charge ±4/3 component of χ is heavier than the charge ±1/3 component by about 700 MeV [22], the former is subject to weak decays into the latter with cτ ≲ 1 mm, so that the final heavy hadron has a charge 0 or ±1. The mass splitting between these neutral and charged hadrons is of order MeV, so that the weak decay between them is slow and they can both be regarded as stable particles for usual collider purposes. The lower bound on the χ mass can thus be read off from that of the stable bottom squark [23] with the doubled production cross section as given in Fig. 2 (because of the twice larger number of degrees of freedom). The bound is about 900 GeV. On the other hand, if the nonrenormalizable interactions are strong, χ may decay promptly into a quark and a lepton. In this case, the lower bound on the χ mass is about 750 GeV - 1 TeV, depending on the details of the decay modes [24,25]. In any event, because of the theoretical expectations in Eq. (19), searches for the ψ and χ particles provide important probes of the model.
Finally, the ϕ particle is standard model color singlet, so it can be produced only through electroweak processes or decays of heavier resonances. The decay of ϕ occurs through interactions of Eq. (10). The ϕ ± decays into W γ and W Z with the branching fractions of cos 2 θ W ≃ 0.77 and sin 2 θ W ≃ 0.23, respectively, while ϕ 0 decays into γγ, γZ and ZZ with the branching fractions sin 2 (2θ W )/2 ≃ 0.35, cos 2 (2θ W ) ≃ 0.30, and sin 2 (2θ W )/2 ≃ 0.35, respectively. The current bounds on ϕ ±,0 are weak and do not constrain the model further.
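As a quick numerical cross-check of the branching fractions quoted above, the weak-mixing-angle combinations can be evaluated directly (a minimal sketch; sin²θ_W ≈ 0.231 is the only input, and the printed values approximately reproduce the numbers in the text up to rounding):

```python
sin2 = 0.231                      # sin^2(theta_W), approximate value
cos2 = 1.0 - sin2
sin2_2theta = 4.0 * sin2 * cos2   # sin^2(2 theta_W)

# Charged component: phi+- -> W gamma and W Z
print("B(W gamma) ~", round(cos2, 2), " B(W Z) ~", round(sin2, 2))

# Neutral component: phi0 -> gamma gamma, gamma Z, Z Z
print("B(gamma gamma) ~", round(sin2_2theta / 2, 2),
      " B(gamma Z) ~", round(1.0 - sin2_2theta, 2),
      " B(Z Z) ~", round(sin2_2theta / 2, 2))
```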
C. Hidden Eta Prime
Another interesting particle arising from the G H sector is the hidden η ′ state associated with the U (1) A axial symmetry, which we simply call η ′ below. The mass of this particle is expected to be at the dynamical scale [26] where we have used Eqs. (9,13) in the last expression. The couplings to the standard model gauge bosons can be estimated by U (1) A anomalies with respect to the standard model gauge groups This expression is valid in the large N limit, and we expect that it gives a good approximation even for moderately large N . The production cross section through gluon fusion calculated using this expression is depicted in Fig. 3. The energy dependence of the QCD K factor is at most of O(10%) and hence is neglected. Dominant decay modes of η ′ depend strongly on whether the G H sector respects CP (or parity) or not. In general, there is the possibility that the G H sector has significant CP violation due to complex phases of the hidden quark masses or the θ parameter for the G H gauge theory. This possibility is particularly natural if there is a QCD axion, eliminating the effect of CP violation on the QCD θ parameter (but no axion acting on the G H sector); see the Appendix. In this case, η ′ decays into two hidden pions through CP violating terms of the form where m and π collectively denote hidden quark masses and hidden pion fields, respectively. The final state then consists of either 2 quasi-stable particles, χχ, or 4 standard model gauge bosons. As discussed in the Appendix, CP violation of the G H sector may also be observed in the neutron electric dipole moment. If the G H sector respects CP , e.g. as in the case in which the axion mechanism also operates in the G H sector, then η ′ decays dominantly into 3π or 2 standard model gauge bosons. If the former is kinematically open, it dominates the decay; this will be the case if the η ′ mass is indeed given by Eq. (21) with the coefficient close to unity. On the other hand, if the 3π mode is kinematically forbidden due to a (somewhat unexpected) suppression of the coefficient of Eq. (21), then the decay is to 2 standard model gauge bosons. Since the mixing between η ′ and hidden pions is suppressed due to small hidden quark masses, the branching ratios of η ′ are determined purely by the SU (5) flavor symmetry and given by Here, we have used the abbreviations R AB = B η ′ →AB , s W = sin θ W , and c W = cos θ W . If these modes dominate, the production and decay of η ′ also leads to a diphoton signal. 3
D. Heavy Spin-1 Resonances
We finally discuss spin-1 resonances in the G H sector having odd C and P , which we refer to as hidden ρ mesons. We denote the hidden ρ mesons that have the same flavor SU (5) charges as ψ, χ, ϕ, φ, and η ′ by ρ ψ , ρ χ , ρ ϕ , ρ φ , and ω, respectively. These hidden ρ mesons are expected to be as heavy as Λ. They are produced at the LHC and yield interesting signals. For earlier studies of phenomenology of spin-1 resonant production of pairs of hidden particles, see [6,7].
The ρ ψ particle mixes with the standard model gluon and couples to standard model quarks with a coupling constant ∼ √ N g 2 3 /4π. The single production cross section of ρ ψ from quark initial states is of order 1000 -1 fb for the ρ ψ mass of 2 -5 TeV at the 13 TeV LHC. It is also singly produced from a two gluon initial state via higher dimensional operators suppressed by Λ The production cross section from the two gluon initial state is roughly comparable to that from quark initial states. The produced ρ ψ mainly decays into a pair of ψ with a large width, if it is kinematically allowed; the resulting ψ in turn decays into gg, gZ, or gγ, yielding a narrow dijet, Z-jet, or γ-jet resonance. The decay of ρ ψ into ψφ is forbidden by C-parity.
The ρ χ particle is pair produced by ordinary QCD interactions. The pair production cross section is expected to be of O(10 −1 -10 −9 fb) for the ρ χ mass of 2 -5 TeV at the 13 TeV LHC, although there may be deviations from this naive QCD estimate by a factor of a few due to a form factor. The produced ρ χ decays into χψ or χϕ, leading to two quasi-stable (or leptoquark-type) particles and 4 standard model gauge bosons per event. The decay of ρ χ into χφ is forbidden by C-parity.
The particles ρ ϕ and ρ φ mix with the standard model SU (2) L and U (1) Y gauge bosons and couple to standard model quarks and leptons with couplings ∼ √ N g 2 2 /4π and ∼ √ N g 2 1 /4π, respectively. The single production cross sections of ρ ϕ and ρ φ from quark initial states are of order 10 -0.1 fb and 1 -0.01 fb, respectively, for their masses of 2 -5 TeV at the 13 TeV LHC. The coupling between ρ φ and two gluons is absent due to C-parity. The produced ρ ϕ decays mainly into ϕϕ, while the decay into ϕφ is forbidden by C-parity. The ρ φ decays into χχ; the decays into ψψ, ϕϕ, and φφ are forbidden by C-parity.
The ω particle does not mix with the standard model gauge bosons. Furthermore, a coupling between ω and two gluons is forbidden by C-parity. Therefore, the dominant production of ω occurs through a coupling with three gluons In the production process, the initial state is two gluons and the final state is a single ω and a gluon. The production cross section is expected to be of order 10 -0.01 fb for the ω mass of 2 -5 TeV at the 13 TeV LHC. The produced ω decays mainly into three hidden pions and to some extent into χχ (with the branching ratio of a few percent, suppressed by the size of the flavor SU (5) breaking, (m D − m L ) 2 /Λ 2 ); the decays into ψψ, ϕϕ, and φφ are forbidden by C-parity and standard model gauge invariance.
III. PHYSICS AT HIGHER ENERGIES
Some of the physics of hidden hadrons at the TeV scale are affected by theories at higher energies; for example, the lifetime of hidden pion χ is determined by the structure of the theory above Λ. Here we discuss particles that do not decay through G H or standard model gauge dynamics, in particular χ and low-lying hidden baryons. We discuss low-energy operators necessarily to make these particles decay and study their phenomenological implications, including constraints from cosmology. We also discuss possible ultraviolet structures that lead to the required size of the coefficients of these operators.
A. Physics of Hidden Pion χ
The dynamics of G H by itself leaves hidden pion χ absolutely stable. The χ particle, however, can decay into standard model particles through direct interactions between the G H and standard model sectors. Here we study physics of χ decays, focusing on issues such as bounds from cosmology and proton decay as well as implications for collider physics and theories at very high energies. 4 At the level of the standard model fermion bilinears and hidden quark bilinears, the most general operators relevant for χ decays are given by where , and e(1, 1, 1) are the standard model (left-handed Weyl) fermions. Since the operators in the first bracket are matched on to ∂ µ χ, these operators give at the scale Λ. Here, α 1,2,3 are coefficients of order where M * ( Λ) is the scale at which the operators in Eq. (27) are generated. We note that since Ψ L σ µ Ψ † D and Ψ D σ µΨ † L in Eq. (27) correspond to conserved currents in the G H sector, coefficients α 1,2,3 are given by Eq. (29) even if the G H gauge theory is strongly coupled between M * and Λ. With Eqs. (28,29), the decay width of χ is given by where m f is the larger of the final state standard model fermion masses, arising from chirality suppression. As we will see in Section III B, cosmology requires the lifetime of χ to be smaller than ≈ O(10 13 -10 15 sec), assuming the standard thermal history below temperature of about a TeV. If the operators in Eq. (28) are the only ones contributing to χ decays, then this gives where we have used the last expression of Eq. (30) and normalized m f by the top quark mass. On the other hand, loops of hidden quarks involving Eq. (27) induce standard model four-fermion operators suppressed by ∼ (4πM * ) 2 .
If the coefficients of Eq. (27) corresponding to α 1 and one of α 2,3 are nonzero, this induces proton decay which is too rapid for the values of M * satisfying Eq. (31). This implies that if χ decays satisfy the cosmological bound due to Eq. (28), then it must be that α 1 = 0 or α 2 = α 3 = 0. In this case, M * may be small enough that χ decays within the detector (even promptly).
One might think that the upper bound on M * in Eq. (31) implies that unless early universe cosmology is exotic, there must be new physics directly connecting the G H and standard model sectors well below the conventional unification scale, ∼ 10 14 -10 17 GeV. This is, however, not necessarily true. If neither Ψ D,L norΨ D,L is unified in a single multiplet at the high energy scale (as in the case of higher dimensional grand unified theories [28]), we can consider operators leading to χ decays involving standard model operators of dimension 4: with O(1) coefficients at the unification scale M * . (If Ψ D,L orΨ D,L is unified, operators of this form with O(1) coefficients lead to too large mass terms for the hidden quarks at the radiative level.) These operators lead to operators at the scale Λ where ∆ (> 1) is the operator dimension in the conformal phase, which we take to be the same forΨ D Ψ L and Ψ † The large size of the anomalous dimension leading to small ∆ requires that G H is strongly coupled in the conformal phase. On the other hand, if this phase is still the G H gauge theory, operators of the form must have dimensions larger than or equal to 4. Since these dimensions are given by 2∆ in the large N limit (at least for the ones directly related toΨ D Ψ L orΨ L Ψ D but also for others if the conformal phase respects the SU (5) flavor symmetry), an O(1/N ) correction must play the role for making ∆ smaller than 2. This seems to imply that it is difficult to have M * in the large side, M * ≈ 10 17 GeV, which requires ∆ 1.2. Furthermore, strong coupling is also expected to give large anomalous dimensions for operators of the form The dimensions of these operators must also be larger than 4 unless their coefficients are strongly suppressed at M * , since otherwise the theory flows into a different phase above the scale Λ, so that our low energy theory is not the one discussed in Section II. Avoiding the cosmological bound on χ using only operators in Eq. (32) generated at a unification scale requires these conditions to be satisfied.
B. Cosmology
Assuming that the thermal history of the universe is standard below the temperature of about a TeV, the relic abundance of χ is given as follows. Below the temperature T ≃ m χ /30 ≃ 30 -40 GeV, annihilation of χ into gluons becomes ineffective and the χ abundance freezes out. If there were no subsequent annihilation of χ, this would lead to the present-day χ energy density of Ω χ ≈ O(0.01). However, we expect a period of further annihilation at T ≃ Λ QCD ≈ O(100 MeV) when nonperturbative QCD effects become important. At T ≃ Λ QCD , χ particles hadronize by picking up light quarks, which leads to enhanced annihilation of χ. While the details of this late-time annihilation process are not fully understood, we may estimate, based on earlier work [29], that the resulting χ energy density is of order With this estimate, the possibility of absolutely stable χ is excluded, under reasonable assumptions, by heavy isotope searches; see Section V C of Ref. [27] (in which a particle called a xyon corresponds to our χ particle here).
For unstable χ, bounds on the lifetime are given by the requirement that cosmological decay of χ does neither spoil the success of the big bang nucleosynthesis, generate a detectable level of (µ or y) distortions of the cosmic microwave background, or lead to an excessive amount of the diffuse gamma-ray background. These constraints, as compiled in [30] with the update to include [31], are plotted in Fig. 4 in the τ χ -m χ Y χ plane, where τ χ , m χ , and Y χ = ρ χ /s are the lifetime, mass, and entropy yield of χ. The estimate of the χ abundance after the QCD era (corresponding to Eq. (37) if χ were absolutely stable) is indicated by the horizontal band. We thus find that with this estimate, the bound on the χ lifetime comes from observations of the diffuse gamma-ray background and is given by τ χ 10 13 -10 15 sec.
Implications of this bound were discussed in Section III A. In addition to the χ particle, the dynamics of G H leaves low-lying hidden baryons stable. Among all the hidden baryons, we expect the lightest ones to be those in which the constituents form the smallest spin, i.e. 0 or 1/2, and have a vanishing orbital angular momentum [32]. 5 The cosmological fate of the low-lying hidden baryons depends on the strength of operators violating hidden baryon number and operators responsible for χ decays. The former operators lead to decays of hidden baryons either directly to standard model particles or to χ and standard model particles through other (off-shell) hidden baryons. If the timescale for these decays, τ B , is shorter than the χ lifetime, τ χ , then all the low-lying hidden baryons decay in this manner. On the other hand, if this timescale is longer than τ χ , the heavier components of the low-lying hidden baryons decay first into the lightest hidden baryon (and standard model particles) in timescale τ χ ; then the lightest hidden baryon decays to standard model particles (and often χ) in a longer timescale of τ B .
Cosmology of hidden baryons is N dependent, since the spectrum of the low-lying hidden baryons as well as operators responsible for their decays strongly depend on N . There are, however, some statements one can make regardless of the value of N . Suppose m D > m L . (The other case is discussed later.) In this case, among the low-lying hidden baryons the one composed only of Ψ L is the lightest. Then, for even and odd N the lightest hidden baryon has standard model gauge quantum numbers of (1, 1) −N/2 and (1, ) −N/2 , respectively, and hence is electrically charged. The thermal abundance of the lightest hidden baryon is determined by its annihilation into hidden pions and given by n B /s ∼ 10 −16 × (m B /10 TeV). An electrically charged particle with such an abundance is excluded by searches for charged massive stable particles [33]. Thus, the lightest hidden baryon must be unstable in this case.
We now discuss physics of hidden baryon decays in more detail. For illustrative purposes, here we consider the cases of N = 3, 4, 5 for m D > m L . The cases of higher N can be analyzed analogously. N = 3 : The lightest hidden baryon, which is LLL in obvious notation, can decay via interactions of the form which induce mixings between the DLL, DDL, and DDD hidden baryons and the right-handed up-type quarks, left-handed quarks, and right-handed leptons, respectively: Here, we have included possible enhancement of the coefficients by conformal dynamics between the scales M * and M I , with ∆ B (> 3/2) representing the dimension of hidden baryonic operatorsΨΨΨ in the conformal phase. 6 The lightest hidden baryon LLL then decays into χ and standard model fermions with the decay rate Since this state decays hadronically with the abundance of ρ B /s ∼ 10 −12 GeV × (m B /10 TeV) 2 , preserving the success of the big bang nucleosynthesis requires the lifetime to be shorter than O(100 sec) [34], which translates into γ 1,2,3 10 −12 GeV. For M * ∼ 10 14 -10 17 GeV and M I ∼ Λ, this requires ∆ B 3.4 -3.7. The decay of the lightest hidden baryon produces χ. If τ LLL ≡ Γ −1 LLL < 10 −4 sec, the produced χ is subject to QCD enhanced annihilation afterward, so that the analysis in the previous subsection persists. On the other hand, if τ LLL > 10 −4 sec, the abundance of χ is determined by its annihilation just after the decay of the lightest hidden baryon, which is roughly given by where M Pl is the reduced Planck scale. The bounds on τ χ in this case can be read off from Fig. 4. We note that the hidden baryon number violating operators discussed here also induce decays of χ as they violate D and L numbers. Through the mixing between hidden baryons and standard model fermions, χ decays into uq † or qe † with the decay rate This induces a mixing between the DDDL hidden baryon and the standard model Higgs field where ∆ B (> 1) is the dimension of the operator Ψ D Ψ D Ψ D Ψ L in the conformal phase. The lightest hidden baryon then decays into three χ and a standard model Higgs or gauge boson, with the decay rate These operators induce couplings between the DDLLL and DDDLL hidden baryons with standard model particles where ∆ B (> 3/2) is the dimension of hidden baryonic operators ΨΨΨΨΨ in the conformal phase. The lightest hidden baryon decays into a standard model Higgs or gauge boson, a down quark or lepton doublet, and multiple χ, with the decay rate As the decay is hadronic, the lifetime of the LLLLL hidden baryon must be shorter than O(100 sec), which requires ǫ 1,2 10 −15 . For M * ∼ 10 14 -10 17 GeV and M I ∼ Λ, this translates into ∆ B 2.3 -2.5. Cosmology of χ produced by the decay can be analyzed as in the case of N = 3. We now discuss the case with m D < m L . In this case, depending on N and the precise values of m D,L , the lightest hidden baryon may carry standard model color. Since a colored hidden baryon is subject to late-time annihilation around the QCD phase transition era, the bound on its lifetime is weak, τ O(10 13 -10 15 sec). This may allow for the coefficients of the hidden baryon number violating operators to be much smaller than the case discussed above. If χ decay is prompt, this is indeed the case because then all the heavier low-lying hidden baryons decay into the lightest hidden baryon before the QCD annihilation era. On the other hand, if χ is long-lived, decays of the low-lying hidden baryons are all controlled by the hidden baryon number violating operators. 
The bounds on the coefficients are then (essentially) the same as before, since they are determined by the decays of non-colored hidden baryons.
As seen here, cosmology of hidden baryons is controlled by the lowest dimensional hidden baryon number violating operator. Since its dimension depends on the whole hidden quark content, the existence of an extra hidden quark could alter the situation. In particular, if there is an extra hidden quark charged under G H but singlet under G SM , then the decays of hidden baryons can be faster. This case will be discussed in Section IV.
C. Possible Ultraviolet Theories
We have discussed physics of "would-be" stable particles: χ and low-lying hidden baryons. Assuming the standard thermal history of the universe below the TeV scale, we have found that these particles must decay fast enough via non-renormalizable interactions. The required strength of these interactions suggests the existence of ultraviolet physics beyond the minimal G H and standard model sectors not too far above the dynamical scale, Λ. 7 One possibility is that physics responsible for the χ and hidden baryon decays is indeed at a scale M * which is within a few orders of magnitude of Λ. Here we consider the alternative possibility that physics leading to these decays is at very high energies, e.g. at a unification scale M * ≈ O(10^14 – 10^16 GeV). Below, we consider a few examples in which this can be the case.
• Conformal dynamics
As we have discussed before, even if the relevant non-renormalizable interactions are generated far above Λ, such as at the unification scale, conformal dynamics of G H may enhance these operators and induce sufficiently fast decays of the would-be stable particles. Having conformal dynamics requires a sufficient number of (vectorlike) particles charged under G H in addition to Ψ D,L and Ψ̄ D,L , which we may assume to have masses larger than the dynamical scale by a factor of O(1 – 10). In this case, we can consider that G H is in a conformal phase above the mass scale of these particles but deviates from it below this mass scale and finally confines at Λ. With such dynamics, we can understand the proximity of the dynamical scale of G H , Λ ∼ TeV, and the masses of hidden quarks m ∼ 0.1 TeV, if the masses of Ψ D,L , Ψ̄ D,L , and the additional particles originate from a common source and the conformal phase of G H is strongly coupled. The existence of a particle charged under G H and singlet under G SM may also help satisfy the constraints from cosmology; see Section IV. We note that (some of) the additional particles added here may be charged under G SM , which does not destroy gauge coupling unification if they form complete SU (5) multiplets.
• Supersymmetry
If superpartners of the standard model particles and hidden quarks are near the TeV scale, then the decay rates of would-be stable particles can be larger than those in the non-supersymmetric model. (With the superpartners near the TeV scale, N ≤ 4 is required in order for the standard model gauge couplings not to blow up below the unification scale.) Specifically, hidden baryon number can be more easily broken. For N = 3, for example, there are dimension-five superpotential operators that violate it. Introduction of superparticles at the TeV scale also improves gauge coupling unification.
• Superconformal dynamics
It is possible that both conformal dynamics and supersymmetry are at play above some scale not far from Λ. This scenario is very interesting. Conformal dynamics as well as exchange of superpartners may enhance the decay rates of would-be stable particles, and the deviation of G H from the conformal window may be explained by the decoupling of hidden squarks and hidden gauginos at O(1 -10 TeV). Moreover, the anomalous dimensions of some of the operators can be calculated due to supersymmetry.
The ultraviolet structure of this class of theories is tightly constrained if the G H theory is already in a conformal phase at the unification scale M * . Let us first assume that all the superpartners are at m̃ ≈ O(TeV). In this case, N ≥ 4 leads to a Landau pole for the standard model gauge couplings below M * . This is because in the energy interval between m̃ and M * , the G H gauge theory is on a nontrivial fixed point, so that the effect of G H gauge interactions enhances the contribution of the hidden quarks to the running of the standard model gauge couplings (at the level of two loops in the standard model gauge couplings). Even for N = 3, the number of flavors of the G H gauge theory is bounded from below, since the G H gauge coupling at the fixed point is larger for a smaller number of flavors, making the contribution from the hidden quarks larger. We find that the number of flavors must be 7 or 8 so that G H is in the conformal window while at the same time the standard model gauge couplings do not hit a Landau pole below the unification scale. Assuming that additional particles always appear in the form of complete SU (5) multiplets, this predicts the existence of particles that are charged under G H but singlet under G SM . As we will see in Section IV, the existence of such particles helps to evade constraints from cosmology. For heavier superparticle masses, the possible choice of N and the number of flavors, F , increases; for example, for m̃ ≈ O(10 TeV), (N, F ) = (3, 6), (4, 10), and (4, 11) are also allowed.
IV. AN EXTRA HIDDEN LIGHT QUARK
In this section, we discuss a singlet extension of the model described in Section II: we add an extra vectorlike hidden quark, Ψ N and Ψ̄ N , which is in the fundamental representation of G H but is singlet under the standard model gauge group. As mentioned in Section III B, this makes it easier to avoid constraints from cosmology. It also leads to two diphoton resonances in the spectrum of hidden pions, which has interesting phenomenological implications (one of which was mentioned in footnote 3). The full matter content of the model is summarized in Table II. This charge assignment was also considered in [6]. We here take the masses of the hidden quarks to be real and positive without loss of generality. We assume m D,L,N ≲ Λ. The spectrum below Λ consists of hidden pions, where χ, ξ, and λ are complex while the others are real. Note that there are two hidden pions which are singlet under the standard model gauge group, G SM — one is charged under SU (5) ⊃ G SM while the other is singlet, which we refer to as φ and η, respectively. The masses of the hidden pions are given in terms of c and f, the hidden quark bilinear condensates and the decay constant, respectively. The mixing between φ and η vanishes for m D = m L due to the SU (5) symmetry. Except for this special case, the mass eigenstates φ + and φ − are determined by Eq. (58). Here, the mixing angle θ and the mass eigenvalues m + and m − (m²+ > m²−) are related to m D,L,N . The dimension-five couplings of the hidden pions with the standard model gauge fields are determined by chiral anomalies. The couplings of ψ, ϕ, and φ are given by Eq. (10), while those of η are given by Eq. (22) with the replacement η ′ → η/ √ 6. The couplings of the mass eigenstates φ ± can be read off from these expressions and the mixing in Eq. (59).

Here, the value of the decay constant f is determined so that σ(pp → φ − → γγ) = 6 fb is obtained at √s = 13 TeV (right). In the gray-shaded regions of θ, no choice of m D,L,N may reproduce the required φ ± masses.
A. Diphoton (Diboson) Signals and Other Phenomenology
A distinct feature of this model is that there are two standard model singlet hidden pions, φ + and φ − , which are produced via gluon fusion and decay into a pair of standard model gauge bosons, including a diphoton. If kinematically allowed, they may also decay into three hidden pions; with parity violation by θ H ≠ 0, they also decay into two hidden pions. Here we assume that the decay channels into hidden pions are suppressed kinematically and/or by θ H ≃ 0. There are several interesting possibilities to consider in terms of the phenomenology of these particles:
• 750 GeV and 1.6 TeV diphoton excesses
We may identify the two singlets φ − and φ + as the origins of, respectively, the 750 GeV excess and the slight "excess" at ≃ 1.6 TeV seen in the ATLAS diphoton data [1]. In the left panel of the corresponding figure, the decay constant f is required to be small. This is because at θ/π ∼ 0.35 the dimension-five coupling between φ − and gluons vanishes, while at θ/π ∼ 0.65 that between φ − and photons vanishes, so formally f → 0 is required to obtain σ(pp → φ − → γγ) = 6 fb at these values of θ. Note that for m D,L,N ≳ Λ ≃ 4πf / √ N , the results obtained here using chiral perturbation theory expressions, Eqs. (53 – 58), cannot be fully trusted, although we may still regard them as giving qualitatively correct estimates.
In Fig. 6, we present predictions for the production cross section of φ − times the branching ratios into two electroweak gauge bosons at √ s = 8 TeV (left) and 13 TeV (right). In the blue-shaded regions, chiral perturbation theory gives only qualitative estimates because of m D,L,N Λ. In the left panel, we also show the upper bound on the cross section for each mode [3,19] by the dotted line using the same color as the corresponding prediction. For any θ/π ∈ [0, 0.03) ∪ (0.4, 0.55) ∪ (0.95, 1), in which chiral perturbation theory can be trusted, the bounds are all satisfied. Predictions for the production cross section of the other neutral hidden pion, φ + , times branching ratios into two electroweak gauge bosons at √ s = 13 TeV are presented in Fig. 7. For θ ∼ π/2, the production cross section of diphoton is O(fb), which is consistent with the "excess" at ≃ 1.6 TeV. We also show the prediction for the masses of the hidden pions in Fig. 8. For θ ∼ π/2, the mass of the χ particle can reach ∼ 1.5 TeV. This helps to evade the experimental bound on this particle (see Section II B).
• (Apparent) wide width of the 750 GeV excess Alternatively, we may consider that both φ − and φ + have masses around 750 GeV with a small mass difference of 10s of GeV. In this case, the two resonances are observed as an apparent wide resonance [36], which is mildly preferred by the ATLAS diphoton data. Such mass degeneracy occurs if m D ≃ m L ≃ m N . The required value of the decay constant f is larger than that in Eq. (13): because the two resonances contribute to the diphoton rate. With Eqs. (53 -58, 63) and m D ≃ m L ≃ m N , the masses of the hidden pions are determined for given N , which are shown in Fig. 9. Since f is larger, the hidden pion masses are larger than in the model without the singlet hidden quark. In particular, m χ easily exceeds 1 TeV, helping to evade the constraint.
• 2 TeV diboson excess
Yet another possibility is that φ − and φ + have masses ≃ 750 GeV and ≃ 2 TeV, with the latter responsible for the diboson excess reported by the ATLAS collaboration [10] through the W W and ZZ decay modes [9]. In Fig. 10, we show the production cross section of W W and ZZ through φ + . The meaning of the shades is the same as in Fig. 6. We find that the cross section of a few fb, which is needed to explain the excess, may be achieved for θ/π ∼ 0.6, although chiral perturbation theory can give only qualitative estimates in this region.
We note that in addition to φ − and φ + , the model also has η ′ associated with the anomalous U (1) axial flavor symmetry. If its mass is somewhat lower than the naive expectation and if the G H sector respects CP , then we have three scalar resonances decaying into two standard model gauge bosons in an interesting mass range. In this case, we may consider even richer possibilities; for example, φ ± may be responsible for the 750 GeV excess with an apparent wide width, while η ′ may be responsible for the 2 TeV diboson excess. Collider phenomenology of ψ, χ, and ϕ, is essentially the same as that in the model without the singlet, except that if there are couplings of the form h † Ψ LΨN and hΨ NΨL , giving a (small) mixing between λ and the Higgs boson, then ϕ may decay into two Higgs bosons through mixing involving λ. The ξ particle is pair produced by ordinary QCD processes. Its signals depend on the lifetime, and are similar to those of the charge ±1/3 component of χ. The λ particle is pair produced by electroweak processes. Signals again depend on the lifetime. Suppose it is stable at collider timescales. The electrically charged component of λ is heavier than the neutral component by about 360 MeV. The charged component thus decays into the neutral component and a charged (standard model) pion with a decay length of ∼ 10 mm, which might be observed as a disappearing track. Since the decay length is short, however, the current LHC data do not constrain the mass of λ. If λ decays promptly, then it can be observed as a resonance. Since λ has the same standard model gauge quantum numbers as the standard model Higgs doublet, possible decay modes of λ resemble those of heavy Higgs bosons in two Higgs doublet models.
B. Hidden Quasi-Conserved Quantities and Cosmology
In the model with a singlet hidden quark, approximate symmetries in the G H sector can be more easily broken than in the model without it. For this purpose, the mass of the singlet hidden quark need not be smaller than Λ, so the following discussion applies even if the mass of the singlet hidden quark is larger than the hidden dynamical scale. "L number" can be broken by a renormalizable term which mixes λ with the standard model Higgs field. 8 Unless this mixing is significantly suppressed, λ decays promptly into a pair of standard model particles, with the decay channels similar to heavy Higgs bosons in two Higgs doublet models. Furthermore, if m χ > m ξ , χ decays into ξ and a standard model Higgs or gauge boson through the emission of (off-shell) λ. "D number" is broken by operators These operators induce decay of ξ into a pair of standard model fermions. They also induce decay of χ into a pair of standard model fermions and λ (or a standard model Higgs or gauge boson through the mixing of Eq. (64)). Since the operator in the first bracket of Eq. (65) is scalar, it may be significantly enhanced by conformal dynamics of G H . Note that the dimension of the operators in the second bracket of Eq. (65) is smaller by one than that of Eq. (32). Thus, if the G H sector is in a conformal phase between M * and Λ, then the required size of anomalous dimensions of ΨΨ to achieve the same decay rate of χ is smaller by one than in the model without the singlet hidden quark. Hidden baryon number can also be violated by non-renormalizable interactions involving the singlet hidden quark. For N = 3 and N = 4, the lowest dimension operators violating hidden baryon number still have the same dimensions as those in Eqs. (39) and (44). For N = 5, however, the following operators exist: in which the dimension of the standard model operators is smaller by one than that in Eq. (47). Here, to make the expression compact, we have combined the hidden quarks and standard model fermions into SU (5) multiplets: These operators induce mixings between hidden baryons and standard model fermions. Because of the lower dimensionality, the required size of anomalous dimensions of the hidden baryonic operators (in the case that G H is in a conformal phase between M * and Λ) is smaller by one than in the model without the singlet hidden quark.
With superparticles around the TeV scale, approximate symmetries in the G H sector are even more easily broken. In particular, D and L numbers are broken by superpotential operators involving H u and H d , the up-type and down-type Higgs superfields, respectively. Since these are relatively low-dimensional operators, particles whose stability would be ensured by D and L numbers, i.e. χ, ξ and λ, decay with cosmologically short lifetimes. For example, with the dimension-five operators suppressed by the scale M * ≲ 10^17 GeV, these particles decay with a lifetime shorter than O(100 sec) even if G H is not in a conformal phase between M * and the TeV scale. With G H not being in a conformal phase, the coefficients of the first two operators need not be suppressed much: coefficients of O(0.1) or smaller are enough to make the mixing between λ and Higgs fields sufficiently small to preserve their respective phenomenology. Hidden baryon number can also be easily broken. For N = 3, there exist dimension-five superpotential operators violating hidden baryon number. With the suppression scale M * ≲ 10^17 GeV, the lifetime of hidden baryons is shorter than O(100 sec) even if G H is not in a conformal phase between M * and the TeV scale. Having conformal dynamics even allows for the decay before the onset of the big bang nucleosynthesis, although the coefficients of the first two operators in Eq. (67) need to be appropriately suppressed in this case.
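As a rough cross-check of these lifetime statements, the sketch below applies naive dimensional analysis, taking Γ ∼ m³/(8π M*²) for a TeV-scale particle decaying through a dimension-five operator; the 8π factor and the sample masses are illustrative assumptions rather than the paper's exact expressions.

```python
import math

HBAR_GEV_S = 6.58e-25  # hbar in GeV*s

def lifetime_dim5(m_gev, m_star_gev):
    """Lifetime in seconds for a naive dimension-five decay rate
    Gamma ~ m^3 / (8*pi*M*^2)."""
    gamma = m_gev**3 / (8.0 * math.pi * m_star_gev**2)  # in GeV
    return HBAR_GEV_S / gamma

for m in (1e3, 1e4):  # 1 TeV and 10 TeV
    print(f"m = {m:.0e} GeV, M* = 1e17 GeV -> tau ~ {lifetime_dim5(m, 1e17):.1e} s")
# Gives tau of order 1e2 s for a 1 TeV state and ~0.2 s for a 10 TeV state,
# i.e. decays completing around or before the O(100 sec) big bang
# nucleosynthesis bound quoted in the text.
```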
V. DISCUSSION
In this paper we have studied a simple model in which the recently reported diphoton excess arises from a composite pseudo Nambu-Goldstone boson, produced by gluon fusion and decaying into two photons. In the minimal version, the model only has a new hidden gauge group G H at the TeV scale with a hidden quark in the vectorlike bifundamental representation of G H and SU (5) ⊃ G SM . We have found that the model predicts hidden pions ψ(Adj, 1, 0), χ( , , −5/6), and ϕ(1, Adj, 0), in addition to the diphoton resonance φ(1, 1, 0), and that the masses of ψ and χ are smaller than 1.6 TeV and 1.2 TeV, respectively. The existence of these particles, therefore, can be probed at the LHC in the near future. We have studied physics of would-be stable particles-χ and low-lying hidden baryons-in detail, including constraints from cosmology. We have discussed possible theoretical structures above the TeV scale, including conformal dynamics and supersymmetry, and their phenomenological implications.
In the extended version of the model, there is an additional hidden quark that is singlet under the standard model gauge group and has a mass smaller than the hidden dynamical scale. This yields two hidden pions that can be produced by gluon fusion and decay into standard model dibosons, including a diphoton. We have discussed several scenarios in which these and other resonances can be used to explain various excesses seen in the LHC data. The existence of the singlet hidden quark also helps to write operators inducing decays of would-be stable particles, such as χ( , , −5/6), ξ( , 1, −1/3), λ(1, , 1/2) hidden mesons and low-lying hidden baryons. In particular, if the theory becomes supersymmetric near the TeV scale, the scale suppressing these higher dimensional operators can be as high as the unification scale M * ≃ 10 16 GeV while avoiding all the cosmological bounds.
While we have presented it as a model explaining the 750 GeV diphoton excess, the model discussed here may also be used to explain other diphoton/diboson excesses that might be seen in future data at the LHC or other future colliders. 9 In particular, our studies of would-be stable particles and the structure of theories at higher energies can be applied in much wider contexts. We hope that some (if not all) of the analyses in this paper are useful in understanding future data from experiments.

CP violating effects in the G H sector are encoded in the phases of the hidden quark masses, which we denote with the capital letters, M D and M L , to remind ourselves that they are in general complex.
We first analyze the effect on the QCD θ parameter. For this purpose, we consider a nonlinearly realized field written in terms of the generators t A of SU (5), normalized such that tr[t A t B ] = δ AB /2, and the canonically normalized hidden pion fields ξ A (x). The relevant part of the Lagrangian consists of a kinetic term, a term arising from the hidden quark masses, and a term determined by chiral anomalies in which we have kept only the gluon field, where t a (a = 1, · · · , 8) are the generators of SU (3) ⊂ SU (5) corresponding to the standard model color. In our field basis, c > 0. The Lagrangian of Eq. (A2) induces a vacuum expectation value of the ξ 24 field, which corresponds to the hidden pion φ: ξ 24 = ⟨ξ 24 ⟩ + φ. In the relevant terms of Eq. (A2) we have restored θ H using the fact that only the quantities invariant under the phase rotation of Ψ L Ψ̄ L can appear here. This expression, therefore, applies in any field basis. In the absence of a low-energy adjustment mechanism such as a QCD axion (and barring accidental cancellation), this contribution must be tiny, |∆θ| ≲ 10^−9. Aside from the possibility of fine-tuning between the first and second terms in Eq. (A9), this requires |2m D θ D − 2m L θ L + m L θ H | / (2m D + 3m L ) ≲ 10^−9 × (1/N).
For m D,L ≠ 0, this generically forces all the physical phases to be tiny, rendering it unlikely that the hidden η ′ decays into two hidden pions with a significant rate. We stress that this constraint disappears if there is a QCD axion.
In the presence of a QCD axion, the contribution to θ is harmless. We, however, still expect that the presence of CP violation in the G H sector generates a dimension-six operator suppressed by a scale κ. This operator induces the neutron electric dipole moment of order d n ∼ 10^−26 e cm × (100 TeV/κ)^2 [37], so the scale κ must satisfy κ ≳ 100 TeV. In the present model, κ is determined by m, which collectively denotes m D,L , and a dimensionless function h(x; m D , m L ). In the relevant parameter region of m ≈ O(100 GeV), Λ ∼ a few TeV, N ∼ a few, we find κ ∼ 100 TeV, so the contribution can be sufficiently small and yet may be seen in future experiments. The G H sector CP violation also contributes generally to the quark/lepton electric dipole and chromoelectric dipole operators, L ∼ i ψ L σ µν ψ R F µν + h.c. and i ψ L σ µν t a ψ R G aµν + h.c., but these contributions are suppressed by an extra loop factor, so they are not very constraining. Overall, if there is a QCD axion, we may consider the possibility of significant CP violation in the G H sector, making the hidden η ′ decay dominantly into two hidden pions.
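To see what the κ ≳ 100 TeV condition means numerically, the snippet below evaluates the quoted scaling d_n ∼ 10^−26 e cm × (100 TeV/κ)²; the use of the ~10^−26 e cm level as the reference experimental sensitivity is an assumption made here for illustration, not a value given in the text.

```python
def neutron_edm(kappa_tev):
    """Quoted scaling d_n ~ 1e-26 e*cm x (100 TeV / kappa)^2."""
    return 1e-26 * (100.0 / kappa_tev) ** 2  # in units of e*cm

for kappa in (50.0, 100.0, 300.0):
    print(f"kappa = {kappa:5.0f} TeV -> d_n ~ {neutron_edm(kappa):.1e} e*cm")
# kappa near 100 TeV sits at the ~1e-26 e*cm level: small enough to be
# allowed, yet potentially within reach of future EDM searches, as stated.
```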
A search for planetary eclipses of white dwarfs in the Pan-STARRS1 medium-deep fields
We present a search for eclipses of $\sim$1700 white dwarfs in the Pan-STARRS1 medium-deep fields. Candidate eclipse events are selected by identifying low outliers in over 4.3 million light curve measurements. We find no short-duration eclipses consistent with being caused by a planetary size companion. This large dataset enables us to place strong constraints on the close-in planet occurrence rates around white dwarfs for planets as small as 2 R$_\oplus$. Our results indicate that gas giant planets orbiting just outside the Roche limit are rare, occurring around less than 0.5% of white dwarfs. Habitable-zone super-Earths and hot super-Earths are less abundant than similar classes of planets around main-sequence stars. These constraints give important insight into the ultimate fate of the large population of exoplanets orbiting main sequence stars.
INTRODUCTION
Searches for planets outside our solar system have focused primarily on hydrogen-burning main-sequence stars similar to our Sun (e.g. Bakos et al. 2004; Borucki et al. 2010). As we discovered that planets are nearly ubiquitous in our Solar neighborhood and in the Kepler field (Petigura et al. 2013), searches around M-dwarfs gained popularity (e.g. Nutzman & Charbonneau 2008). Studies of M-dwarfs enjoy a boost in sensitivity to small planets because transits block a larger fraction of the stellar disk and induce a larger amplitude reflex motion of the star around the barycenter due to their low mass. Some studies have also searched for and explored the planet occurrence rates as a function of stellar mass from M-dwarfs to intermediate-mass subgiants (Johnson et al. 2007). Microlensing campaigns survey stars of many types and are sensitive to planets around all massive hosts regardless of their stage in stellar evolution (Gaudi 2012) but follow-up characterization of these planets is impossible. However, there have been few dedicated searches for planets around white dwarfs (WDs).
Many studies including Mullally (2007), Farihi et al. (2008), and Kilic et al. (2009) searched for infrared excess indicative of planetary companions to WDs. They detected several brown dwarf companions (Zuckerman & Becklin 1992; Farihi et al. 2005; Steele et al. 2009) but no planetary-mass objects. Mullally (2007) also searched for companions using pulsations of WDs to look for periodic deviations in the pulse arrival times caused by an orbiting companion. They find evidence of a 2.4 M J companion in a 4.6 year orbit. Hogan et al. (2009) and Debes et al. (2005) conducted high contrast imaging surveys of nearby WDs to search for low-mass companions at large separations. Burleigh et al. (2006) found a brown dwarf in the near-IR spectrum of WD 0137-349 with an orbital period of only 2 hours. This object may have survived the common-envelope phase or migrated from larger orbital distances after the formation of the WD. Faedi et al.
(2011) conduct a transit search for a sample of 174 WDs using SuperWASP data (Pollacco et al. 2006) and find no eclipsing companions but can put only weak constraints on the planet occurrence rates due to their small sample size (<10% for Jupiter-size planets). Drake et al. (2010) search for eclipses of ∼12,000 color-selected WDs using Catalina Sky Survey photometry and Sloan Digital Sky Survey spectroscopy. They find 20 eclipsing systems and three of them have radii consistent with substellar objects and no detectable flux in the spectra.
WDs have radii only ∼1% that of the Sun, or about the same size as the Earth. This implies that an Earth-sized object transiting the WD with an impact parameter of 1.0 would cause a complete occultation. Although these occultations are short-duration, they can be easily detected from small ground-based telescopes with short exposure times and relatively low photometric precision (Drake et al. 2010). In addition, the most common WDs are old and cool with surface temperatures of ∼5000 K. Their small radii and low surface temperatures imply that their luminosity is low, with typical values of ∼10^−4 L ⊙ , and the habitable zone is close-in (a ∼ 0.01 AU; Agol 2011), giving rise to significant transit probabilities. This makes Earth-size planets orbiting in the habitable zones of old, cool WDs relatively easy to detect via the transit method.
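To make the "significant transit probabilities" quantitative, the sketch below uses the standard scalings a_HZ ≈ √(L/L⊙) AU and P_transit ≈ (R_WD + R_p)/a; both scalings and the Earth-sized planet used for comparison are assumptions introduced here for illustration.

```python
import math

R_SUN_AU = 0.00465     # solar radius in AU
R_EARTH_AU = 4.26e-5   # Earth radius in AU

lum = 1e-4                       # WD luminosity in solar units (from the text)
a_hz = math.sqrt(lum)            # ~0.01 AU habitable-zone distance
r_wd = 0.01 * R_SUN_AU           # WD radius of ~1% the Sun's
p_transit = (r_wd + R_EARTH_AU) / a_hz

print(f"a_HZ ~ {a_hz:.3f} AU, transit probability ~ {100 * p_transit:.1f}%")
# ~1% per system -- roughly twice the ~0.5% of an Earth analog transiting a
# Sun-like star, with the added benefit that the eclipse is essentially
# total rather than ~0.01% deep.
```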
Most main-sequence stars, including our Sun, will eventually end their lives slowly cooling as WDs. Since approximately 50% of main sequence stars host at least one planet (Mayor et al. 2011), it is interesting to consider their fate as the star evolves into a WD. It is unlikely that any planets inside ∼1 AU would survive engulfment by their host stars as they expand onto the red giant branch, but it is unclear what becomes of the planetary debris. Since WDs quietly cool for the age of the universe, it is conceivable that new planets could form out of the debris of a previous generation of planets. Migration of planets from outside of 1 AU is also plausible, but little theoretical work has been done on the formation or migration of planets hosted by WDs. Several studies have identified pollution by heavy elements on the surfaces of WDs (Zuckerman et al. 2010) and IR excess indicative of a debris disk (Debes et al. 2011). Extensive work has been done to identify the chemical composition of this pollution. Silicates and glasses were detected in the atmospheres of six WDs by Jura et al. (2009) and interpreted as signs of accretion of asteroid-like bodies onto the WD. A detailed study by Xu et al. (2014) using data from the Keck and Hubble Space Telescopes showed strong evidence that the composition of metals in the atmospheres of WDs G29-38 and GD 133 closely mirrors the composition of the bulk Earth, furthering the idea that close-in terrestrial planets orbit, and eventually accrete onto, WDs.
We present a systematic search for eclipses of WDs by planetary-size objects in the Pan-STARRS1 mediumdeep fields (Tonry et al. 2012). We use a combination of astrometric and photometric selection techniques to identify 3179 WDs with a range of ages and temperatures. Each WD was observed on 1000-3000 epochs during the past 5 years for a total of 4.3 million measurements. Although we do not detect any substellar companions, this large number of observations allows us to place tight constraints on the occurrence rates of planets orbiting WDs.
WD sample
We analyze a total of 3179 WD candidates spread across the 10 medium-deep fields spanning 70 square degrees on the sky. Each field is observed on 1000-3000 epochs with four to eight consecutive 240 s exposures per night. Our sample of WDs is segregated into two categories. We identify 661 targets using their proper motions as described in Tonry et al. (2012) (astrometric sample hereafter). These objects have a high probability of being bona fide WDs and a very low contamination rate.
The remaining 2518 WDs were selected based on their photometric colors (color-selected sample hereafter). We select the locus of hot, blue stars from the (g P1 − r P1 ) vs. (r P1 − i P1 ) color plane shown in Figure 2, together with the requirement i P1 < 22. This sample is restricted to hot WDs due to the requirement of blue colors and is likely contaminated by other hot stars.
To quantify the contamination rate of the color-selected sample we created a Besancon galactic simulation of the medium-deep fields (Robin et al. 2003). When we make the same color cuts we find that 42% of the stars are bona fide WDs according to the model. The stars that are within this locus but not WDs are mostly distant A and B-type subdwarfs in the halo of the galaxy. Closer F-type subdwarfs would also fall into the locus, but are mostly far too bright to be included in our sample. We also find that the contamination rate is highly dependent on apparent magnitude, with the fainter stars being much more likely to be WDs. We assume a 58% contamination rate for our color-selected sample for all further analysis. This reduces our total number of WDs to 1718.
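The effective sample size quoted here follows directly from the assumed purity of the color-selected sample:

```python
# Effective number of bona fide WDs after applying the assumed 58%
# contamination (42% purity) to the color-selected sample.
n_astrometric = 661
n_color_selected = 2518
purity = 0.42

n_effective = n_astrometric + purity * n_color_selected
print(n_effective)  # 1718.56, i.e. the ~1718 WDs quoted in the text
```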
Control sample
Our control sample consists of stars with similar magnitudes and colors to the astrometrically-selected WDs but with undetectable proper motions. These should be relatively hot stars with radii much larger than WDs, around which we would not expect to see the very short-duration eclipses indicative of a planet occulting a WD. We can compare the number of potential eclipses found in the WD sample to the number that we find in the control sample to better understand the frequency of eclipse-like events caused by non-astrophysical effects.
We select the control sample by binning the astrometric sample of WDs in 2-dimensional color bins of r P1 vs. (r P1 − i P1 ). For each bin that contains at least one WD we select two times the number of WDs in that bin from a sample of all stellar detections derived from deep stacks of the medium-deep fields, excluding stars that are already part of the WD samples. Figure 1 shows our control sample and astrometric WD sample in the r P1 vs. (r P1 − i P1 ) color plane. If fewer than three field stars are available in a particular bin we select all available stars. This produces a total of 1296 stars for the control sample which is later trimmed down to 1288 by removing RR-Lyrae, Delta-Scuti and other variable stars (see § 2.4).

Fig. 1.—Astrometrically-selected WDs (blue) and control sample stars (red). The small black points are all detections from the deep stacks that were not selected for either the control or WD samples.
Light curves
Light curves are extracted for each WD and control sample star by directly analyzing the first-level Pan-STARRS1 photometry product (SMF files). These SMF files consist of the raw photometry extracted from the calibrated images before a zero-point or precise world coordinate system (WCS) is established. Each camera exposure corresponds to a single SMF file. For each SMF file we first find the WCS solution in order to associate pixel locations with sky positions. We then associate the per-image detections with detections in deep stacks for each field and extract the PSF-fitted photometry to obtain raw instrumental magnitudes. We fit for the photometric zero-point using the technique described in (Schlafly et al. 2012). The instrumental magnitudes for all detections within 5 arcminutes of the target are also extracted and recorded along with the target instrumental magnitudes. All epochs for which a target could not be matched to a detection in the SMF file are carefully recorded and the neighboring star photometry is still extracted if available. This ensures that we are sensitive to large decreases in flux that may cause the target to fall below the detection threshold in a particular image and in some cases we can use the photometric statistics of the neighboring stars to explain the non-detection. We also record the pixel locations relative to the entire CCD array and particular chip for each epoch.
Eclipse detection
Since eclipses are rare and of extremely short duration, traditional periodic search algorithms such as the box least-squares periodogram (BLS, Kovács et al. 2002) fail to recover such signals. BLS excels at detecting signals in the regime of many transits with low single-event S/N, but planetary eclipses of our target stars would produce very infrequent, but very deep, high S/N eclipses. Instead we employ an extremely simple eclipse detection technique. We look for low outliers in the light curves (dropouts) that are either complete non-detections or show a deficit of flux relative to the median flux level (∆F ) that is greater than five times the measurement uncertainty (∆F/σ lc ≥ 5). Figure 3 shows the distribution of ∆F and ∆F/σ lc for all light curves.
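The dropout criterion described above can be written compactly; the array names below are illustrative placeholders rather than the actual pipeline variables.

```python
import numpy as np

def find_dropouts(flux, flux_err, threshold=5.0):
    """Indices of epochs whose flux deficit relative to the median light-curve
    level exceeds `threshold` times the per-point uncertainty
    (Delta F / sigma_lc >= 5)."""
    delta_f = np.median(flux) - flux        # positive values correspond to dimming
    return np.where(delta_f / flux_err >= threshold)[0]
```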
The raw light curves are heavily contaminated with non-detections and large flux drops that could be indicative of an eclipse event or a variety of non-astrophysical scenarios. For every dropout we first check that the star did not fall off of, or too near, the edge of a chip. We initially noticed that the dropout events were concentrated around the edges of the chips. This is likely caused by the PSF fit failing due to a strong gradient in the background region near the edges of the chips. This effect is worse at the corners of the chips near the readout electronics. For these reasons we remove all light curve measurements that fall within 10 pixels (2.5″) of an edge or within 100 pixels (25″) of a corner. We consider this filter unbiased with respect to eclipses because there is no reason to expect that real eclipses would preferentially occur when the stars fall near the edge of a chip. Measurements with reported positions that fall between chip gaps or off the array are also excluded at this stage. All non-detections are removed with these chip location-based filters.
If the photometry of the neighboring stars also shows a large decrease in flux at the same time as the target dropout, clouds or poor seeing is likely to blame. We exclude all measurements for which the median magnitude of the neighboring stars drops by more than 0.5 magnitudes or the standard deviation of the neighbor magnitudes is greater than one. We also de-correlate the target relative flux measurements against the median ∆F of the neighboring stars to reduce the effect of spatially-dependent extinction. Now that we have removed most of the egregious outliers from the light curve we re-define the measurement errors. We sum in quadrature the reported measurement uncertainties with the median absolute deviation (MAD) of the full light curve in each filter. This process always inflates the errors relative to the original measurement uncertainties and effectively removes many remaining candidate dropouts by decreasing the value of ∆F/σ lc .
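A minimal sketch of this error re-scaling and decorrelation step, for a single filter; the quadrature MAD inflation follows the description above, while the linear-fit decorrelation against the neighbors' median flux offset is one simple way the step could be implemented and is an assumption here.

```python
import numpy as np

def inflate_errors(flux, flux_err):
    """Add the light curve's median absolute deviation in quadrature to the
    reported uncertainties (done per filter in the text)."""
    mad = np.median(np.abs(flux - np.median(flux)))
    return np.sqrt(flux_err**2 + mad**2)

def decorrelate(delta_f_target, delta_f_neighbors):
    """Remove the component of the target's flux offsets that tracks the
    median offset of neighboring stars (simple linear fit assumed)."""
    slope, intercept = np.polyfit(delta_f_neighbors, delta_f_target, 1)
    return delta_f_target - (slope * delta_f_neighbors + intercept)
```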
At this stage we use the VARTOOLS package to create BLS and analysis-of-variance (AoV) periodograms (Hartman et al. 2008;Kovács et al. 2002;Schwarzenberg-Czerny 1989;Devor 2005) for all WD and control sample stars. We visually inspect these periodograms and the light curves phase-folded to the ephemeris that corresponds to the highest peak in each periodogram. Obvious periodic variable stars are removed from further analysis. Thirty-three RR-Lyrae and Delta-Scuti stars, one dwarf nova (IY Uma) and three variables of unknown type are identified and removed at this stage.
For the remaining dropouts we check their CCD locations against the regions of the array that are consistently masked by the Pan-STARRS1 Image Processing Pipeline (IPP). After applying all of the photometry-based tests we are left with 11570 potential dropout events out of a total of 4.3 million detections. 2570 of the dropout candidates are from the control sample and the remaining 9000 are from the merged WD samples. This photometric filtering process for a single representative case is illustrated in Figure 4, and the total number of detections and non-detections removed at each stage in the filtering process are listed in Table 1.

Fig. 4.—Illustration of the filtering process for a light curve of a typical g'=18.9 WD in medium-deep field 3. The total number of measurements and the number of dropouts (∆F/σ lc ≥ 5) are shown in the lower-right of each panel. Dropout candidates are plotted as triangles. a) Raw light curve before any filtering. Error bars are equivalent to the reported measurement uncertainties. Notice the large number (1977) of dropout candidates. b) Light curve after applying the chip location-based filters described in §2.4. c) Light curve after removing measurements in which neighboring stars show large deviations from the median flux level or large scatter. d) Light curve after de-correlating against the neighboring star relative flux and re-scaling the measurement uncertainties by adding the reported uncertainties in quadrature with the standard deviation of the light curve in each filter. This tends to inflate the error bars and pushes the vast majority of dropout events below the 5-sigma cutoff. e) Light curve after the final level of photometry-based filtering. In this stage we compare the CCD pixel positions of the stars during dropout events with known masked regions of the CCD array. Two dropout events remain after all photometry-based filters. Postage stamp images are downloaded and visually inspected for the remaining dropout events.
We download the corresponding postage stamp images for any dropouts that make it through all of these light curve-based tests for additional screening. In addition to the postage stamp corresponding to the dropout we also download a deep stack around the target and the image that corresponds to the light curve measurement that is closest to the median value for that filter. We apply a few more automated filters before visually inspecting the remaining candidates. The images are automatically inspected for masking or CCD defects around the target that produce not-a-number (nan) values, very poor seeing, or clouds as indicated by a low zeropoint magnitude. We also perform aperture photometry on the three images and correct to an absolute apparent magnitude using the zeropoint magnitude provided in the image headers. Our photometry acts as a check that the magnitude value reported by the IPP is in rough agreement with simple aperture photometry.
As a final step we use the HOTPANTS implementation of the ISIS image subtraction software (Alard 2000) to produce a difference image using the deep stack as a template. We convolve the template to match the PSF and zero point of the dropout candidate image and subtract the convolved template from the candidate postage stamp. This difference image was used to aid the visual inspection of the 133 dropout events that could not be explained by any of the photometry or image-based filters. Figure 5 shows an example dropout candidate image and the image-differencing processes used for visual inspection. We find no eclipse with a duration compatible with an eclipse by a substellar object in any WD or control sample light curve.
Theoretical eclipse probabilities
In order to assess the likelihood that an occultation would have occurred during our observing window, we calculate the probability of eclipse as a function of eclipse depth and then apply the noise properties and eclipse detection techniques that we used in our search. This tells us the number of eclipses we should have been able to detect as a function of planet radius, orbital semi-major axis, and the occurrence rate of planets around WDs (η).
b(t) ≈ (a/R WD ) [ sin²(Ω + ωt + α₀) + cos²θ cos²(Ω + ωt + α₀) ]^(1/2)   (2)

Equation 2 for b(t) gives the sky-projected center-to-center distance between the star and planet, in units of the WD radius, as a function of time (t). Ω is the longitude of the ascending node of the planet's orbit, a is the semi-major axis of the orbit, ω is the angular frequency of the orbit, θ is the inclination of the planet's orbit, and α₀ is the phase of inferior conjunction. Minimizing Equation 2 leads to the smallest sky-projected separation over the orbit, b₀ = (a/R WD ) cos θ.
In order to determine the likelihood that a particular ∆F could be caused by an eclipse of the WD we calculate the probability of eclipses as a function of eclipse depth. First, we make some assumptions for physical parameters that are mostly constant within the parameter region of interest. We assume that M W D = 0.6 M , R W D = 0.01 R , all theoretical companions are on circular orbits, no limb darkening, and 240 s as the integration time for every exposure. The probability of measuring an eclipse depth < ∆F (p, b) > at time t averaged over an exposure time of ∆t is Eclipses will only occur if |b 0 | < 1 + p, therefore the probability that a randomly-oriented, circular orbit will eclipse is Although systems with |b 0 | < 1 + p will eclipse at some time during the orbit, the fraction of orbital phase covered during eclipse is small. The probability that any part of an eclipse will overlap with the integration time of our survey is where P is the orbital period and T dur is the eclipse duration and E is the integration time.
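The two factors above combine into a per-exposure probability that an eclipse is in progress; the sketch below assumes the standard geometric eclipse probability (1 + p)·R_WD/a for a circular orbit and uses the central (b = 0) eclipse duration as a representative T_dur, both of which are assumptions where the text's expressions are not reproduced.

```python
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
R_SUN, M_SUN, AU = 6.957e8, 1.989e30, 1.496e11

def per_exposure_eclipse_probability(a_au, p, exposure_s, m_wd=0.6, r_wd=0.01):
    """Probability that a single exposure of length E overlaps an eclipse:
    (geometric eclipse probability) x (T_dur + E) / P."""
    a = a_au * AU
    r_star = r_wd * R_SUN
    period = 2.0 * math.pi * math.sqrt(a**3 / (G * m_wd * M_SUN))
    p_geo = (1.0 + p) * r_star / a                                   # assumed standard form
    t_dur = (period / math.pi) * math.asin((1.0 + p) * r_star / a)   # b = 0 duration
    return p_geo * min(1.0, (t_dur + exposure_s) / period)

# e.g. a WD-sized (p = 1) companion at 0.01 AU observed with 240 s exposures
print(per_exposure_eclipse_probability(0.01, 1.0, 240.0))  # ~8e-5 per exposure
```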
For eclipses with durations shorter than or equal to the exposure time, the likelihood of any given measurement being in eclipse is then the sum of the probabilities for all possible orbital configurations that would produce an observed eclipse of depth m. For example, a measurement with m = 0.1 could be caused by a very small planet transiting slowly across the face of the star with a transit duration approximately equal to the exposure time. Alternatively, a much larger planet on a very short-period orbit, causing a complete occultation of the WD, would streak across the face of the star with a transit duration much shorter than the exposure time. The mean flux during the exposure may look identical in these two cases. Both of these cases and all other situations that could cause an observed eclipse depth m must be given the appropriate weight in the final likelihood calculation. Figure 6 shows the eclipse depth probability distributions for a few hypothetical scenarios.
By the definition of our eclipse detection algorithm each exposure is sensitive to eclipses of depth m ≥ 5∆F/σ lc . By integrating over all scenarios that would cause an observed eclipse depth greater than or equal to 5∆F/σ lc for every measurement we derive the probability that we could have detected an eclipse during each exposure if η = 1. The inverse of the summed probabilities over all exposures for all light curves gives a total number of expected eclipses for the survey as a Poisson expectation value for the rate of eclipses (Figure 7). We then compare this Poisson distribution for the expected number of eclipses with the lack of detected eclipses for many values of a, p, and η.
Occurrence constraints
If we treat the number of expected eclipses as a Poisson expectation value (λ), the probability that we should detect k eclipses is

P(k, a, p) = λ(η, a, p)^k exp(−λ(η, a, p)) / k! .

Fig. 6.—Left: probability of measuring an eclipse with depth ∆F during a single 240 s exposure of a random WD that hosts a single companion with the orbital parameters shown. p is the planet to star radius ratio, Rp is the radius of the planet in Earth radii, a/R WD is the orbital semi-major axis scaled to the radius of the WD, and a is the semi-major axis in AU. Right: Model eclipse light curves for the planet parameters shown on the left panel and an impact parameter 1.0. The red circle is the mean flux for an exposure centered on the mid-eclipse time. The bar extending from the red circle shows the length of the exposure time. This is the largest signal that we could expect to find for planets with these parameters. This corresponds to the maximum < ∆F > bin with a probability greater than zero in the left panel.

Fig. 7.—Expected detectable eclipse rate per million exposures of the medium-deep survey. An eclipse is deemed detectable if the depth is greater than or equal to five times the measurement uncertainty. The measurement uncertainty is calculated by adding the reported uncertainty in quadrature with the standard deviation of the light curve on a per filter basis. The dashed line marks the point at which the eclipse duration is equal to the integration time. Eclipses caused by objects with parameters that fall in the region above and to the right of the dashed line will have eclipses that may span multiple adjacent exposures. Our assumption that each light curve measurement is independent is invalid in this regime and our expected eclipse rate will be slightly overestimated.
Since we have zero detected eclipses this can be simplified to P (0, a, p) = exp(−λ(η, a, p)). By setting P (0, a, p) equal to a confidence interval C and decomposing λ(a, p) into the expectation value of eclipses if the planet occurrence rate is equal to 1 (λ 1 (a, p)) multiplied by the actual planet occurrence rate (η) we derive the maximum planet occurrence rate that is compatible with the observations at a confidence level of C assuming the planet occurrence rate is constant as a function of a and p.
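The zero-detection limit then takes a one-line form. It is assumed here that setting P(0, a, p) "equal to a confidence interval C" means solving exp(−η λ₁) = 1 − C, the usual Poisson upper limit (about 3 expected events for 95% confidence); the λ₁ = 600 used in the example is an illustrative number, not a value quoted in the text.

```python
import math

def eta_max(lambda_1, confidence=0.95):
    """Maximum occurrence rate consistent with zero detected eclipses,
    assuming exp(-eta * lambda_1) = 1 - confidence."""
    return -math.log(1.0 - confidence) / lambda_1

# Illustrative: if ~600 eclipses were expected for eta = 1,
print(f"{eta_max(600.0):.3%} (95%), {eta_max(600.0, 0.68):.3%} (68%)")
# -> ~0.5% and ~0.2%, the same ratio between the two confidence levels
#    as in the limits quoted in the Discussion.
```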
DISCUSSION
Although we find no convincing detections of eclipses with durations consistent with substellar objects we are still able to put strong constraints on the WD-hosted planet occurrence rate. Figures 8 and 9 show the maximum occurrence rate that is consistent with our observations at 95% and 68% confidence levels assuming R WD = 0.01 R and M WD = 0.6 M . This should be a relatively good approximation since the masses and radii of most WDs fall close to these values. For each reported occurrence rate (η) we first state the value corresponding to the maximum allowable occurrence rate averaged over the specified region of interest for the 95% confidence limit and then the 68% confidence limit immediately following in parenthesis. For example, our results suggest that less than 0.4% (0.2%) of WDs host planets with radii greater than ∼2 Earth radii and semi-major axis between 0.002 and 0.01 AU. 0.4% is the maximum occurrence rate allowed by our data at 95% confidence and 0.2% is the same for a confidence level of 68%.
It is an interesting exercise to break up the two-dimensional occurrence limits into regions that correspond to classes of planets that we are more familiar with orbiting main-sequence stars. Other studies have shown similarities between the architectures of exoplanetary systems around low-mass M-dwarfs with the moons of Jupiter (Muirhead et al. 2012) and scaled-down versions of our solar system or exoplanetary systems around more massive stars. If we scale down the orbital distances of the known exoplanet population we can look at the occurrence limits in a few interesting regimes: hot Jupiters, hot super-Earths, and habitable-zone super-Earths.

Fig. 9.—Maximum planet occurrence rate consistent with our data as a function of planet radius at a semi-major axis of a = 0.01 AU for confidence levels of 95% (solid) and 68% (dashed). Shaded regions are disfavored by our data. This plot represents a slice through Figure 8 at a = 0.01 AU.
The Roche limit for a fluid body with mean density ρ p orbiting a WD with density ρ WD and radius R WD can be approximated in terms of these quantities. For our assumed WD properties the Roche limit for a Jupiter-like planet is L R ≈ 0.01 AU. It is not surprising that we do not detect any Jupiter-sized objects inside 0.01 AU. However, we can equate a population of Jupiter-sized planets orbiting between 0.01 and 0.04 AU to the hot Jupiters observed orbiting very close to solar-type stars. In this regime an eclipse duration is slightly longer than the duration of a single exposure. Therefore our expectation value for eclipses is slightly overestimated; however, we do not expect this to be the dominant source of error in the occurrence rate limits. The mean maximum occurrence rate for WD-hosted hot Jupiters (R = 10 − 20R ⊕ ) is 0.5% (0.2%), indicating that hot Jupiters around WDs are very rare or non-existent. This is in good agreement with the frequency of hot Jupiters around solar-type stars measured to be between 0.3% and 1.5% (Marcy et al. 2005; Gould et al. 2006; Cumming et al. 2008; Howard et al. 2011; Mayor et al. 2011; Wright et al. 2012).
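Since the approximation itself is not written out above, the sketch below uses the standard fluid-body Roche limit, L_R ≈ 2.44 R_WD (ρ_WD/ρ_p)^(1/3), which is an assumed form, and confirms that it returns ≈ 0.01 AU for a Jupiter-density planet around a 0.6 M⊙, 0.01 R⊙ WD.

```python
import math

M_SUN, R_SUN, AU = 1.989e30, 6.957e8, 1.496e11

def roche_limit_au(m_wd=0.6, r_wd=0.01, rho_planet=1326.0):
    """Fluid-body Roche limit L_R ~ 2.44 R_WD (rho_WD / rho_p)^(1/3);
    rho_planet defaults to Jupiter's mean density in kg/m^3."""
    r = r_wd * R_SUN
    rho_wd = m_wd * M_SUN / (4.0 / 3.0 * math.pi * r**3)
    return 2.44 * r * (rho_wd / rho_planet) ** (1.0 / 3.0) / AU

print(f"{roche_limit_au():.3f} AU")  # ~0.010 AU for a Jupiter-like planet
```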
A rigid body can orbit slightly closer to the WD without being tidally disrupted. Planets with radii larger than ∼1.5 R ⊕ generally have densities lower than that of the Earth and likely have an extended gas-dominated atmosphere (Weiss & Marcy 2014). However, some super-Earths with slightly larger radii have high densities consistent with a rocky composition, e.g. CoRoT-7b (Léger et al. 2009), Kepler-20b (Gautier et al. 2012), and Kepler-19b (Ballard et al. 2011). This class of planets may be the remaining cores of evaporated gas giant planets (Hébrard et al. 2004). Our results suggest that less than 1.5% (0.6%) of WDs host planets with radii between 2.0 and 5.0 R ⊕ orbiting with semi-major axes between 0.005 and 0.01 AU. Howard et al. (2012) measure an occurrence rate of 13% for 2-4 R ⊕ planets with orbital periods shorter than 50 days. However, the occurrence rate drops with shorter orbital periods to 2.5% for periods shorter than 10 days. Our lack of detections indicates that hot super-Earths are almost certainly less common around WDs than they are around solar-type stars.
Perhaps the most interesting planets to consider are those that have an equilibrium temperature such that they could sustain liquid water on their surfaces. Since WDs cool and decrease in luminosity as they age, the habitable zone (HZ) boundaries also change as a function of time. Agol (2011) defines the WD continuous habitable zone (CHZ) as the range of semi-major axes that would be within the HZ for a minimum of 3 Gyr and also outside of the tidal destruction radius for an Earth-density planet. For a 0.6 M ⊙ WD this corresponds to semi-major axes between 0.005 and 0.02 AU. Our data show that planets in the CHZ with radii between 2 and 5 R ⊕ could be present around no more than 3.4% (1.3%) of WDs. This is significantly less than the predicted frequency of Earth-size planets in the habitable zone of solar-type stars (∼22%, Petigura et al. 2013).
A large population of short-period planets orbiting solar-type and M-dwarf stars has been observed. We might expect WDs to host similar planets if they can reform from a post-giant phase debris disk or migrate from larger orbital distances once the star becomes a WD. However, our observations are quite sensitive to planets larger than the Earth orbiting close to the WD, and the lack of any eclipses suggests that these processes are highly inefficient if they occur at all. There are very few planets in short-period orbits around WDs.

Fig. 10.—Expected detectable eclipse rates (per million measurements) calculated as described in §3.1 for hypothetical surveys using the Pan-STARRS1-like throughput with different exposure times. The numbers within the dashed box indicate the mean eclipse rate in that region of parameter space. Shorter exposure times give increased eclipse detectability for the shortest-period objects within ∼0.03 AU, but planets orbiting this close to their host WD would likely be ripped apart by tidal forces. Although the mean eclipse rate in the region of interest goes up with longer exposure times, this is reversed if you consider a fixed total survey exposure time (take twice as many 60 second exposures as 120 second exposures, etc.). However, the eclipse rates remain nearly constant, indicating that the best way to increase sensitivity in this regime is to increase the number of epochs observed (larger number of WDs and/or higher cadence).
Future survey design
Since eclipse times are generally shorter than the 4 min exposure times for the medium-deep survey we explore the idea of designing a similar survey with shorter exposure times and decreased sensitivity to shallow eclipses. This would cause less dilution of the eclipse signals over the duration of the exposure. We re-calculate the expected eclipse rates for exposure times of 30, 60, and 120 seconds scaling the measured noise properties from our 240 s data. We use the mean eclipse rate for planets with radii between 1-5 R ⊕ orbiting between 0.005 and 0.02 AU as a metric for comparison. Figure 10 illustrates the result. We find that decreasing the exposure time gives a modest boost in sensitivity to these planets for a given total survey exposure time. The most dramatic increase in sensitivity when going to short exposure times is for the very short period planets orbiting interior to 0.003 AU. However, planets are not able to withstand the tidal forces this close to the WD so we would not expect planets to exist in this regime. The expected eclipse rate in our region of interest is dominated by signal-to-noise of the individual detections. Although the eclipses are diluted by long exposure times, this is balanced by the increased gain in sensitivity to these shallow, diluted eclipses due to the greater signal to noise obtained in longer exposures. This suggests that the best way to detect these Earth to Neptune size planets in the WD CHZ may be to increase the etendue of the survey to detect more WDs on a greater number of epochs by covering a large area of the sky at high cadence. The ATLAS (Tonry 2011) and Large Synoptic Survey Telescope (Ivezic et al. 2008) surveys should be ideal for detecting these extremely rare events.
Pan-STARRS1 3π
The Pan-STARRS1 3π survey covers 30,000 square degrees with approximately 60 observational epochs per object (Kaiser et al. 2010;Magnier et al. 2013). The depth and cadence are inferior to that of the medium deep fields, but the huge amount of sky observed makes it interesting to explore the contribution that this survey could make to the occurrence rate limits if we were to perform a similar analysis on a combined dataset.
We start with an order-of magnitude estimate of the number of WDs we would expect to find in the 3π survey data via reduced proper motion. The exposure times for the 3π survey are 60 seconds vs. 240 seconds for the medium deep fields, but let us assume that our ability to detect WDs is limited by the length of the observational baseline and not by signal to noise of the detections. Since the sky coverage is a factor of ∼400 greater in the 3π survey it is reasonable to scale the number of astrometrically-selected WDs found in the mediumdeep fields (661) by 400. Therefore, we expect to find ∼30,000 WDs via reduced proper motion in the 3π data. Since each WD is observed 60 times this gives a total of 1.8 million measurements. The shorter exposure times increase our sensitivity to very short duration eclipses, however the largest gain in sensitivity is to planets orbiting well inside the tidal destruction radius (see Figure 10 and §4.1). Combining these 1.8 million epochs with the 4.3 million epochs from the medium-deep fields increases our total number of measurements by a factor of 1.4 and strengthens (decreases) our maximum occurrence constraints by this same factor. This ∼ √ 2 improvement would not change our primary conclusion that planets around WDs are rare.
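The quoted factor of 1.4 follows from assuming that the expected number of detectable eclipses, and hence the occurrence limit, scales simply with the total number of measurements:

```python
# Improvement from folding the 3-pi survey into the medium-deep dataset,
# assuming the expected eclipse count scales with the number of measurements.
n_medium_deep = 4.3e6
n_3pi = 30_000 * 60          # ~30,000 WDs x ~60 epochs each

factor = (n_medium_deep + n_3pi) / n_medium_deep
print(f"{factor:.2f}")       # ~1.42, i.e. roughly a sqrt(2)-level tightening
```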
CONCLUSIONS
Our systematic search for eclipses of WDs in the Pan-STARRS1 medium-deep fields places strong constraints on the WD planet occurrence rates. We analyze a sample of ∼3000 WDs selected via proper motion and color along with a control sample of ∼1200 stars. These WDs were observed over 5 years, yielding more than 4.3 million measurements.
We search for potential eclipses by identifying low outliers in the light curves. A total of 133 candidate eclipses are identified after applying a series of photometric and then image-based filters to remove outliers caused by weather, CCD artifacts, or an improperly modeled PSF. After visual inspection of all candidates we find none that is consistent with an eclipse or occultation by a substellar object.
We calculate the number of expected eclipses if every WD hosted at least one planet (η = 1) by convolving a trapezoidal transit model with the survey exposure time and integrating over all possible geometric orientations and many values of R_p and a. The expected number of eclipses is treated as a Poisson expectation value for the rate of events, which is converted into 95% (68%) confidence intervals. We then invert these rates to obtain the maximum value of η that is consistent with our data.
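The final inversion step can be sketched as follows under the standard zero-detection Poisson convention, which may differ in detail from our exact procedure; the expected count used in the example is a placeholder, not a value from our calculation.

```python
import numpy as np

# If N_exp eclipses are expected for eta = 1 and none are observed, the
# Poisson probability of seeing zero events is exp(-eta * N_exp); requiring
# this probability to exceed 1 - CL gives an upper limit on eta.
def eta_upper_limit(n_expected, confidence=0.95):
    return -np.log(1.0 - confidence) / n_expected

n_exp = 600.0                               # hypothetical expected eclipses for eta = 1
print(eta_upper_limit(n_exp, 0.95))         # ~0.005, i.e. eta < 0.5%
print(eta_upper_limit(n_exp, 0.68))         # ~0.002
```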
Our results suggest that hot Jupiters around WDs are at least as rare as they are around solar-type stars, occurring around no more than 0.5% (0.2%) of WDs. Hot super-Earths occur around no more than 1.5% (0.6%) of stars, and super-Earths in the CHZ are present around no more than 3.4% (1.3%) of WDs. All evidence presented in this study indicates that short-period planets around WDs are significantly less abundant than short-period planets orbiting main-sequence stars.
PP-YOLOv2: A Practical Object Detector
Being effective and efficient is essential to an object detector for practical use. To meet these two concerns, we comprehensively evaluate a collection of existing refinements to improve the performance of PP-YOLO while keeping the inference time almost unchanged. This paper analyzes a collection of refinements and empirically evaluates their impact on the final model performance through an incremental ablation study. Things we tried that didn't work are also discussed. By combining multiple effective refinements, we boost PP-YOLO's performance from 45.9% mAP to 49.5% mAP on COCO2017 test-dev. Since a significant margin of performance has been gained, we present the result as PP-YOLOv2. In terms of speed, PP-YOLOv2 runs at 68.9 FPS at a 640x640 input size. The Paddle inference engine with TensorRT, FP16-precision, and batch size = 1 further improves PP-YOLOv2's inference speed to 106.5 FPS. Such performance surpasses existing object detectors with roughly the same number of parameters (i.e., YOLOv4-CSP, YOLOv5l). Besides, PP-YOLOv2 with a ResNet101 backbone achieves 50.3% mAP on COCO2017 test-dev. Source code is at https://github.com/PaddlePaddle/PaddleDetection.
Introduction
Object detection is a critical component of various real-world applications such as self-driving cars, face recognition, and person re-identification. In recent years, the performance of object detectors has been rapidly improved with the rise of deep convolutional neural networks (CNNs) [23,8,10]. Although recent works on novel detection pipelines (e.g., Cascade RCNN [2] and HTC [3]) and sophisticated network architecture designs (DetectoRS [19] and CBNET [15]) push forward the state-of-the-art in object detection, YOLOv3 [20] is still one of the most widely used detectors in industry, because in many practical applications not only are the computation resources limited, but the software support is also insufficient. Without the necessary technical support, two-stage object detectors (e.g., Faster RCNN [21], Cascade RCNN [2]) can be excruciatingly slow. Meanwhile, a significant gap exists between the accuracy of YOLOv3 and two-stage object detectors. Therefore, how to improve the effectiveness of YOLOv3 while maintaining its inference speed is an essential problem for practical use. To satisfy both concerns simultaneously, we add a set of refinements that add almost no inference time to improve the overall performance of PP-YOLO [16]. Note that, although a large number of approaches claim to improve an object detector's accuracy independently, in practice some methods are not effective when combined, so practical testing of combinations of such tricks is required. We follow an incremental manner to evaluate their effectiveness one by one. All our experiments are implemented based on PaddlePaddle [17].
Figure 1. Comparison of the proposed PP-YOLOv2 and other object detectors. With a similar FPS, PP-YOLOv2 outperforms YOLOv5l by 1.3% mAP. Besides, when we replace PP-YOLOv2's backbone from ResNet50 to ResNet101, PP-YOLOv2 achieves comparable performance with YOLOv5x while being 15.9% faster. The data are recorded in Table 2.
In fact, this paper reads more like a tech report that tells you how to build PP-YOLOv2 step by step; theoretical justification of the failure cases is also included. In the end, we achieve a better balance between effectiveness (49.5% mAP) and efficiency (69 FPS), surpassing existing robust detectors with roughly the same number of parameters, such as YOLOv4-CSP [26] and YOLOv5l. Hopefully, our experience in building PP-YOLOv2 can help developers and researchers think more deeply about implementing object detectors for practical applications.
Revisit PP-YOLO
In this section, we describe the implementation of our baseline model. Baseline Model. Our baseline model is PP-YOLO, which is an enhanced version of YOLOv3. Specifically, it first replaces the backbone with ResNet50-vd [9]. After that, a total of 10 tricks that improve the performance of YOLOv3 almost without losing efficiency are added, such as Deformable Conv [5], SSLD [4], CoordConv [12], DropBlock [6], SPP [7], and so on. The architecture of PP-YOLO is presented in the paper [16]. Training Schedule. On COCO train2017, the network is trained with stochastic gradient descent (SGD) for 500K iterations with a minibatch of 96 images distributed on 8 GPUs. The learning rate is linearly increased from 0 to 0.005 in 4K iterations, and it is divided by 10 at iteration 400K and 450K, respectively. Weight decay is set to 0.0005, and momentum is set to 0.9. Gradient clipping is adopted to stabilize the training procedure.
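A minimal sketch of this learning rate schedule is given below; it follows the warmup and step-decay values stated in the text, but is not the exact PaddleDetection implementation.

```python
def learning_rate(iteration,
                  base_lr=0.005, warmup_iters=4000,
                  decay_steps=(400_000, 450_000), decay_factor=0.1):
    """Linear warmup from 0 to base_lr over the first 4K iterations,
    then a 10x drop at iterations 400K and 450K."""
    if iteration < warmup_iters:
        return base_lr * iteration / warmup_iters
    lr = base_lr
    for step in decay_steps:
        if iteration >= step:
            lr *= decay_factor
    return lr

# e.g. learning_rate(2_000) == 0.0025, learning_rate(420_000) == 0.0005
```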
Selection of Refinements
Path Aggregation Network. Detecting objects at different scales is a fundamental challenge in object detection. In practice, a detection neck is developed for building high-level semantic feature maps at all scales. In PP-YOLO, FPN is adopted to compose bottom-up paths. Recently, several FPN variants have been proposed to enhance the ability of pyramid representation, for example, BiFPN [24], PAN [14], RFP [19], and so on. We follow the design of PAN to aggregate the top-down information. The detailed structure of PAN is shown in Fig. 2.
Mish Activation Function. The mish activation function [18] has proved effective in many practical detectors, such as YOLOv4 and YOLOv5, which adopt it in the backbone. However, we prefer to keep our pre-trained parameters, because we already have a powerful backbone that achieves 82.4% top-1 accuracy on ImageNet. To keep the backbone unchanged, we apply the mish activation function in the detection neck instead of the backbone.
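For reference, mish is defined as x · tanh(softplus(x)); a minimal NumPy sketch for illustration (the actual network applies it inside the neck layers):

```python
import numpy as np

def mish(x):
    # Mish activation [18]: x * tanh(softplus(x)), with a numerically
    # stable softplus via logaddexp(0, x) = log(1 + exp(x)).
    return x * np.tanh(np.logaddexp(0.0, x))
```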
Larger Input Size. Increasing the input size enlarges the area of objects, so information about small-scale objects is preserved more easily than before. As a result, performance increases. However, a larger input size occupies more memory, so to apply this trick we need to decrease the batch size. To be more specific, we reduce the batch size from 24 images per GPU to 12 images per GPU and expand the largest input size from 608 to 768. The input size is evenly drawn from [320, 352, 384, 416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768].
IoU Aware Branch. In PP-YOLO, the IoU aware loss is calculated in a soft weight format, which is inconsistent with the original intention. Therefore, we apply a soft label format instead.
The IoU aware loss is

loss = -t · log(σ(p)) - (1 - t) · log(1 - σ(p)),

where t indicates the IoU between the anchor and its matched ground-truth bounding box, p is the raw output of the IoU aware branch, and σ(·) refers to the sigmoid activation function. Note that only the IoU aware loss of positive samples is computed. By replacing the loss function, the IoU aware branch works better than before.
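A minimal sketch of this soft-label binary cross entropy, following the form written above (our reading of the soft label format, not the PaddleDetection source):

```python
import numpy as np

def iou_aware_loss(p, t, eps=1e-9):
    """p: raw output of the IoU aware branch; t: IoU of the positive anchor
    with its matched ground-truth box."""
    s = 1.0 / (1.0 + np.exp(-p))                       # sigma(p)
    return -(t * np.log(s + eps) + (1.0 - t) * np.log(1.0 - s + eps))
```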
Dataset
COCO [11] is a widely used benchmark in the field of object detection. In this work, we train all our models on COCO train2017, which consists of 118k images across 80 classes. For evaluation, we report results on COCO minival, which consists of 5k images. Our evaluation metric follows the standard COCO-style mean Average Precision (mAP).
Ablation Studies
In this subsection, we present the effectiveness of each module in an incremental manner. Results are shown in Table 1. The reported inference time is that of the model in FP32-precision and does not include the result decoder and NMS, following YOLOv4 [1].
A. First of all, we follow the original design of PP-YOLO to build our baseline. Since the heavy pre-processing on the CPU slows down training, we decrease the number of images per GPU from 24 to 12. Reducing the batch size drops mAP by 0.2%. The training settings are described in Section 2.
A → B. The first refinement with a positive effect on PP-YOLO that we found was PAN. To stabilize the training process, we add several skip connections to our PAN module. The detailed structure of PAN is shown in Fig. 2.
D → E. In the training phase, the modified IoU aware loss performs better than before. In the former version, the value of the IoU aware loss would drop to 1e-5 within hundreds of iterations during training. After we modified the IoU aware loss, its value and the value of the IoU loss are of the same order of magnitude, which is reasonable. After adopting this strategy, the mAP of model E increases to 49.1% without any loss of efficiency.
Comparison With Other State-of-the-Art Detectors
Comparison of the results on the MS-COCO test split with other state-of-the-art object detectors is shown in Figure 1 and Table 2.
Table 2. Comparison of the speed and accuracy of different object detectors on MS-COCO (test-dev 2017). We compare the results with batch size = 1, without TensorRT (w/o TRT) or with TensorRT (with TRT). Results marked by "+" are updated results from the corresponding official code base. Results marked by "*" are tested in our environment using the official code and model. "†" indicates the result includes bounding box decode time (1~2 ms). The backbone of YOLOv5 has not been named yet, so we leave it blank.
We compare our method with YOLOv4-CSP and YOLOv5l because they have roughly the same number of parameters as our model. It clearly shows that PP-YOLOv2 outperforms these two methods. With a similar FPS, PP-YOLOv2 outperforms YOLOv4-CSP by 2% mAP and surpasses YOLOv5l by 1.3% mAP. Besides, when we replace PP-YOLOv2's backbone from ResNet50 to ResNet101, PP-YOLOv2 achieves comparable performance with YOLOv5x while being 15.9% faster. Therefore, we can conclude that, compared with other state-of-the-art methods, PP-YOLOv2 has a clear advantage in the balance of speed and accuracy.
Moreover, PP-YOLOv2 is implemented based on PaddlePaddle. As a deep learning framework, PaddlePaddle not only supports model implementation but also pays attention to model deployment. With official support, adapting TensorRT for PP-YOLOv2 is much easier than for other detectors. Specifically, the Paddle inference engine with TensorRT, FP16-precision, and batch size = 1 further improves PP-YOLOv2's inference speed. The speed-up ratios for PP-YOLOv2 (R50) and PP-YOLOv2 (R101) are 54.6% and 73%, respectively.
Things We Tried That Didn't Work
Since it takes about 80 hours to train PP-YOLO with 8 V100 GPUs on COCO train2017, we use COCO minitrain [22] to speed up our ablation analysis. COCO minitrain is a subset of COCO train2017 containing 25K images. On COCO minitrain, the total number of iterations is 90K, and we divide the learning rate by 10 at iteration 60K. Other settings are the same as training on COCO train2017.
We tried many things while working on PP-YOLOv2. Some of them have a positive effect on COCO minitrain but hinder performance when training on COCO train2017. Because of this inconsistency, one may doubt experimental conclusions drawn on COCO minitrain. The reason we use COCO minitrain is that we want to find refinements with universal features, meaning they should be useful on datasets of different scales. It is also important to figure out why they failed. Therefore, we discuss some of them in this section.
Cosine Learning Rate Decay. Different from linear step learning rate decay, cosine learning rate decay decreases the learning rate smoothly, which benefits the training process. We follow the formula in Bag of Tricks [9] to set the learning rate at each epoch (a short sketch of this schedule follows at the end of this section). Although cosine learning rate decay achieves better performance on COCO minitrain, it is sensitive to hyper-parameters such as the initial learning rate, the number of warm-up steps, and the ending learning rate. We tried several sets of hyper-parameters but eventually did not observe a positive effect on COCO train2017.
Backbone Parameter Freezing. When fine-tuning ImageNet pre-trained parameters on downstream tasks, freezing the parameters in the first two stages is a common practice. Since our pre-trained ResNet50-vd is much more powerful than others (82.4% top-1 accuracy versus 79.3% top-1 accuracy), we are more motivated to adopt this strategy. On COCO minitrain, parameter freezing brings a 1% mAP gain; however, on COCO train2017 it decreases mAP by 0.8%. A possible reason for this inconsistency is the different sizes of the two training sets. COCO minitrain is a fifth of COCO train2017, and the generalization ability of parameters trained on a small dataset may be worse than that of the pre-trained parameters.
SiLU Activation Function. We tried using SiLU [25] instead of Mish in the detection neck. This increases mAP by 0.3% on COCO minitrain but drops mAP by 0.5% on COCO train2017. We are not sure about the reason.
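As referenced above, a minimal sketch of the cosine schedule following the Bag of Tricks formula [9]; the hyper-parameter values are placeholders rather than the settings used in our experiments.

```python
import math

def cosine_lr(epoch, total_epochs, base_lr=0.005, end_lr=0.0):
    """Cosine learning-rate decay: the rate falls smoothly from base_lr to
    end_lr over training, following 0.5 * (1 + cos(pi * t / T))."""
    cos_term = 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
    return end_lr + (base_lr - end_lr) * cos_term

# e.g. cosine_lr(0, 300) == 0.005, cosine_lr(150, 300) == 0.0025
```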
Conclusions
This paper presents some updates to PP-YOLO, which form a high-performance object detector called PP-YOLOv2. PP-YOLOv2 achieves a better balance between speed and accuracy than other well-known detectors, such as YOLOv4 and YOLOv5. In this paper, we explore a number of tricks, show how to combine them on the PP-YOLO detector, and demonstrate their effectiveness. Moreover, with PaddlePaddle's official support, the gap between model development and production deployment is narrowed. We hope this paper can help developers and researchers get better performance in practical scenes.
End-to-End Data Quality Assessment Using Trust for Data Shared IoT Deployments
Continued development of communication technologies has led to widespread Internet-of-Things (IoT) integration into various domains, including health, manufacturing, automotive, and precision agriculture. This has further led to the increased sharing of data among such domains to foster innovation. Most of these IoT deployments, however, are based on heterogeneous, pervasive sensors, which can lead to quality issues in the recorded data. This can lead to sharing of inaccurate or inconsistent data. There is a significant need to assess the quality of the collected data, should it be shared with multiple application domains, as inconsistencies in the data could have financial or health ramifications. This article builds on the recent research on trust metrics and presents a framework to integrate such metrics into the IoT data cycle for real-time data quality assessment. Critically, this article adopts a mechanism to facilitate end-user parameterization of a trust metric tailoring its use in the framework. Trust is a well-established metric that has been used to determine the validity of a piece or source of data in crowd-sourced or other unreliable data collection techniques such as that in IoT. The article further discusses how the trust-based framework eliminates the requirement for a gold standard and provides visibility into data quality assessment throughout the big data model. To qualify the use of trust as a measure of quality, an experiment is conducted using data collected from an IoT deployment of sensors to measure air quality in which low-cost sensors were colocated with a gold standard reference sensor. The calculated trust metric is compared with two well-understood metrics for data quality, root mean square error (RMSE), and mean absolute error (MAE). A strong correlation between the trust metric and the comparison metrics shows that trust may be used as an indicative quality metric for data quality. The metric incorporates the additional benefit of its ability for use in low context scenarios, as opposed to RMSE and MAE, which require a reference for comparison.
The scale at which data is collected and consumed has led to the IoT big data wave. This is characterized by volume, velocity, variety, veracity, and value, the 5V's of big data as they are known [1]. As this data is collected, it must undergo several stages from collection to decision-making. These stages form the big data model. The big data model is a series of stages that the data must undergo from when it is created to when it is used. Each preceding stage is critical for the success of the next stage. Fig. 1 shows the various stages of the big data model. Data collection, data preprocessing, data processing, and data use are separate stages of the big data model. It is beneficial to interrogate data quality independently at each stage in the model. It can also be argued, however, that data quality should be reviewed longitudinally through the model for a given input and use case. For each stage, data can have different properties, and therefore, data quality has to be assessed separately but also represented differently. This is equally true for different data users and applications within the IoT ecosystem.
The data generated and consumed within IoT comes from several domains including, but not limited to, smart homes, smart cities, manufacturing, automotive, and environmental sensing. Current discussions introduce the opportunities that sharing and consuming data across such domains of the IoT ecosystem would have for further innovation in the IoT space. This sharing of data across multiple domain spaces is referred to as data-shared IoT [2]. The example in Fig. 2 shows how a smart city application can benefit from fusing its own data with data from other IoT applications for better insights. For this fusion to be fruitful, it is important to ensure that the shared data conform to certain quality standards and can be trusted by the consuming application.
This article presents a mechanism that aims to achieve the following objectives: 1) tailor assessment to user requirements; 2) decouple metrics from the evaluation strategy; and 3) allow for longitudinal fusion of quality scores. Achieving these aims comes with associated challenges. Here, the aims are explained, and in Section IV, the challenges are highlighted.
The first aim is decoupling the mechanics of providing a quality score from the means of evaluating the quality. The big data cycle (BDC) has various stages that data must undergo, and at each stage the data can have different quality issues. To determine and address data quality issues, it is beneficial to assess quality at each stage considering only the data properties at that stage. The ability to map data quality to the individual stages of the BDC will be referred to as the mechanics of providing or advertising quality. This can be separated from the metric used to evaluate the quality. This article uses a trust metric to provide a quality score; however, the mechanism can be applied to other quality metrics. This is illustrated in Section VII, where each trust stage is decoupled and integrated into the BDC.
This separation is useful to achieve a domain- and use-case-agnostic solution. Each unique domain and use case can define its own metric for calculating a quality score without affecting the mechanics of applying quality assessment or advertising quality. Furthermore, when the quality requirements of an application change, the metric for assessing quality can be customized to that application's needs without affecting, or needing to change, the underlying mechanics of assessment.
Longitudinal fusion of data quality defines a means to combine the various quality scores from each stage of the BDC into a single score that is representative of the end-to-end processes that the data has undergone. This process should be independent of the metric of assessing quality. Any fusion technique can be used here. In Section VIII, a naive fusion approach was used to distinguish between a low-quality stream and a gold reference stream. The intention, however, is to investigate more advanced fusion techniques.
A tangible link between data quality, data quality types, and their effect on data through the stages of the BDC has been demonstrated [2]. This concept, however, is yet to be implemented. This describes the longitudinal relationship between the various data quality issues across the BDC. For example, knowing which data quality issues are present in the initial stages of the BDC (data preprocessing) and how this affects the next stage can help determine a data-processing technique in that stage.
Data quality tailoring allows an application to customize quality assessment to suit its requirements. Quality itself is subjective and so should be evaluated as such. Data that is good for a particular application or use case might not be good for another. Each application should be able to define its own quality. For example, if data accuracy is important for a particular application and data latency (timeliness) is not, then the quality score should be customized to reflect that. This is illustrated by the use case in Section X, where, based on a custom data quality score, each application can connect to or disconnect from a data source.
Current solutions aim to optimize quality assessment for a given use case [3], [4], [5], [6]. The resulting solutions, however, may not be applicable to another use case. Taleb et al. [7] described the importance of integrating quality assessment with the BDC, connecting quality assessment with the source of the quality issues. This approach, however, has not been implemented.
This article presents a real-time end-to-end implementation of a data quality assessment framework that can be used to assess the quality of data in IoT deployments. The framework leverages trust as a means to assess quality where no reference or ground-truth data is available. Assessment of quality throughout the BDC allows identification of introduced quality issues at each stage. The framework is agnostic of the trust metric or fusion technique used; however, a trust metric presented in earlier work [2] is used in the implementation and testing.
The rest of the article is structured as follows: Section II presents background information. Section III presents the state-of-the-art. Section IV details the existing challenges in implementing quality assessment in IoT environments. Section V presents the motivation for the proposed framework and highlights why trust is an important metric for the assessment of data quality in IoT. Section VI provides a detailed description of the framework including the mathematical formulation of the framework and microservice-based implementation of the solution. Section IX presents the testing environment, datasets employed, evaluation strategy, and results of the evaluation. Finally, Section XI presents the concluding remarks and future work.
II. BACKGROUND
This section introduces three concepts that are central to this work: 1) data quality; 2) data quality dimensions (DQDs) and trust; and 3) their application within an IoT context. It then highlights the use of trust as a measure of quality.
A. Data Quality
Data collected from sensing IoT devices is of paramount importance today. Such data is being used to advance innovations and inform decision-making. Much of this data comes from low-cost sensor devices, which are inherently unreliable [8]. Assessing and ascertaining the quality of such data before using it is therefore important. Data quality has widely been studied in database management [9], [10], [11] and also in the big data context [12]. Poor data quality management can have adverse negative effects on business decisions [13].
Data quality is subjective, making it dependent on the use case and domain area. It has been defined differently in academic and industrial contexts [14]. Sidi et al. [3] defined data quality based on how appropriate it is for use based on user need. According to Heravizadeh et al. [15], quality means the totality of the characteristics of an entity (data) that bear on its ability to satisfy stated and implied needs.
B. Data Quality Dimensions
DQDs provide an acceptable way to measure data quality. Several authors have defined different DQDs, each with associated metrics [14]. A DQD is a characteristic or feature of information for classifying information and data requirements. As such, it offers a way of measuring and managing data quality as well as information [3]. It is important to note that there is no standard definition of DQDs that is accepted as domain-independent [16]. It is argued that some of these could be task-independent, and therefore not restrained by the context of application, while others are task-dependent [17]. Lee et al. [18] studied many of these and summarized them into four main categories, as shown in Table I.
C. Trust
Trust can be defined as the belief of a trustor in a trustee that the trustee will accomplish a given task by satisfying the trustor's expectation [16]. Different users have different requirements that must be satisfied before they can trust a source. Examples of these can relate to DQDs including reliability, competence, credentials, and reputation. In the era where information is widely available, users are tasked with gauging the quality of such. From the trust perspective, a data source can build a reputation over time to become trustworthy. Trust in itself is a process, and therefore trust can be formed, improved, and also lost. Some data sources have built trust over time and now they are more trusted than others. Trust has been widely used in other areas. In service computing, Malik and Bouguettaya [19] and Chang et al. [20] used trust to select the best service for a user. Jøsang et al. [21] proposed systems that could be used to derive measures of trust and reputation for Internet transactions.
D. Trust as a Measure of Data Quality
In a wider sense, trust has been used as a measure of quality, especially in information systems. It is assumed that if more people trust a product or service, it has better quality and vice versa. This same principle has been used widely in information search on the Internet and more recently in recommender engines [22].
Like trust, quality is an iterative process that must be constantly reassessed. To achieve a level of trustworthiness, different trust attributes must be evaluated at every stage, along with how these contribute to each other. The uniqueness of trust as a metric for data quality assessment lies in its properties. Byabazaire et al. [23] highlight these properties and how each can be used to harness trust as a data quality metric, especially in data-shared IoT scenarios. For example, trust is personalizable. In IoT, each application has a unique description of data quality.
III. STATE-OF-THE-ART
Several solutions have been proposed to help ensure that data retain its quality within database management and a few in the context of IoT and big data. While data quality assessment in a general context can be considered a mature field of study, data quality assessment in the context of IoT has not yet been fully explored [24]. There are several methods to ensure data retains its quality. This section contrasts two approaches to data quality assessment that relate to this work. The first approach aims to develop DQDs that can be used by domain experts to assess data quality both generally, and in the context of IoT. The other approach aims to take these DQDs and develop solutions that automate the process of assessing and improving data quality.
A. Development of DQDs
DQDs offer a way for measuring and managing data quality as well as information [25]. More precisely, DQDs describe the various measurable metrics of data quality. The main aim of the solutions is to define new DQDs and evaluate them based on expert opinion.
Data quality assessment frameworks based on DQDs date back to 1998. Total Data Quality Management (TDQM) by Wang [27] has been widely accepted within database management systems [28], [29] and within big data and IoT systems [30], [31], [32]. One of its core advantages is that it emphasizes an iterative approach to data quality management. Data users specify their requirements, and data engineers (information product engineers) translate these into DQDs that are measurable. Finally, expert knowledge is used to validate the requirements against the output of the engineers. Their implementation proposes 15 DQDs that can be applied to various domains and have been widely adopted. The evaluation of the framework is based on domain expert knowledge.
Lee et al. [18] later proposed AIM quality (AIMQ) that is based on TDQM. The major contribution of this work classifies DQDs into four categories: 1) intrinsic; 2) contextual; 3) representational; and 4) accessibility. Several other methodologies have also been proposed based on TDQM.
While the solutions above are more general, more recently, Alrae et al. [33] proposed "House of Information Quality framework for IoT systems." This differs from the above solutions in that it compares DQDs associated with information quality and the core IoT elements to define DQDs that are necessary for IoT applications. Like the above solutions, expert opinion is used as a significant validation method. This is a good validation strategy, but not good for runtime assessment.
B. Application of DQDs
This section highlights solutions that use DQDs to implement applications for assessing and improving that quality. In these, some have advanced a data-centric approach by trying to mitigate the errors in the data itself [34], [35], [36], and others have proposed a process-centric approach where the data collection process is assessed [3], [4], [5], [6].
Javed and Wolf [36] presented a technique that leverages spatial and temporal interpolation to identify outliers in sensor reading. They evaluate their solution using weather sensing and conclude that the same method can be used in any application domain where the underlying phenomenon is continuous.
Tsai et al. [34] proposed an abnormal sensor detection architecture that leverages machine-learning techniques, using a trained Bayesian model that can predict values of sensor nodes via other correlated sensors. Their results show they can detect abnormal sensors in real-time.
Vilenski et al. [35] looked at a multivariate anomaly detection technique for ensuring the data quality of dendrometer sensor networks. The anomalous sensors are identified statistically by comparing a sensor's readings to an expected reading from a similar, healthy sensor network. As a gold standard, expert knowledge was used to assess the system. The above solutions address a single DQD (accuracy) by employing anomaly detection techniques.
Contrary to the above, Kim et al. [37] used the accuracy and consistency DQDs and proposed a system for filtering data based on the sensing objects. By employing a Bayesian classifier, the system can filter sensing objects with inaccurate data and then deliver data with integrity to the server for analysis. The performance of the proposed data-filtering system is evaluated through computer simulation.
This research identifies the necessity of addressing multiple DQDs and directs the work in this article for a general framework to apply multiple DQDs in assessment, with those DQDs specified by the end user.
Current approaches are evaluated by either expert knowledge (Section III-A), by a unique process (Section III-B), or by bespoke gold standard reference measure for a given use case. The evaluation strategy is thus specific to the use case. Identifying a means to evaluate data quality assessment strategy via a benchmark remains an open challenge. Table II compares some of the previous research and the evaluation strategies used against the proposed approach.
IV. EXISTING CHALLENGES
The approach to data quality assessment in this article is to investigate how the aspects of data quality affect the performance of each stage in the big data model and how this affects subsequent stages.
To understand how data quality assessment proliferates and affects data use cases, it is essential to understand the relationship between data quality and the big data model. Thus far, the literature does not consider data quality as a fundamental aspect of the big data model. A challenge exists with regard to structuring DQDs within the big data model so that the effect of data quality is identified throughout the model. Currently, it is difficult to know which part of the big data model is responsible for what portion of the overall data quality score. These are referred to as structure-related challenges [2].
With a given data quality structure in mind, consideration must be given to how data quality measurements from one stage of the big data model can and should affect data quality measurements at other stages of the model. This may involve combining or weighting quality measurements for a given stage or use case. A number of challenges exist in this space and are referred to as method-related challenges.
Implementing a given assessment structure and methodology into a data pipeline from end to end considering the uniqueness of each stage of the BDC remains a challenge. These are referred to as implementation challenges.
A. Method-Related Challenges
1) Data quality can be highly subjective. A single data point or source can have varying qualities depending on the use-case context. How might data quality be represented in a general manner throughout the big data model yet allow subjective assessment by two (or more) end users? 2) Data quality is measured and represented in different forms depending on the stage and context within the big data model. How can these data quality measures be combined across the big data model stages to infer a quality metric which is useful for use-case quality determination? This article addresses these challenges by providing a means of decoupling the mechanics of providing a quality score and the methods of evaluating quality.
B. Implementation-Related Challenges
1) The current implementations of data collection/processing solutions for IoT and big data are based on a data pipeline. The BDC is broken down into a set of defined, individual, and independent services. For a data assessment solution to be feasible, it should be integrated into such data pipelines. Challenges exist in choosing the significant stages in the pipeline where data should be evaluated. 2) Other challenges include fusing the different scores from independent stages of the data pipeline into a single score that can be used and advertised to applications. This would help explain the interrelationship between the various stages of the BDC and data quality issues. This article implements a naive approach of weighting all the scores equally, but the goal is to investigate other fusion techniques.
V. RATIONALE FOR TRUST
This section provides the motivation for introducing trust as a driver for data quality measurement and for incorporating data quality into the big data model. Trust has been used previously as a measure of data quality. Keßler and De Groot [38] studied how the quality of geographic information can be estimated through the notion of trust as a proxy measure.
Trust is a unitless measure that can be used in composite metrics. A trust score can be used to represent the composite score of a chosen number of DQDs used to assess quality while representing a competitive score evaluating two (or more) sources. Moreover, trust based on previous events can help minimize the required processing time for real-time applications. Trust is expressed using the experience metric in the implementation of this article.
Experience (e):
Data quality is currently measured by evaluating over a period of data points and determining a quality score for an instance over this period. Real-time subscriptions to data streams are concerned with current data quality at the head of the stream. Assessing this quality over a period, up to the head of the stream can be expensive. Furthermore, this quality measure will age, requiring reevaluation.
This article presents an experience metric (e), based on a trust paradigm, which can be used to continuously assess quality at the head of a data stream with low overhead. Experience is modeled on the following properties of trust. 1) Dynamic: Trust can increase or decrease with new experiences (usage or interactions). This feature has been modeled through different techniques. For example, in PeerTrust [39], they use an iterative windowing approach which allows users to customize the overall trust score by varying past and present experiences of an actor. In this article, the defined generic experience metric provides an innovative way to allow a data stream/data provider to build trust over time. This is different from the current data quality evaluation strategies that, while considering historical data, return an instantaneous metric. 2) Personalized: This allows each data agent/consumer to customize its own trust metric by either assigning different weights to the metric or defining its own experience metric. This provides an innovative way to assess data quality based on the specifications of the data consumer.
VI. PROPOSED FRAMEWORK
The framework is based on the properties of trust that can be improved over time to indicate good quality data whose quality threshold is acceptable for a certain use case. Trust itself is a continuous assessment process. Throughout this process, trust can be formed, improved, or lost. This article defines three trust stages for evaluating data quality, informed by previous research in the field [4]. Fig. 3 shows how these relate to the big data model. 1) Initial trust: This is trust that is derived without investigating the data itself but rather the context of the data. This includes investigating the source and the equipment/sensors used; metadata are also assessed. Metadata plays a key role in determining quality. Investigative and result-driven trust relate to the data itself and to the products of data use, such as models and decision-making. The result of these products can be monitored as effective decisions or accurate models. This monitored result can be used as a feedback mechanism throughout the stages of the big data model to build trust. This relates to the propagative property of trust [40]. The result-driven feedback is an important facet to build trust in the absence of a gold standard and in a shared data environment. Initial trust defines a trust score that represents data quality during data collection. Investigative trust defines a trust score that represents data quality during data preprocessing and data analytics. Finally, result-driven trust defines a trust score that represents data quality during data analytics and other data use processes. This ensures that data quality is represented across the entire big data model. The proposed data quality assessment framework is shown in Fig. 4.
The framework defines three phases: 1) starting phase (SP); 2) investigation phase (IP); and 3) results phase (RP). These are explained further in Section VI-C. At each of the phases, a phase trust score SP_n, IP_n, or RP_n is calculated by combining parameters and weights. For example, the SP score SP_n is calculated with parameters (a_1, a_2, ..., a_n) and weights (w_1, w_2, ..., w_n). Table III presents a nonexhaustive list of parameters. This is completed for each of the phases with the respective parameters and weights. After determining a phase trust score, a use-case-specific threshold is defined. Such weights can be learned through feedback from the previous stage for a particular use case. This is used to model the effects of the phases on each other. The framework then combines these with the experience metric and outputs a single end-to-end trust metric that can be used to evaluate data quality.
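As an illustration, a phase trust score can be sketched as a weighted linear combination of its DQD parameters; the normalization by the weight sum is an assumption made for this example, not part of the published formulation.

```python
def phase_trust(params, weights):
    """Sketch of a phase trust score (SP_n, IP_n or RP_n): a weighted linear
    combination of the phase's DQD parameters a_1..a_n with weights w_1..w_n,
    normalized by the weight sum so the score stays in [0, 1]."""
    assert len(params) == len(weights)
    return sum(w * a for w, a in zip(weights, params)) / sum(weights)

# e.g. an investigation-phase score from accuracy and completeness, equally
# weighted as in Section VIII:
ip_score = phase_trust([0.92, 0.98], [1.0, 1.0])   # 0.95
```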
A. Framework Phases
Three phases are defined that relate to the trust formation process. First, the framework will evaluate the context of the data by looking at aspects of the data; origin of data (reputation of the source), metadata. Second, the framework examines the data itself, assessing quality issues that exist in the data.
Finally, the framework assesses use-case-specific results and their applicability to data quality. A predictive model use case may compare real and predicted values and attribute trust based on this. At each phase, the framework defines parameters a_1 to a_n that define the dimensions of data quality used at each stage.
B. Determining Weights
The framework applies a set of linear weights to the attributes at each stage. The weights can be learned through the feedback process from the previous stage. This ensures that each use case can uniquely customize its own quality experience.
C. Formulation of the Trust Metric
The mathematical definitions of SP, IP, and RP are illustrated further in Section VIII-B. The trust score of a data stream i is defined by combining the phase trust scores with a metric e called experience, defined below.
Experience (e): The proposed framework uses experience e to model the natural behavior of trust. This is motivated by work conducted by Gao et al. [41]. The experience score is driven by positive and negative experiences. The metric e of a given data stream i is therefore defined as

e_i = ϑ_i / (ϑ_i + δ_i),

where ϑ_i is the count of positive experiences toward data stream i, and δ_i is the count of negative experiences toward data stream i. Consider a data stream i: it is said to have a positive experience at time t if the trust score at time t-1 is greater than a threshold; otherwise, i shows a negative experience. At t = 0, ϑ_i + δ_i = 0, meaning the data stream is new or has just started streaming; in this case, experience is set to e_i = 0.5. This is informed by previous research in the field [42].
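A minimal sketch of how the experience counts could be maintained for a stream, following the behavior described above (threshold comparison of the previous trust score, with e = 0.5 for a new stream); the exact update rule in the published framework may differ in detail.

```python
def update_experience(positive, negative, prev_trust=None, threshold=0.5):
    """Update the positive/negative experience counts of one data stream and
    return the new experience value e.  A brand-new stream (no counts yet and
    no previous trust score) starts at e = 0.5."""
    if prev_trust is not None:
        if prev_trust > threshold:
            positive += 1
        else:
            negative += 1
    total = positive + negative
    e = 0.5 if total == 0 else positive / total
    return e, positive, negative

# e.g. a stream whose previous trust exceeded the threshold:
e, pos, neg = update_experience(positive=7, negative=2, prev_trust=0.8)
```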
VII. END-TO-END IMPLEMENTATION
The proposed framework is tested via an end-to-end implementation of the data pipeline that is based on industry standard technologies and practices. The application development follows a microservice-based architecture that decomposes the application into a small set of complete self-contained services. This ensures that the framework can be seamlessly designed into any pipeline. This implementation aims to achieve the following goals.
1) Decouple the mechanics of providing or advertising a quality score from the methods of evaluating quality. This allows different, competing, or updated metrics to be used to calculate trust without affecting the scoring mechanism. 2) Describe the placement of the trust framework in big data pipelines. 3) Evaluate the feasibility of end-to-end data quality assessment in terms of computing resources. The end-to-end data pipeline shown in Fig. 5 is composed of four system components, which map onto the four stages of the big data model described in Section I. This section describes the role of each component and the technologies used in each.
A. Data Collection
This phase of the data pipeline is concerned with data producers or data sources. These could be live sensors streaming to the cloud, historical data coming from a data warehouse/database, or data coming from a third-party application programming interface (API). Both live stream and historical data sources (batch processing) were considered for testing and evaluation.
B. Data Preprocessing
Two operations take place in the preprocessing block. First, the calculation of the initial trust described in Section VI is applied as part of the trust framework. The system checks whether the initial trust meets this stage's application data quality requirements. If satisfied, the data stream is tagged as good data and passed on to a Kafka node. Otherwise, the data can be tagged as usable data; in this case, the data can be improved by subsequent processing, or else it is discarded. Deciding what constitutes usable data is still an open research question.
Second, a producer-broker-consumer mechanism is set up to receive data from the data producers (sensors). This in turn will present the data to the next part of the data pipeline (consumer). In a typical IoT data-shared environment, it is expected that a single data source can share its data with one or several data consumers each with different data quality requirements. The need for parallel distributed processing where a single input data source can be processed independently by each application is supported by Kafka and Zookeeper.
Kafka is a distributed messaging system that is used for collecting and delivering high volumes of data with low latency [43]. Zookeeper is an orchestration server for Kafka. Fig. 6 shows the high-level architecture of Kafka. In the proposed system, each data consumer (a Spark application, described in Section VII-C) initializes a Kafka topic, which is the name of the source(s) from which it expects to receive data. A single application can subscribe to one or many Kafka topics. Each data source can only publish a single topic. Both Kafka and Zookeeper are run as microservices on Docker containers. The system can scale both horizontally and vertically.
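A minimal sketch of a data source publishing readings to its own topic with the kafka-python client; the broker address, topic name, and payload fields are illustrative assumptions rather than values from the deployment.

```python
import json
from kafka import KafkaProducer  # kafka-python client

# Each data source publishes JSON readings to a single topic, as in Fig. 6.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("co_sensor_01", {"ts": "2020-01-01T00:00:00Z", "co_ppm": 1.8})
producer.flush()
```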
C. Data Processing and Analytics
Data processing, real-time modeling, and data storage are performed by the data processing and analytics cluster. The second phase of trust calculation is conducted in this system component. To achieve the functionality of this subcomponent, the data needs to be processed in real-time as it is received. A spark streaming service is used to handle real-time data.
Spark Streaming is part of the core Spark API that allows for real-time processing of data from various sources including Kafka, Flume, and Amazon Kinesis. The processed data can then be saved to file systems, databases, or live dashboards, or even pushed to a Kafka node, as shown in Fig. 7. The setup consists of one master node with two workers. Each Spark application is initialized with a list of topics to which it listens. Each topic corresponds to a data source name that is part of the IoT data-sharing ecosystem. Both historical and real-time data can be assessed using this implementation. Historical data can be assessed in batches defined by a period. Real-time data is assessed on a continuous basis, windowed by a period or sample size.
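The consumer side can be sketched with Spark Structured Streaming as follows; topic names and the broker address are placeholders, and the Spark session is assumed to have the Kafka connector package available.

```python
from pyspark.sql import SparkSession

# A consuming application subscribes to the topics of the sources it assesses.
spark = SparkSession.builder.appName("trust-assessment").getOrCreate()
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "co_sensor_01,gold_reference")
          .load())
# Kafka delivers raw bytes; cast the value to a string for downstream parsing.
readings = stream.selectExpr("CAST(value AS STRING) AS json", "timestamp")
query = readings.writeStream.format("console").start()
```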
The following operations take place using the calculated trust score.
1) The trust score from the previous phase is aggregated with that of the current phase. 2) If the trust score is lower than a threshold, operations can be performed here to improve the overall quality and a new trust score calculated. If the data does not meet the quality standards of the application, the current data source will be deemed unusable.
D. Data Use
The final component of the data pipeline considers data use. Data use operations such as data visualization and machine learning are typically applied in the BDC. RP trust is calculated based on the output of such operations. This requires feedback from the application. The feedback loop would help define the longitudinal relationship between the data quality properties at each stage of the big data model, and how these might affect or help improve the overall data quality of a data stream. For example, a prediction model application may return a prediction error based on the current data pipeline used.
VIII. TRUST FRAMEWORK BY EXAMPLE
To demonstrate the application of the proposed system, an example based on a dataset from an IoT deployment collected to measure air quality was used. The air quality dataset is comprised of two data streams: a gold reference sensor stream and an IoT sensor stream. Both are colocated and measure the same feature (carbon monoxide (CO) concentration). From each of these streams, a continuous trust metric is formed. The trust metric is built using three DQDs: 1) accuracy; 2) completeness; and 3) timeliness. The implementation of each of the metrics is defined in Section VIII-B. At any given time, a value of trust is given to the stream based on these DQDs and past trust scores (experience). The example is presented to, first, provide an implementation of trust and show the dynamics of the trust metric when evaluating a data stream, and second, to test and validate the trust mechanism. This is achieved in three stages. 1) First, the trust metric of the known gold standard data stream is compared against the trust metric for colocated IoT data stream. This test indicates how trust can be used as a metric over multiple DQDs to differentiate between sources of varying quality. 2) Second, trust is compared to known data quality metrics MAE and RMSE. The comparison will validate trust as an indicator of data quality. Should the defined trust metric show a strong correlation with the known data quality metrics over a range of data streams, we may conclude that the trust metric can indeed be a usable quality metric. 3) Finally, the resource costs of implementing trust as a continuous quality metric are evaluated. The first evaluation is based on data from a CO sensor (referred to as the low-cost sensor) and a gold reference sensor. These are colocated and measure the same property. For each stream, DQDs at each stage are defined. These are used to calculate the trust score and subsequently experience for each stage. Finally, result scores from the initial and investigative stages are fused, resulting in a trust score for a stream. A naive approach using equal weighting linear fusion is applied in this example. Further investigation into fusion techniques will be conducted in future work.
A second evaluation is based on the gold reference stream. The goal is to have a standard and generic way of evaluating data quality frameworks that are based on model performance. Models are usually the final step of most IoT data processes, and the previous research has shown a correlation between data quality and model performance [44]. This can help establish a benchmark for all data quality frameworks across the IoT.
The gold reference stream is used to generate six streams. The first stream is the original stream (no noise added). The other five streams are generated by randomly introducing noise into the original stream in varying proportions of 10%, 15%, 20%, 25%, and 30%, where the percentage refers to the proportion of the overall data stream that is corrupted.
To generate the noise, NumPy's random function was used [45]. It draws random samples from the normal distribution, and the size of the sample corresponds to the proportion of error, for example, 10%. The generated noise is then added to the original stream to create a new stream. For each stream, a trust metric is calculated. For each stream, a model is also built and evaluated with two known metrics: 1) RMSE and 2) MAE. The framework is agnostic of the modeling technique; this example uses an autoregressive integrated moving average (ARIMA) model.
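A minimal sketch of this stream-degradation and evaluation loop; the ARIMA order, noise scale, and train/test split are illustrative choices rather than the exact experimental settings.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from statsmodels.tsa.arima.model import ARIMA

def degrade(stream, fraction, scale=1.0, seed=0):
    """Corrupt a given fraction of a 1-D numpy stream with Gaussian noise."""
    rng = np.random.default_rng(seed)
    noisy = stream.copy()
    idx = rng.choice(len(stream), size=int(fraction * len(stream)), replace=False)
    noisy[idx] += rng.normal(0.0, scale, size=len(idx))
    return noisy

def evaluate(stream, train_frac=0.8, order=(2, 1, 2)):
    """Fit an ARIMA model on the head of the stream and score forecasts on
    the tail with RMSE and MAE."""
    split = int(train_frac * len(stream))
    fit = ARIMA(stream[:split], order=order).fit()
    pred = fit.forecast(steps=len(stream) - split)
    rmse = np.sqrt(mean_squared_error(stream[split:], pred))
    mae = mean_absolute_error(stream[split:], pred)
    return rmse, mae
```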
A relationship exists between data quality and model performance [44]. As the data quality degrades, so does model performance, other factors being constant. This relationship is used as a way to evaluate the trust metric: if the same relationship holds between the calculated trust metric and model performance, it can be concluded that the trust metric is a valid metric for describing data quality. Fig. 8 summarizes the data and process flow of the example. The final evaluation measures how the system resources are utilized as the data go through the data pipeline. Central processing unit (CPU) utilization of the system without the trust calculation and CPU utilization with the calculation of trust at each stage are compared. The delay introduced by the trust calculation is also measured.
The results and evaluations reported in this study are based on the starting and investigation phases of the framework highlighted in Fig. 4. The mechanism for the RP is still under study. In the SP, one DQD is considered, timeliness, and two DQDs are considered in the IP, completeness and accuracy. Unlike previous studies that use the same DQDs but take into account only the current state of a data stream, our implementation uses a trust approach based on past and present experience. Karkouch et al. [46] reported that, because of the instantaneous nature of these DQDs, at some point they will become unreliable or insignificant. For example, if a sensor dirty-fails (the sensor node fails but keeps reporting erroneous readings), then the accuracy dimension is rendered insignificant and unreliable unless it is enhanced with past experience. These metrics are defined in Section VI-C. This implementation considers uniform weights for each DQD. Effective weight determination is still an open research question.
A. Dataset Description
The study uses a publicly available dataset that was collected from an IoT deployment (see reference [47]). A multisensor device was colocated with a conventional air pollution analyzer, which was used to provide the true concentration values of the target pollutants at the measurement site. These values were thus used as a gold standard. This dataset is suitable for testing as the discrepancies in quality between the streams are known and can be used for system evaluation. This study uses data from the CO sensor (referred to as a low-cost sensor) and the gold reference sensor. Fig. 9 shows the hourly concentration estimation of CO over a one-week window. The red line represents the true concentration value as measured by the conventional analyzer colocated with the low-cost sensor, whose values are shown by the blue line.
B. Mathematical Implementation
A previous study [2] examines if a trust-based framework can be used to evaluate the quality of a data stream without a gold reference. This study expands on this to consider the first two phases (SP and IP) of the framework. This work is different from the previous study in that it implements evaluation over the big data model, evaluating a fundamental component of the proposed framework.
The implementation and evaluation are conducted over two phases using: 1) the timeliness DQD and 2) the accuracy and completeness DQDs. Following from (1), SP_i is given by the timeliness metric, and IP_i combines the accuracy and completeness metrics. 1) Accuracy: This is widely considered to mean a correct and unambiguous correspondence with the real world [48]. Ballou and Pazer [49] defined accuracy as the recorded value being in conformity with the actual value. This work defines the metric for accuracy following the definition by Blake and Mangiameli [48]:

Accuracy = 1 - V_T / N_A,

where V_T is the number of tuples in a relation having one or more incorrect values and N_A is the total number of tuples.
To determine V_T, a statistical technique based on the median absolute deviation (MAD) is used. Absolute deviation from the median has long been used to filter outliers [50]. The median is a measure of central tendency and is preferred to the mean as it is less sensitive to the presence of outliers, which can have an outsized effect in IoT. The median is also a location estimator with the highest breakdown point. The following formula, as defined by Huber [51], was used to calculate the MAD:

MAD = α · median(|x_j - M|),

where x_j are the original observations, M is the median of the series, and α is the data normalization constant defined by [52]. It is defined as α = 1/Q(0.75), where Q(0.75) is the 0.75 quantile of the underlying distribution. The normalization step is important because otherwise the MAD would estimate the scale only up to a multiplicative constant [51]. Fig. 10 illustrates how the MAD was used to determine V_T. To determine the tolerance factor, Miller [53] suggested three values: 2, 2.5, and 3 standard deviations. The choice determines the sensitivity of the metric. Since IoT data is noisy, the extreme value of 3 was used to ensure that noisy data are not tagged as outliers.
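A minimal sketch of the MAD-based outlier flagging and the resulting accuracy metric; the constant 1.4826 is the usual normalization (α = 1/Q(0.75)) under the assumption of an underlying normal distribution.

```python
import numpy as np

def mad_outliers(x, tolerance=3.0, alpha=1.4826):
    """Flag values further than `tolerance` scaled MADs from the median;
    flagged values count towards V_T."""
    x = np.asarray(x, dtype=float)
    median = np.median(x)
    mad = alpha * np.median(np.abs(x - median))
    return np.abs(x - median) > tolerance * mad

def accuracy(x, tolerance=3.0):
    flags = mad_outliers(x, tolerance)
    return 1.0 - flags.sum() / len(flags)   # 1 - V_T / N_A
```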
2) Completeness: The metric for completeness is given by [48] and is defined as follows: on the level of data values, a data value is incomplete (i.e., the metric value is zero) if and only if it is "NULL"; otherwise, it is complete (i.e., the metric value is one). All data values that represent missing or unknown values in a specific application scenario (e.g., blank spaces or "9/9/9999" as a date value) are represented by the data value "NULL." For a relation R, let T_R be the number of tuples in R that have at least one "NULL" value and let N_R be the total number of tuples in R. Then, the completeness of R is defined as

Completeness = 1 - T_R / N_R

3) Timeliness: The timeliness of a data stream is affected by two components. The first, currency, refers to the lag between when a data point was produced and when it was used or processed. The second, volatility, refers to how long the data point remains valid [54]. For some applications, such as accident avoidance in autonomous vehicles, this is very important; in others, such as disease prediction in smart agriculture, currency is less important. Unlike accuracy and completeness, timeliness is not determined directly from the data but rather by the context of the data. To this end, the metric for timeliness is defined in terms of these two components as

Timeliness = max(0, 1 - Currency / Volatility)

C. Weighting the Scores

Previous works [2] have two main drawbacks.
1) Both previous and current experiences are weighted equally. The natural norm of trust assigns more weight to previous experiences than to current ones.
2) The trust curve is biased by sudden changes in the data properties. The desired effect would be a gradual change for small changes in the data properties, with larger variations in trust occurring only for larger changes in the data properties.
To mitigate these effects, a weighted moving average that assigns more weight to past experiences than to current experiences is applied. This also reduces the effects of sudden increases and decreases in the trust curve due to small changes. Fig. 11 compares the trust score before and after weighting. The highlighted area shows how the weighting mitigates the above challenges. This is important in data-shared IoT, where data is noisy and sudden changes in the data do not always indicate poor quality.
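The weighted moving average used to temper the trust curve can be sketched as follows; the specific weight of 0.8 on past experience is an assumed value for illustration, not one prescribed by the article.

```python
def smooth_trust(scores, past_weight=0.8):
    """Weighted moving average of trust scores.

    past_weight > 0.5 gives more weight to accumulated (past) experience
    than to the current observation, so small, sudden changes in the data
    properties produce only gradual changes in the trust curve.
    The value 0.8 is an illustrative assumption.
    """
    smoothed = []
    previous = None
    for score in scores:
        if previous is None:
            previous = score
        else:
            previous = past_weight * previous + (1.0 - past_weight) * score
        smoothed.append(previous)
    return smoothed

# Example: a single noisy dip barely moves the smoothed trust curve.
print(smooth_trust([0.95, 0.96, 0.60, 0.95, 0.94]))
```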
IX. RESULTS AND EVALUATIONS
To evaluate the trust metric, an experiment was set up to compare the trust metric to known statistical measures. The ARIMA forecasting model was used. Although multiple factors dictate a model's performance, it is argued that the quality of the data that goes into the model is of greatest importance [44]. The assumption is that as the data quality changes, so does the model's overall performance. Using the original gold reference stream, a trust score was calculated. An ARIMA model was also built and tested for the same stream. The results of the trust score were then compared to the RMSE and MAE. The process was repeated for the other generated streams.
The correlation measures were based on the Pearson correlation coefficient. As can be seen from both Table IV and Fig. 12, there is a strong negative correlation between the trust score and both the RMSE and MAE. As the quality of the data degrades, the performance of the ARIMA model decreases, which is indicated by the increase in the values of RMSE and MAE. The same relationship exists between the trust score and data quality: as the quality of the data degrades, so does the trust score. This shows that the proposed trust metric can act as an equivalent to known metrics such as the RMSE.
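A minimal sketch of this comparison, assuming hypothetical per-stream trust scores and ARIMA errors, is shown below; it simply computes the Pearson correlation between the two series.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-stream results: trust scores and the corresponding
# ARIMA errors (RMSE) for streams of decreasing quality.
trust = np.array([0.97, 0.93, 0.88, 0.81, 0.74])
rmse = np.array([0.42, 0.55, 0.71, 0.90, 1.10])

r, p_value = pearsonr(trust, rmse)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")  # expect a strong negative r
```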
It is important to note that the trust metric is calculated without reference to a gold standard. This presents a significant opportunity to measure data quality in shared IoT, where there is no gold reference to compare against. The RMSE and MAE metrics are based solely on the accuracy DQD. It was, therefore, not possible to evaluate how completeness and timeliness would affect such metrics. The trust metric, however, is a multidimensional metric that incorporates several DQDs depending on the user's needs and application. This further illustrates the need for new ways to evaluate data quality. Future work will explore how to effectively compare the trust metric to other multidimensional metrics that support several DQDs. Fig. 13 presents the data quality differences between the gold reference sensor and the low-quality sensor. The aim here was to differentiate between two data streams of known quality: 1) the gold reference and 2) the low-cost sensor. The green highlight in Fig. 13 shows how the trust of both the gold reference and the low-quality sensor decreases. During this period, the number of outliers increased by 2% for both sensors, while the number of missing values increased by 3% and 2% for the gold reference and low-quality sensors, respectively. This accounts for the reduction in the trust scores for both sensors.
In the first red highlight, there are inconsistencies between the trust scores for the gold reference and low-quality sensors, with that of the low-quality sensor being lower. For this period, the data authors [47] reported that after the 30th week (starting March 2004) there was sensor drift, which was later corrected by calibration of the sensors. This sensor drift was detected by the trust mechanism and resulted in a lower trust score for the period. After calibration, the trust score returns to high values. The detection of sensor drift and its impact on data quality has previously been shown to be difficult to measure.
In the last red highlight, there was a 13% increase in the number of missing values for the low-quality sensor. This explains the overall decrease in the trust score during this period. The trust metric, therefore, helps describe the quality of each data stream independently without relying on the other and shows the effects of different DQDs in a single metric. This type of comparison over multiple DQDs is not possible with existing techniques.
As part of the evaluation process, the impact of the trust computation framework on the data pipeline was monitored in terms of system resources (percentage CPU usage) and the delay introduced into the data pipeline by the trust calculation. Fig. 14 shows a higher percentage usage of compute resources for a data pipeline with trust computation when compared to the same job without it. On average, there was a 7.8% increase in CPU usage for jobs with trust calculation. There is also, on average, a delay introduced into the data pipeline due to the trust calculation. This is not an in-process delay, but rather the compounded total delay from the start of the job to the end. Notwithstanding this, the data is still delivered. The results reported here compare system performance as the number of nodes in the cluster increases (up to four worker nodes). Each worker node has six cores and 4 GB of memory. As shown in Table V, as the number of worker nodes increases, the delay decreases from 10.6 to 4.2 min. Since the delay is caused by an extra process (trust calculation), and Spark is a distributed processing engine, increasing the number of nodes in the cluster would minimize or even eliminate such a delay. This demonstrates the scalability of the system.
X. APPLICATION: WHEN TO DISCONNECT FROM A DATA SOURCE

Data deriving from IoT can help foster innovation and save lives. Performing analytics on this data is, however, challenging due to the heterogeneity, complexity, and dynamic nature of IoT. Therefore, businesses and organizations have to maintain redundant data sources, seek third-party sources, or, for the few that can afford it, install high-end sensors to mitigate such heterogeneity within the data.
The process of deciding which data source to engage and disengage is largely based on the quality of data from such a source. This is typically performed manually after the data has been processed. Data generation, collection, and delivery in IoT are automated, and so should be the process of selecting and maintaining a data source. Current data quality metrics are instantaneous; that is, a data source is evaluated based on its current state only, without previous context.
Given the dynamic nature, heterogeneity, and high volatility of IoT data, this would result in a high variance in network connections, because the application would disconnect from a data source each time there is a change in the data properties. This problem is exacerbated by the low bandwidth and intermittent network disruptions that affect most IoT deployments. The trust metric, however, has been modeled to mitigate this problem. One of the novel elements of the trust score is its experience metric, which incorporates the past and current context of a data source and also ensures that sudden changes in the data properties do not lead to sudden changes in the trust metric. This feature is further improved by the weighting strategy implemented in Section VIII-C.
To illustrate the application, data collected from weather stations between 2015 and 2019 in the United Kingdom was used. The data is produced at an interval of 15 min. Each weather station generates an average of 30 000 data points every year, and there are over 100 weather stations. The data includes air temperature, rainfall, relative humidity, and wind speed. Agricultural applications dynamically connect to and retrieve data from these stations.
In a data-shared IoT environment, applications should be able to dynamically select a data source, for example a weather station, based on their data quality needs. Figs. 15 and 16 show two applications, each with a different trust score threshold (0.98 and 0.95) and each connected to a different weather station. Each application would advertise its threshold. In each case, the application would automatically disengage from the data source when its trust score falls below the application's threshold and would automatically be connected to another data source of sufficient quality. While the thresholds shown in the figures are somewhat arbitrary, the method can be used to allow different applications to prescribe different levels of quality for their given use case. Each application and use case can set its own threshold, which can then be used to disengage from a data source.
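A possible way to automate this disengagement decision is sketched below. The threshold values and the "patience" parameter (requiring several consecutive low-trust evaluations before disconnecting) are assumptions added for illustration; the article itself only prescribes comparing the trust score to an application-defined threshold.

```python
def should_disengage(trust_scores, threshold, patience=3):
    """Decide whether an application should drop a data source.

    Disengage only if the trust score stays below the application's
    advertised threshold for `patience` consecutive evaluations, so a
    single noisy dip does not trigger a disconnection. Both the
    threshold and the patience value are illustrative assumptions.
    """
    recent = trust_scores[-patience:]
    return len(recent) == patience and all(s < threshold for s in recent)

# Example: an application with a 0.95 threshold tolerates one dip
# but disengages after a sustained drop.
history = [0.97, 0.96, 0.93, 0.92, 0.91]
print(should_disengage(history, threshold=0.95))  # True
```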
This kind of automation can be used in other applications, for example:
1) Automating data source repairs and maintenance: continued disengagement from a data source could mean that it is faulty.
2) Detecting sensor drift: a gradual decrease in data quality could be associated with sensor drift, so the sensor can be automatically reset to the point where its data quality was known to be good.
XI. CONCLUSION AND FUTURE WORK

This article has described an end-to-end implementation of a trust framework that can be used to estimate the quality of data. The implementation was evaluated using data collected from a real-world experiment.
The implementation is based on industry-standard data pipelines, with Kafka as the distributed messaging service and Apache Spark for real-time data processing. This is unique in that it integrates with all the stages of the big data model. To the best of our knowledge, it is the first end-to-end implementation of a data quality assessment framework. This implementation enables one to estimate data quality in cases where there is no gold standard to compare against. The other advantage is that data quality can be represented in a general manner throughout the big data model. To evaluate the system, the article compares the trust score to the RMSE and MAE. Using Pearson's correlation coefficient, the results and evaluations showed that the trust metric is a good metric for data quality assessment in cases where there is no gold standard, which is the case in data-shared IoT. Although the results have shown a slight increase in system resource usage in terms of CPU and execution time, this can be mitigated with distributed processing.
The article listed several challenges related to data quality assessment, both structural and method-related challenges. We have explored how some of these can be solved by the implementation described using real-world datasets. However, some challenges remain: for example, we equally combine the trust scores for the different phases. There is a need for a fusion algorithm that accounts for the contribution of each stage. Also, the mechanism for feedback and how this propagates from the RP is not yet fully formed. These and other challenges highlighted above form part of our future work.
|
2022-09-15T16:25:25.973Z
|
2022-10-15T00:00:00.000
|
{
"year": 2022,
"sha1": "2374b02076512130a1b6996161f03d2fba52a452",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/7361/4427201/09884973.pdf",
"oa_status": "HYBRID",
"pdf_src": "IEEE",
"pdf_hash": "d52dc5b7b420ec1f634325d4981d9bec6e5f96ac",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
1168969
|
pes2o/s2orc
|
v3-fos-license
|
Digital Commons@Becker
Background: The computational prediction of DNA methylation has become an important topic in the recent years due to its role in the epigenetic control of normal and cancer-related processes. While previous prediction approaches focused merely on differences between methylated and unmethylated DNA sequences, recent experimental results have shown the presence of much more complex patterns of methylation across tissues and time in the human genome. These patterns are only partially described by a binary model of DNA methylation. In this work we propose a novel
Background
The most important epigenetic modification of vertebrate DNA involves the addition of a methyl group to the carbon-5 of the pyrimidine ring of the cytosine in CpG dinucleotides (CpGs) [1][2][3].The methylation of DNA provokes a localized restriction of transcription that can be used for the selective silencing of genes.This form of transcriptional control is mediated by regulatory regions termed CpG islands (CGIs) which overlap the promoter of all human housekeeping genes and over half of all tissue-specific genes [4][5][6][7].CGIs are the only regions in the human genome that are rich in unmethylated CpGs [5] and therefore represent a notable exception to the almost "global" methylation that affects the bulk of the genome and has, over the time, resulted in the depletion of CpG dinucleotides from it.
The methylation of CGIs is associated with a host of normal and cancer-related processes [8][9][10][11][12][13][14][15][16][17][18][19], making them an important target for large-scale studies of DNA methylation that aim to shed light on their role in the epigenetic control of gene expression [20,21].Nevertheless, measuring DNA methylation experimentally involves procedures that are time-consuming, expensive and have only recently been scaled to genome-wide approaches that maintain a high degree of resolution [3,22,23].Computational solutions to the genome-wide prediction of CGI methylation would therefore be a great aid [24].However, the characteristics that make a sequence susceptible or resistant to methylation are not completely understood.
Recent studies employing supervised machine learning methods account for differences between methylated and unmethylated DNA sequences [25][26][27][28]. However, recent experimental results have shown the presence of much more complex patterns of methylation in the human genome [29]. Since these patterns may vary across tissues and developmental stages, they are only partially described by a binary methylation model [30]. Current computational methods therefore distinguish well between constitutively methylated and unmethylated CGIs, but do not take tissue-specific CGI methylation into account. This is a significant source of uncertainty, since their prediction models were trained on a heterogeneous mixture of constitutive and tissue-specific methylation. This could mask the characteristics that truly discriminate between CGIs that are methylated or unmethylated in all tissues. One of the primary causes for this situation was the lack of high-resolution methylation data from multiple healthy human tissues [25]. This impeded the discovery of tissue-specific CGI methylation classes and the key characteristics that predispose certain CGI sequences to either constitutive or tissue-specific methylation.
The data of Human Epigenome Project (HEP) [31] gives us the opportunity to try and resolve this issue.They specify the methylation status of more than 30000 individual CpGs from the human chromosomes 6, 20 and 22 in twelve healthy tissues and cell types, therefore representing the highest-quality source of experimental methylation data currently available [31] and have been used before to gain insights into the epigenetic variability of the human genome [29].In this paper we use this information to identify novel profiles that map DNA sequence, structural, physicochemical and evolutionary attributes of CGIs into methylation profiles.They clearly distinguish CGIs that are constitutively methylated or unmethylated from CGIs that show a tissue-specific degree of methylation.At the same time, these profiles provide important insights into the key attributes that determine if a CGI has a functional role in the epigenetic control of gene expression and is predisposed to become methylated during normal cellular differentiation.
Results and discussion
In recent years, there have been several successful efforts at predicting the methylation status of CGIs.These methods use different combinations of DNA patterns and attributes to classify them as methylated or unmethylated [25][26][27]32], but do not take tissue-specific CGI methylation into account.In this work we elucidate intermediate subclasses of CGIs with a differential degree of methylation that better reflect the complex methylation patterns of the human genome.
Our approach is primarily a mining process carried out in the absence of supervised data [33][34][35][36][37] (Fig. 1) that identifies associations between two different data domains and uses these associations to label the database. First, we independently cluster two different data domains: CGI methylation and sequence-related attributes. Then, both domain-clusters are related and evaluated based on the probability of intersection using the hypergeometric measurement. This measure helps to uncover cohesive relational clusters while avoiding the conditional decisions (often derived from the use of conditional probabilities) of clustering first one domain and then the other, or vice versa [34].
Once the database is labeled, after a summarization process, we proceed with the corresponding feature selection of the new methylation classes and inference steps with the creation of a classifier with the newfound knowledge.
Profile identification
We selected CGIs that were covered to over 70% by the CpGs in the HEP dataset, requiring their methylation status to be defined in at least 2 tissues. This measure ensured a balanced dataset, where more than two-thirds of the CGIs were defined in all tissues and 493 (95% of the CGIs) were defined in at least 10 tissues (Table 1, Additional file 1).
The CGIs were identified by the CpGcluster algorithm [38], which does not rely on the traditional three parameters of length, GC-content and O/E-ratio [39].Instead, it searches for closely spaced CpGs and computes the probability of finding a cluster with the same length and number of CpGs in the genome (p-value).CpGcluster has a high degree of sensitivity for detecting known, functional CGIs, while at the same time, excludes spurious repetitive elements [38].Over 97% of the CGIs in our dataset were covered by a CGI predicted using the traditional parameters [39], while the inverse ratio is 71%.This lower degree of coverage is due the high specificity of CpGcluster that excludes more false-positives, such as Alu-repeats, than methods based on the traditional thresholds.
We then characterized each of the CGIs in our dataset using 38 attributes belonging to four distinct categories: (1) CGI-specific attributes (e.g. their G+C content, Observed/Expected ratio and CpGcluster p-value); (2) Repetitive sequences (number and type of repetitive elements); (3) Evolutionary conservation (e.g. PhastCon content); as well as (4) Structural and physicochemical properties of the DNA itself (e.g. twist, tilt, roll, shift, slide and rise) [40]. A global linear analysis of the attributes (PCA; Additional file 2) showed that all of them contributed significantly to the overall variability of the dataset. Therefore, all raw data were used in further steps because they provide a more interpretable characterization.
Then attribute and methylation data were independently clustered by both hierarchical and k-means clustering methods.The validity indices that define the appropriate number of clusters were not conclusive, as shown in Table 2. Thus we selected cluster partitions yielding more than two clusters using the best two partition scores (see Methods).
In order to determine a combination of biological CGI attributes that naturally intersected with a specific pattern of methylation, we linked the two pairs of clusters by calculating the probability of intersection (PI) and employing a significance p-value < 0.05.This approach optimizes the cluster partitions based on the coincidence between independent clusters [37] instead of intrinsic intra/inter clustering measurements [41].The application of this unsupervised process to our dataset identified 55 significant intersections (profiles) where two independent clusters had more CGIs in common than would be expected by chance (Additional file 3).These 55 profiles are redundant due to the fact that partitions from distinct numbers of clusters were allowed in the former step.Therefore, a cluster from one domain might be related to more than one cluster from the other domain and vice versa.We removed this redundancy (Figure 2a) by grouping the 55 profiles and selecting a representative prototype from those that recognize similar observations.The process resulted in 9 nonredundant profiles (PBC) (Figure 2), which demonstrate clear patterns of tissue-specific methylation (Table 3) associated with distinct biological characteristics (Table 4).The attribute values in Table 4 were normalized between 0 and 1.This normalization is performed before the clustering process in order to prevent bias clusters caused by attributes with high absolute values.The significance at a p-value < 0.05 is relative to these normalized values.The nonnormalized values are available in the supplementary information.The number of CGIs recovered with each profile is registered in Table 5.
Finally, we labeled the observations using the corresponding methylation patterns (Figure 3a, Figure 3b) into four methylation classes: Constitutively methylated, containing CGIs that are highly methylated in all tissues (Profile 1); Unmethylated in sperm contained CGIs that only lacked methylation in sperm (Profile 2); Differentially methylated contained CGIs that showed a distinct degree of methylation for each tissue (Profile 3); and Constitutively unmethylated, comprising CGIs that are uniformly unmethylated across all tissues and cell types (Profiles 4 through 9).
The initial PCA analysis suggested that all attributes were informative. However, we considered the data labeled in the previous step and applied a local feature selection analysis based on the entropy of the attributes (i.e., decision tree) for dissecting each new methylation class. The most relevant attributes were the novel ones included in this study: PhastCon content, the p-value computed by the CpGcluster algorithm, and structural characteristics of the CGIs describing their three-dimensional flexibility, such as twist, tilt, roll, shift, slide and rise of a DNA sequence from a novel model of dinucleotide stiffness. Moreover, a correlation analysis of all attributes shows that each relevant attribute was not replaceable by any other (Additional file 6). Here we describe the most relevant attributes for each class.
Constitutively methylated class
CGIs in this class had a high average degree of methylation in all tissues (avg_meth > 0.8), and reflect PBC1. This class is described by a higher average content of CA and TG dinucleotides, which can be seen as the "footprint" of methylation, since they are often the result of the deamination of methylated cytosine. The most notable difference between the constitutively methylated class and the other classes, especially the constitutively unmethylated, is its higher overlap with PhastCon elements. This attribute never reaches values greater than 0.1 in the former class, while it never falls below 0.5 in the differentially methylated classes. A high PhastCon overlap was originally seen as a sign of a potential conserved functional regulatory element [38,42]; however, their high degree of methylation poses limits on their functionality by restricting access to the DNA.
Constitutively unmethylated class
The CGIs in this class have a low avg_meth (≤ 0.2) in all tissues; it summarizes PBCs from 4 through 9. Previous results have shown a negative correlation between the concentration of CpGs and a high degree of methylation, and this idea has been used as the starting point for methylation prediction.However, our findings agree with the new experimental work of Raykan et al. [43].
The authors find that DNA methylation can occur at high-, medium-and, contrary to previous notions, at even some low-CpG density promoters.It has been also found that certain promoters with few CpGs were active and methylated, whereas other promoters of that group can be unmethylated when active [44].These data suggest that DNA methylation is involved in regulating activity over a broad range of CpG O/E-ratios, including CpG-poor promoters, located in tissue-specific differentially methylated regions (tDMRs).
All constitutively unmethylated PBCs are very similar, with exception of the unexpectedly high content in GCrich repetitive elements of PBC 9.Only 6% of the CGIs supported this PBC indicating that the combination of attributes learned from the few remaining repetitive elements is not representative of a large subset of CGIs.This is a consequence of the HEP selection process, which excluded most of these repeats [31].However, in our results, 36 out of the 46 CpG islands overlapping with a repetitive element, overlap either with the extended promoter region (14 CpG islands) or with the TSS of a gene (22 CGIs).Of these CGIs, 32 (70%) are unmethylated despite the presence of a repeat.In fact, all TSS-overlapping CGIs are unmethylated regardless of the repeat in their vicinity.The presence of a promoter seems to be incompatible with the methylation of these repeats.This small set agrees with the recent experimental findings of Meissner et al. [30], where it was proven that not all transposable elements are equally affected concerning methylation.Long interspersed nuclear elements (LINEs) and long terminal repeats (LTR) are generally methylated independently of CpG density.However, CpG density influences whether nonautonomous short interspersed nuclear elements (SINEs) and low complexity regions remain unmethylated, which may be a reason for the low degree of methylation of the repetitive elements of profile 9.These results show that CGIs with distinct biological characteristics can share the same methylation status (Figure 2a) and that the degree of CpG enrichment or the presence of a repetitive element alone does not determine if a sequence is protected against methylation.
The CpGcluster p-value is a key attribute for distinguishing between constitutively methylated and unmethylated CGIs.This measure distinguishes true CGIs from repetitive Alu-elements, which are the main source of false-positive CGI predictions, and is not linked to either G+C content or the O/E ratio.This is important because it can determine the significance of a CGI independently of changes in the G+C content of the genomic sequence and is therefore not affected by fluctuations in the sequence composition.G and C-containing dinucleotides (CpC, CpG, GpG) in conjunction with a reduction in A and T-containing dinucleotides (ApA, TpT) and the curvature are also important attributes for this class.The high contents of G and C dinucleotides leads to an increasing degree of sequence bending and reduces both the macroscopic curvature of the DNA as well as the amount of energy needed to separate the strands (stacking energy).Despite the increasing G+C content, the O/E ratio decreases continuously (Figure 2c), indicates that there is a balance between CpG enrichment on one hand and the overall G+C content on the other.
In addition to the previously known classes we found two tissue-specific methylation classes:
Unmethylated in sperm class
These CGIs lack methylation in sperm, and have a slightly lower degree of methylation in CD4 T lymphocytes, dermal fibroblasts, dermal keratinocytes and the muscle tissue of the heart.Nevertheless, sperm is the only tissue that shows a level of methylation below 0.2 making it the defining characteristic of this class.This agrees with known normal development and control of sperm-specific gene expression of germ cells [45], as well as with their epigenetic reprogramming during gametogenesis.
Differentially methylated class
In contrast to the classes constitutively methylated and unmethylated in sperm, this class showed a heterogeneous degree of methylation ranging across all tissues.It presented both highly methylated and unmethylated CGIs in the same tissue yielding an intermediate degree of methylation after averaging.This average was significantly lower than that of the CGIs unmethylated in Sperm.This class also presented the lowest average distance between CpGs, the highest CpG O/E value, G+C A CGI was defined as being misclassified if it lacked an experimental methylation value in sperm but was classified nonetheless as constitutively methylated or unmethylated solely in sperm.CGIs that were missing methylation data in more than half of tissues and cell types under analysis were also defined as misclassified and removed.content, and CpG and GpC dinucleotide content than the constitutively methylated and unmethylated in sperm classes.This class presented unique structural characteristics, such as low degrees of bending and curvature, as well as a high degree of solvent-accessibility of the DNA backbone (DNA cleavage), which may indicate a higher degree of permissiveness for DNA binding.DNA sequence bending is generally higher if the sequence contains phased GGGCCC sequences, and therefore the bending should be higher for sequences rich in G+C.This does not apply to this class where alternating CpGs and GpCs limit bending, curvature, twist and the number of helix turns, but tends to increase the flexibility of the sequence.
Functional CGI categorization by gene association
In order to assess the functionality of the four newly defined methylation classes, we measured the coincidence between them and defined gene association classes (Table 6) using probability of intersection (PI) (Table 7).The PI is usually employed to perform coincidence analysis because it is a context-sensitive metric that takes into account the domain where the intersection is calculated.It measures the degree of inclusion of one set into another, considering both the number of instances intersected between two sets as well as those instances not belonging to the intersection.Formally, the PI determinates the p-value, which is the statistical significance of observing over represented intersected instances (i.e.occur more frequently than could be expected by pure chance).The results obtained show that the new classes represent unique, functional CGI profiles associated with distinct CGI methylation patterns and gene compartments (Table 8).
The CGIs in our dataset were associated with 497 protein-coding gene loci (Table 6).While the vast majority of the CGIs (> 68%) were located in the vicinity of a promoter, less than 4% of the CGIs were outside the genic environment, indicating that the dataset is skewed towards genic regions.This is a consequence of a bias in the HEP data, which includes few intergenic regions [31].Normally a higher percentage of CGIs would be found outside of the genic region in an unbiased dataset [38].Approximately 20% of the CGIs are located in the gene-body and we found that these CGIs were unequally distributed, since there were approximately 30% more CDS-overlapping CGIs than those located in introns.
Moreover, it is known that CGIs associated with the gene-body are susceptible to both constitutive as well as tissue-specific methylation [2,43,46]. However, by separating between coding and non-coding regions, we were able to distinguish between highly methylated and differentially methylated CGIs.

Constitutively methylated class
This class coincided significantly with the CDS of the genes in our dataset. These CGIs are highly conserved and may be the result of GC-rich codons simulating the presence of a CGI. The methylation of a CDS region itself, in contrast to a TSS region, does not impede the progression of transcription, making this region permissive for both methylation and compact chromatin conformation.
Differentially methylated class
This class coincided significantly with CGIs located in introns, indicating the presence of functional methylation-dependent sequence elements.Though the majority of the differentially methylated CGIs that were conserved overlapped with the CDS (Table 9), we found that they were the only class of CGI that was significantly enriched in highly conserved non-coding elements (HCNEs) [47] (Additional file 5).Since the differential methylation of HCNEs has recently been shown in a comparison of embryonic stem cells (ES) and ES-derived differentiated cells in mouse [30] these differentially methylated CGIs may represent examples of enhancers that are controlled by methylation [48,49].
Although this supports the view that germline-specific genes are preferentially methylated in somatic tissues [44], the only significant intersection with a gene-class was with the pseudogene-proximal CGIs, 22% of which were unmethylated in sperm.Only 12% of all CGIs were associated with pseudogenes and the majority of them represent "processed" pseudogenes (> 64%).This may still include parts of the core promoter region, including the promoter-overlapping CGI [53].Their lack of methylation in sperm was thought to be a by-product of the global changes in methylation that occur during spermatogenesis [45].Although it may also permit them to be transcriptionally active in sperm [44] they are normally targeted for silencing through methylation during differentiation and therefore show a high degree of methylation in somatic tissues [45].This complicates the identification of CGIs that are involved in controlling the sperm-specific expression of protein-coding genes via promoter-CGI methylation such as the MAGE and HAGE-genes [9,54] because it may lead to false positives in genome-wide studies of promoter methylation.For example, DPPA5 a functional testis-specific gene [55], is active in pluripotent cells and down-regulated during the differentiation process [56], but we found that it contains a CGI and a pseudogene within its 5'UTR.Therefore it is not clear if the lack of methylation of this CGI is necessary to maintain tissue-specific activity or simply a by-product of the pseudogene in its vicinity.
Constitutively unmethylated class
This class showed significant coincidence with the CGIs overlapping the TSS.However, they showed neither the highest G+C-content nor the highest O/E ratio of the whole unmethylated group.Instead, they showed the lowest average CpGcluster p-value, further supporting the use of this attribute as a better measure of functionality than the CpG enrichment or G+C content alone [38].
This categorization of the methylation classes was used to re-classify the dataset into the four functional methylation classes shown in Table 8.The promoter/ TSS proximal CGIs represent the vast majority of all CGIs and they are predominantly unmethylated.It has been estimated that about 18% of the CGIs in the human genome are subject to tissue-specific methylation [43] and we found our data to support this estimate since just over 17% of the CGIs were either unmethylated in sperm or differentially methylated.It is noteworthy to mention that neither of these two classes overlapped significantly with the promoter-proximal CGIs.
Prediction of CGI methylation
Our four functional CGI methylation classes were then employed in the development of a supervised classifier.
Our hypothesis was that if these classes were biologically and computationally significant, they would be useful in predicting new observations.To do so we first labeled each observation using the classes assigned by the profiles.We then used a simple classifier (i.e.decision tree) that employs 23 of the 38 attributes to predict the four classes of methylation.The methods were tested via 10-fold cross-validation where imbalanced classes were compensated [57] (see Methods).
These results show that the decision tree can encode rules that predict CGIs with distinct methylation patterns at a high level of accuracy (Table 9, Additional file 4). It is difficult to assess the performance of our method compared to previous computational approaches because they all constrain the prediction to only two methylation classes, where a sequence is either "methylated" or "unmethylated" across all possible tissues or cell types [26,32], and they do not take tissue-specific methylation into account.
The ability of our approach to predict methylated and unmethylated CGIs, was then directly compared to the results obtained from EpiGRAPH [32], which were tested on our HEP-based CGI methylation data and the methylation data used by the EpiGRAPH system (Table 10).In order to compare the methods we used a binary methylation classification system as described in the Materials and Methods section.
Both datasets were classified using two methods, SVM and decision tree, the latter with two different implementations: the EpiGRAPH C4.5 and the Matlab decision tree (CART, version R2007a). In all cases we used default parameters.
The results obtained using both datasets are very similar, independently of the classifier used. Thus, we suggest that our attribute set is capable of predicting methylated and unmethylated CGIs. (Table 10 footnotes: one result line reads "Matlab - Decision tree 89.39 0.707"; validation was performed using 10-fold cross-validation, in one case with 10 repetitions; the three datasets were the HEP CGI data with our attributes and binary methylation classes, the EpiGRAPH methylation data with the default EpiGRAPH sequence attributes and binary methylation classes, and the HEP CGI data with our attributes and four methylation classes; the average accuracy (Acc) and correlation coefficient (CC) were used to measure fitness.)
Unexpectedly, a different implementation of a simpler method such as a decision tree, obtained a better accuracy and CC than the more complex SVM with the default setting parameters.In addition, this method produces interpretable rules that can be used for a better understanding of the data and easily extended to the use of multiple classes.
As shown in Table 10, we are able to predict four different classes with an accuracy close to that of the binary methylation prediction. The comparison with a more sophisticated classifier suggests that the new information, in terms of CGI attributes and tissue-specific methylation classes, is the key factor that improves the CGI classification, rather than the classifiers themselves (i.e., method bias) [57].
Conclusion
The analysis of DNA methylation has been based primarily on the use of binary models which predict DNA sequences to be methylated or unmethylated.We have presented a profile-based approach that is able to define novel CGI methylation data relationships which not only separated between constitutively methylated and unmethylated CGIs but also identified CGIs showing a differential degree of methylation across tissues and cell-types or a lack of methylation exclusively in sperm.Our approach differs fundamentally from previous studies since it does not specify CGI classes a priori.Instead, it employs unsupervised data clustering methods for the detection of groups of CGIs sharing a common tissue-specific degree of methylation as well as similar attributes.These types of clustering methods avoid the potential biases of the limited CGI dataset available here since they do not require pre-determined classes in order to detect homogeneous groups within the data.
The functional CGI profiles discovered in this work bring new insights into the features associated with CGI methylation susceptibility, which included their evolutionary conservation, their significance, as well as the evolutionary evidence of prior methylation.Moreover, the usefulness of this information in building a simple classifier demonstrated that the ability to predict CGI methylation is mostly based on the biological information and the relationships uncovered between different sources of knowledge.This information can be exploited for the improvement and development of new tools able to detect not only constitutive or tissue-specific CGI methylation with equally high degrees of accuracy, but CGI functionality across the genome as well.
Contrary to previous studies, our method does not rely on ad hoc thresholds in order to determine if a CGI is constitutively methylated, unmethylated or shows a tissue-specific degree of methylation [25,31,32,43].
This yielded a series of novel, functional CGI profiles that allowed us to measure the extent of tissue-specific CGI methylation within the genic environment.We found that the different functional regions of genes were not equally affected by methylation.Furthermore, we were able to determine biological attributes that influence both the functionality and the methylation status of the CGIs, allowing us to use this knowledge for the computational prediction of their methylation.
In addition to the insights provided by our approach we demonstrate that the attribute set used is able to predict four methylation classes conserving the accuracy provided by leading binary methylation classification methods.
Tissue-specific CGI methylation
The methylation data of the Human Epigenome Project (HEP) were used for the analysis of tissue-specific CGI methylation [31].They specify 1.9 million CpG methylation values, from 2,524 sequences ("amplicons") across human chromosomes 6, 20 and 22. Methylation levels were measured in 12 different healthy tissues and cell types.Methylation values stemming from the same tissue and CpG were averaged and only unique CpGs, whose methylation status has been measured in at least one of the twelve tissues, were retained.The CGIs selected for this study were determined via the CpGcluster algorithm [38] and the degree of CGI methylation was then calculated by averaging all methylation values per CGI.This was done separately for each of the twelve tissues and only CGIs that had at least two tissues where over 70% of their CpGs were defined by a methylation value were then included in the database.In order to minimize the impact of missing methylation values during the detection of the CGI methylation profiles, a CGI was determined to have a degree of methylation of 0.5 if it was not defined in a particular tissue.This value indicates neither a high nor low degree of methylation and introduces the least amount of bias without having to limit the database to CGIs whose methylation status was known in all 12 tissues.
CGI biological attributes
Biological attributes belonging to the following categories were then used to characterize each of the CGIs in the database: (1) CGI-specific attributes: This category included the p-value calculated using the CpGcluster algorithm as a measure of CGI significance, the CpG Observed/Expected ratio (O/E ratio), the sequence content in Guanine and Cytosine (G+C content) [58], as well as the average distance between the CpGs of each CGI (CpG-distance), which is a measure of CpG spacing. Furthermore, it included the standard deviation of the CpG distances (SD), calculated as

SD = sqrt( Σ_i (χ_i − c)² / (N − 1) )

where N is the number of CpGs in the CGI, χ_i is the distance between two consecutive CpGs, and c is the average distance between neighboring CpGs of a CGI (a computational sketch of these distance statistics is given after this attribute list). Furthermore, the frequency of the 16 possible dinucleotides was measured (Dinucleotide content).
(2) Repetitive sequences: This category included both the number of repetitive elements intersecting with a CGI (Repetitive elements) and the fraction of a CGI covered by a repetitive element (Repetitive content).The human repetitive elements were identified using the RepeatMasker program [59].
(3) Evolutionary conservation: Conservation was measured via the fraction of each CGI overlapping with a PhastCon and the number of PhastCon elements per CGI.The PhastCon elements we used were highly conserved across 17 vertebrate genomes [60].We obtained the "most conserved"PhastCons that demonstrate a log-odds conservation score of 100 or better via the UCSC Genome Browser [61].
(4) Structural and physicochemical properties: This category included the local sequence bending (Bending) and the macroscopic sequence curvature (Curvature), calculated using the banana algorithm from EMBOSS [62].Furthermore it included the four attributes quantifying the number of DNA helix turns (Turns), the number of bases per turn (Bases per turn), the degree of DNA sequence twist (Degree of twist)and the base-pair stacking energy (Stacking energy), measured in kilocalories per mol, were calculated via the btwisted algorithm of the EMBOSS toolkit [62] and averaged over the length of the sequence.The stacking energy is measured in negative kcal/mol and the normalization was performed to between 0 and 1, values close to zero indicate that higher energy is needed to separate a region of doublestranded DNA.
The amount of DNA cleavage (DNA cleavage) indicates the solvent-accessible surface area of the DNA [63,64]. This information was provided for each individual CGI by Thomas D. Tullius of the Department of Chemistry and Eric Bishop of the Program in Bioinformatics at the University of Boston. DNA cleavage was computed by averaging the single nucleotide cleavage values over the length of the CGI. The remaining attributes are based on a recent method described in Goni et al. [40] for calculating the six helical force constants used to measure the average deformability of the CGI sequence: the rotational parameters twist, tilt and roll (measured in kcal/mol degree²), and the translation-related parameters shift, slide and rise (measured in kcal/mol Å²) [65].
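As a small illustration of the CGI-specific distance attributes described in category (1), the following Python sketch computes the mean CpG spacing c and its standard deviation SD from a list of CpG positions; the exact divisor used in the original study is not stated, so the (N − 1) divisor here is an assumption.

```python
import math

def cpg_distance_stats(cpg_positions):
    """Mean distance between consecutive CpGs (c) and its standard deviation (SD).

    cpg_positions: sorted genomic positions of the CpGs in one CGI.
    With N CpGs there are N-1 consecutive distances chi_i; dividing the
    squared deviations by (N - 1) is an assumption about the original formula.
    """
    n = len(cpg_positions)
    if n < 2:
        raise ValueError("need at least two CpGs")
    distances = [cpg_positions[i + 1] - cpg_positions[i] for i in range(n - 1)]
    c = sum(distances) / len(distances)
    sd = math.sqrt(sum((chi - c) ** 2 for chi in distances) / (n - 1))
    return c, sd

print(cpg_distance_stats([10, 18, 25, 40, 52]))
```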
Prior to the profile searching, we performed a filtering of potentially uninformative CGI attributes via the Principal Components Analysis (PCA) method [66] using the Spotfire® DecisionSite® system [67]. The principal components that included 90% of the cumulative Eigenvalue were then chosen for further analysis. The contribution of each attribute to the principal components was analyzed via the eigenvector plots shown in Table 2 of the Supplementary data. CGI attributes that did not have a coefficient value greater than 0.1 or smaller than -0.1 in any of the selected principal components [28] were determined to be uninformative and removed from the database. Though the principal components themselves represent a reduction of the dimensionality of the data, meaning that they capture the same variability of the data but with fewer attributes, they were not used for any of the cluster analyses, since neither the CGIs nor the actual values of the attributes that form part of the principal components are directly identifiable.
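The PCA-based filtering step could be reproduced along the following lines; the function and parameter names are assumptions, and the sketch uses scikit-learn rather than the Spotfire DecisionSite system used in the study. X is assumed to be the matrix of already normalized attribute values.

```python
import numpy as np
from sklearn.decomposition import PCA

def uninformative_attributes(X, names, variance=0.90, cutoff=0.1):
    """Return attribute names whose loadings never exceed +/- cutoff in the
    principal components covering `variance` of the cumulative variance,
    mirroring the filtering criterion described above."""
    pca = PCA().fit(X)
    # number of components needed to reach the cumulative variance target
    k = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), variance) + 1
    loadings = pca.components_[:k]                # shape (k, n_attributes)
    informative = (np.abs(loadings) > cutoff).any(axis=0)
    return [name for name, keep in zip(names, informative) if not keep]
```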
Genes and pseudogenes
The CGIs were assigned to 7 classes based on their association with pseudogenes acquired from http://www.pseudogene.org/ and protein-coding genes annotated in the AceDB [68], which summarizes all curated cDNA data from GenBank [69], dbEST [70] and the RefSeq [71]. This database was chosen because it provides a richer view of the human transcriptome, with three to five times more transcripts than the UCSC Known Genes, RefSeq or Ensembl annotations [68].
CGIs overlapping with a TSS or the promoter proximal region were defined as two separate classes (TSS, Promoter), since the TSS-overlapping CGIs are generally thought to have a higher G+C content and higher degree of CpG enrichment than non-TSS overlapping CGIs even if they are in the vicinity of the promoter [72]. In addition, we determined if a CGI was part of the 3'UTR (3'UTR) and separated purely intronic CGIs (Intron) from those overlapping partially or completely with a CDS on either strand (CDS). [77] if not indicated otherwise. The hierarchical clustering was performed using the Euclidean distance and the complete linkage approach. The number of clusters was calculated using the inconsistency threshold (ICT) and coefficient [75] as validity indices [78]. The k-means clustering was performed using the Euclidean distance.
To reduce the sensitivity of the algorithm to the initial random cluster centroids, each of the k-means runs was repeated ten times and the best solution was chosen.We used the silhouette method [79] to estimate the number of clusters.The potentially optimal number of k-means clusters was then chosen in order to maximize the average distance between silhouette means (ADSM).
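A simplified sketch of the k-means model selection is given below; it repeats each k with several random initialisations and scores partitions with the average silhouette, which approximates (but is not identical to) the ADSM criterion described above. The function name and the candidate range of k are assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(X, k_range=range(2, 11), n_init=10, seed=0):
    """Pick a candidate number of k-means clusters by silhouette score.

    Each k is run with n_init random initialisations, keeping the best
    solution, as described in the text; the highest-silhouette k is
    returned along with all scores for inspection.
    """
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=n_init, random_state=seed).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get), scores
```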
Combining clusters from independent sources of information into CGI profiles
The probability of intersection (PI) was used to determine the most significant intersections between the attribute and methylation data clusters. It is based on the hypergeometric distribution and is an adaptive measure that is sensitive to small sets of examples while retaining specificity with large datasets. The PI is more sensitive to relationships between smaller but highly similar groups than other measures that are based solely on the number of instances in the intersection [37,80]. This measure gives the chance probability of observing at least p candidates from a profile V_i within another profile V_j [37] as

PI = Σ_{k=p}^{min(h,n)} [ C(h, k) · C(g − h, n − k) ] / C(g, n)

where C(·,·) denotes the binomial coefficient, V_i represents a cluster of CGIs defining a profile of size h, V_j is a cluster of CGIs defining a profile of size n, p is the number of CGIs in the intersection between the two clusters, and g is the total number of CGIs in the database. The PI was computed using custom Matlab® scripts.
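The same upper-tail hypergeometric probability can be expressed compactly in Python, as a stand-in for the custom Matlab scripts mentioned above; the example sizes are arbitrary.

```python
from scipy.stats import hypergeom

def probability_of_intersection(p, h, n, g):
    """P(at least p common CGIs) for clusters of sizes h and n drawn from
    g CGIs in total, i.e. the upper tail of a hypergeometric distribution."""
    # sf(p - 1) = P(X >= p) for X ~ Hypergeom(M=g, n=h, N=n)
    return hypergeom.sf(p - 1, g, h, n)

# Arbitrary illustrative sizes: 15 shared CGIs between clusters of 40 and 60
# drawn from a database of 520 CGIs.
print(probability_of_intersection(p=15, h=40, n=60, g=520))
```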
Summarizing profiles and labeling instances from the datasets
Identifying the right number of clusters is an unsolved problem [78,81].As expected, different validity indices [78] as well as distinct clustering methods provide inconclusive results.Therefore, we selected more than one option (2.1) and instead of optimizing typicality inter and intra clustering measurements we optimized the posterior probability of matching between two different sources of clusters (2.3).This process generated several redundant profiles originating from redundant clusters.To remove this redundancy, we re-clustered the centroids [34] of the profile methylation classes and obtained a reduced set of classes including constitutive unmethylated, constitutive methylated, unmethylated in sperm, and differentially methylated.Then, we replaced the original classes in the profiles by the reduced ones and used them to label each instance in the CGI sequence attribute database.In other words, we transformed an unsupervised problem into a supervised one [73].
Selecting the relevant features from each profile
This transformation into a supervised problem (i.e., labeled data) allowed us to apply typical feature selection strategies to identify the most relevant attributes for each profile.We use the entropy as a discriminative measurement [73] implementing a decision tree [73,82] (CART, Matlab version R2007a) with default parameters.For each profile we use the labeled data covered by it.This process was locally carried out for each profile to identify which are the relevant features for a particular methylation pattern.This process could also be performed by a global decision tree including all profiles but with less interpretable results (i.e.very long rules).
Predicting CGI methylation from profiles
The functional CGI methylation profiles and the labeled observations (2.4) were used for prediction with a classification tree with default parameters, as described above. The classifier performance was evaluated using 10-fold cross-validation (crossvalind, Matlab version R2007a), reporting the accuracy (Acc), which represents the fraction of CGIs whose methylation profile was predicted correctly (equation 3),

Acc = (number of correctly predicted CGIs) / (total number of CGIs),

and the correlation coefficient (CC) on the test subset, which combines both sensitivity and specificity (equation 4). To compare results with other data sources we used another implementation of the decision tree in EpiGRAPH [32] and the support vector machine classifier from the same source. The CC was only used in the binary classification, where a CGI is classified as "methylated" if it was not constitutively unmethylated. Finally, we compensated for the unbalanced number of observations per class in the non-binary experiments by oversampling the unmethylated in sperm and differentially methylated classes [57].
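A hedged sketch of this evaluation protocol is shown below. It uses a default-parameter decision tree with 10-fold cross-validation and reports overall accuracy for the four-class problem plus a Matthews correlation coefficient for a binary relabelling; the article does not state that its CC is exactly the Matthews coefficient, the class label strings are hypothetical, and the oversampling step is omitted here for brevity.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import accuracy_score, matthews_corrcoef

def evaluate_profiles(X, y, seed=0):
    """10-fold cross-validated accuracy for the four-class problem, plus a
    binary correlation coefficient (methylated vs. not), as a stand-in for
    the evaluation described above."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    pred = cross_val_predict(DecisionTreeClassifier(random_state=seed), X, y, cv=cv)
    acc = accuracy_score(y, pred)
    # Hypothetical label name: anything not constitutively unmethylated
    # counts as "methylated" in the binary view.
    binary_true = np.asarray(y) != "constitutively_unmethylated"
    binary_pred = np.asarray(pred) != "constitutively_unmethylated"
    cc = matthews_corrcoef(binary_true, binary_pred)
    return acc, cc
```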
Figure 1
Figure 1 Overview of the profile-based approach to the analysis of tissue-specific CGI methylation.
Figure 2
Figure 2 Determining non-redundant CGI profiles.Elimination of redundant CGI profiles.Initially, 55 profiles (relations between CGI sequence attributes and methylation classes linked by the probability of intersection) were identified.We grouped all profiles recognizing the same observation using a column/row hierarchical clustering, and summarize each cluster by their most representative prototype (i.e., the most supported relation of each cluster).The validity index we used (see methods) suggests a partition into 9 final profiles.
Figure 3
Figure 3 Linking clusters and Feature selection of new methylation classes.Summarization and feature selection of CGI profiles.A) Identification of 9 CGI profiles by linking CGI sequence attribute clusters (lower left corner) and methylation clusters (upper left corner) by the probability of the intersection (PI), which is calculated based on the hypergeometric measurement (blue color).The attributes were normalized within the colourmap intervals.Notably, the relations are built based on the PI (line color; dark blue: low p-value; light blue: high p-value), which substantially differs from the typical support of intersection measurement (line weight; thin: few; tick: many).For example, the fifth relation (5th column from left) is supported by just ~40 observations (thin line) but most of the CGI sequence attribute observations correspond to the 4th methylation class and only few belong to others classes.This approach can generate cohesive relations even if they aren't highly supported.The nine methylation profiles are summarized by similarity of their prototypes, constituting 4 final methylation classes (I-IV).These classes were used to label all CGI sequence attributes observations.B) Feature selection for each class based on the dataset labeled in A).This process has been carried out locally by using decision trees (Matlab) where the desired class (labeled read leaf) was distinguished from all of the others (unlabeled black leaf).
Table 1: Unique CpGs and CGIs defined by the HEP data. *Number and percentage of CpGs measured in at least one of the twelve tissues. **Number and percentage of CGIs where at least 70% of the CpGs were covered by the HEP data, in at least two tissues.

Table 2: Validity indices used to estimate the optimum number of data clusters.

Table 3: CGI profiles: methylation values of each non-redundant CGI profile. Average and tissue-specific methylation values of the nine non-redundant CGI profiles. The methylation values mentioned during the comparison of the profiles are marked in bold, and all differences between profiles are supported by a MWW p-value lower than 0.01.

Table 4: CGI profiles: attribute values of each non-redundant CGI profile. The attribute values in Table 4 are normalized between 0 and 1. This normalization is performed before the clustering process in order to prevent biased clusters caused by attributes with high absolute values. The significance at p < 0.05 is relative to these normalized values. The non-normalized values are available in the supplementary information. The attribute values mentioned during the comparison of the profiles are marked in bold, and all differences between profiles are supported by a MWW p-value lower than 0.01.

Table 5: Profile support.

Table 6: Distribution of CGIs over the gene association classes.

Table 7: Coincidence between gene association classes and PBCs. Only the p-value (PI) of significant intersections (< 0.01) is shown and marked in bold.

Table 8: Re-classification using functional CGI profiles.

Table 9: Distribution of conserved and not conserved CGIs over PBCs and gene association classes. *Absolute number and percentage of conserved and not conserved CGIs of each PBC in the gene-association classes.

Table 10: Comparison of accuracy using binary methylation classification.
Unlawful provisions of a contract (abusive clauses) – forms of review and the system of eliminating them from economic trading
As a rule, legal transactions are governed under the principle of the freedom of contract, though this does not mean that each provision included in a contract will be lawful, especially if a consumer is one of the parties to the contractual relationship. The legislator, whose aim is to equate the positions of the undertaking and the consumer, introduces a number of regulations addressing the inclusion of abusive clauses in a contract/standard contract. The paper discusses the definition of unlawful provisions correlated with forms of reviewing them with reference to amendments introduced in this scope.
Introduction
Unlawful contractual provisions, in other words abusive clauses (the need to regulate them under Polish law resulted from the obligation to implement Council Directive 93/13/EEC 1 of 5 April 1993 on unfair terms in consumer contracts 2), are regulated by Articles 385 1 -385 3 of the Act of 23 April 1964 The Civil Code 3 (hereinafter CC). As rightly noted in the judicature, these regulations are "the core of the system of consumer protection against undertakings' using their stronger contractual position associated with the possibility of unilaterally shaping the contents of provisions binding the parties, in order to reserve clauses unfavourable to the consumer (abusive clauses)". 4 It is to be highlighted that these regulations introduce an instrument of enhanced 5 "review of the content of provisions imposed by an undertaking with regard to respecting consumer interests". 6 It needs to be stressed already here that the discussed regulations refer only to relationships occurring between consumers and undertakings, which are referred to as B2C (business-to-consumer). These regulations, however, are not ignored in commercial transactions (B2B, business-to-business), as they directly affect them.
The fundamental purpose of this study is to analyse and assess the changes in the system of review of provisions of standard contracts executed with consumers by entrusting it to the President of the Office of Competition and Consumer Protection (hereinafter the Office), maintaining substantive review over its decisions by the Court of Competition and Consumer Protection.
Consumer as a party to a contractual relationship
It is worth beginning reflections on the subject-matter of abusive clauses and forms of their review by presenting the legal specificity of the position of the consumer in relation to the undertaking.
In legal trading the consumer is seen as a weaker party to the contractual relationship. 7 This position is particularly justified under Article 76 of the Constitution of the Republic of Poland 8 and the established line of decisions of the Constitutional Tribunal, 9 and should not raise great doubts. (The protection is enhanced in terms of the general principles specified in Article 58 § 2, Article 353 1 and 388 CC. As noticed by the Constitutional Tribunal, "the legislator considered it certain that the consumer is a weaker party to a legal relationship and thus requires protection, that is certain rights which would lead at least to relative equation of the position of counterparties. Consumer's rights correspond to certain obligations from the other side: seller or service-provider. Whereas jointly the rights of one and the obligations of the other party are to compensate the consumer his incomplete possibility to exercise the principle of autonomy of will of the parties to the contract", and further: "protective obligations resting on public authorities cover the need to ensure minimum statutory guarantees to all entities, who (…) take a weaker position in relation to professional participants of the market game".) Despite that, it is worth recalling arguments supporting it. It is known that in order for a natural person to be attributed the status of a consumer they must perform a legal act with an undertaking, but this activity must not be directly related to his business or professional activity (Article 22 1 CC). With reference to the said definition the following model is construed: a non-professional entity, the consumer, executes a contract (or performs other legal acts) with a professional entity, the undertaking. Already at this level of analysis, when defining the parties, a clear disproportion can be seen: only one of the entities listed in Article 22 1 CC should be attributed the name of a professional entity; naturally, the undertaking. Therefore, one cannot place the equals sign between the undertaking and the consumer looking through the prism of a legal act which may bind the parties in the future. This results from the fact that in a majority of cases the consumer does not have, or even does not have the opportunity to gain, such professional knowledge of a given product or service as the undertaking has. The consumer's position to a large extent depends on the behaviour of the undertaking. Information passed to him by e.g. a salesperson will have a great impact on his decisions. Continuing the reflections: the fact that the consumer does not have the same specialist knowledge in terms of the good or service as the undertaking does is not the only premise that advocates considering the consumer a weaker party. At the moment of executing a contract with an undertaking, most often this contract has already been drawn up and the only thing that individualises it is the parties' data; this then means incorporation of a standard contract. It is the professional entity that creates the legal relationship. On the ground of the above comments the legitimacy of consumer protection seems natural. However, it is worth noting, as in E. Łętowska, that this protection should not be seen as the authority's protectionist favouring of the consumer, but as actions for compensating his lack of knowledge and awareness caused by the mass scale of production and trading in general. Measures and instruments serving to protect the consumer, concludes E.
Łętkowska, do not aim to "give" the consumer something extra, but to restore equality of opportunity, hence consumer protection means "an instrument of fighting for a truly free market -for all its participants, active and passive". 10 K. Zaradkiewicz has a similar view on this matter indicating, that the legislator's introducing various forms of consumer protection and calling him "a weaker party" does not aim to favour him in relations with a professional entity but to execute the principle of equality of the parties in real terms. 11
Premises for abusiveness
In order to protect consumers as weaker parties in the relations with undertakings, the legislator has introduced a security measure for consumer interests: an institution that allows considering some of the contract provisions occurring in consumer transactions as unlawful. Under Article 385 1 § 1 CC, unlawful clauses need to be understood as such provisions of a contract which have not been agreed individually and which at the same time set forth the consumer's rights in a manner contrary to good practice, grossly violating the consumer's interests. At the same time, it must be noted that this does not apply to all contractual provisions: it does not refer to provisions shaping the parties' main performances. 12 The legislator clarified here that it involves mainly the price or remuneration, as long as these provisions were expressed in the contract in a way that does not raise doubt. It needs to be flagged up that the regulation of Article 385 1 CC does not only refer to individual contracts but also to standard contracts referred to in Article 384 CC. 13 In order to move on to discussing forms of reviewing unlawful clauses, the positive premises that must be met in order for a given provision to be called abusive must be analysed. Following the order of Article 385 1 CC's regulations, all features of abusiveness of contractual provisions will be discussed.
Firstly, one needs to respond to the question of the statutory phrase that "a provision has not been agreed individually with the consumer". Article 385 1 § 3 CC addresses it saying that provisions which are not agreed individually are those on which the consumer had no actual influence. Then, how should "actual influence" be understood? The Court of Appeal in Białystok among others addressed this expressing a view that the mere fact of the consumer being aware of the content of the provision and, what is more, understanding its meaning does not necessarily prejudge that it has been individually agreed with him. 14 The court's position is well-worth approving as the very fact of being aware of something does not equate with having actual influence on a given state of affairs. Legal scholars and commentators express a view according to which it is impossible to conclude that the consumer had actual influence on a given provision if he only had the possibility to choose between a few solutions offered by the undertaking. Actual influence can thus be talked about e.g. where the consumer himself suggested a given provision of a contract and it was accepted by the undertaking. 15 Article 384 CC is worth mentioning here which talks about standard contracts. Should in a given individual contract provisions of a standard contract be included it needs to be considered by default that the consumer did not have influence on such provisions. 16 Secondly, another premise of abusiveness involves "setting forth consumer's rights and obligations in a way that is contrary to good practice". It is difficult to find a legal definition of "good practice", yet based on a firm stand of legal scholars and commentators and the established line of judicial decisions it needs to be concluded that provisions contrary to good practice will be those that mislead the consumer or those that set forth consumer's position with violation of the principle of freedom of contract. 17 The Supreme Court in one of its judgments also stated that it is contrary to good practice to formulate contractual provisions in such a way that gives the professional entity freedom to set the extent of parties' obligations during the duration of the legal relationship or to exchange the product named in the contract with a similar product though in essence different to the one specified in the contract. 18 The last of the determinants of abusiveness involves "violating consumer's interests". Any provision that causes an unfounded and unfavourable to consumer disproportion between his rights and obligations and the rights and obligations of the undertaking constitutes a gross violation of the interests of the weaker party to the relationship. 19 Considering a given contractual provision as abusive, pursuant to Article 385 1 § 2 CC, results in the fact that parties are not bound by such a provision under the law ex tunc. However, a contract is binding on the parties in the remaining scope. The Supreme Court pointed to two possibilities of solutions in the case of occurrence of abusive clauses in a contract. One of them is deciding that after excluding abusive clauses the contract is binding on the parties in the remaining scope. The other one involves deciding about the contract's nullity or declaring a contract null and void due to the lack of possibility to execute it by the parties as a result of exclusion of abusive clauses. 
Moving on to forms of reviewing abusive clauses it is worth mentioning the catalogue of abusive clauses included in Article 385 3 CC. The article in question constitutes a set of 23 sample provisions that need to be considered unlawful. 21 The catalogue of abusive clauses is related to the regulations expressed in Article 385 1 CC; thus one needs to agree with the position of the Supreme Court which says that where a given provision is identical to one named in the catalogue it constitutes at the same time violation of provisions expressed in Article 385 1 CC. The mere fact that a given provision of a contract or a standard contract is similar to the one included in the catalogue does not deem the provision abusive. According to the Supreme Court, this catalogue should provide support where there are doubts as to the fact whether a given provision should be considered unlawful or not. 22 Clauses included in Article 385 3 CC are referred to by legal scholars and commentators as the so-called grey clauses; the purpose of this institution is to create a potential possibility to maintain provisions convergent with the catalogue, though in practice it is quite unlikely. 23 Placing definitions of clauses in regulations and creating a sample catalogue alone does not provide consumers with full protection. This is why two forms of reviewing abusive clauses have been introduced into the Polish legal order.
Forms of reviewing abusive clauses
The Polish legal system distinguishes between incidental and abstract reviews of abusive clauses.
Reviewing abusive clauses of incidental/specific nature does not refer directly to a standard contract but to a given specific legal relationship. This is a matter of establishing a contractual obligation between the consumer and the undertaking. The Court of Appeal in Warsaw presented the core of incidental review of abusive clauses clearly in its judgement stating that the essence of a specific review of provisions of a contract in terms of abusiveness involves protection of a specific consumer. The court examines then the provision of a given contract, whereas if the contract includes provisions of a standard contract, then it is also subject to review. 24 Therefore, a conclusion needs to be made that incidental review may cover only such provisions of a contract which were not individually agreed with the consumer, or those which the undertaking imposed on the consumer, maintaining, naturally, the remaining premises under Article 385 1 CC. Only the consumer and undertaking bound by a specific contractual relationship may be parties to court proceedings in the course of which incidental review is carried out. Therefore, a ruling on the abusiveness of given provisions is binding inter partes. 25 As for the subject-matter of incidental review, as mentioned earlier, it covers provisions of a contract which were not individually agreed with the consumer. If a specific contract was drafted by means of a standard contract, then the provisions of this standard contract also become subject to review. Certain doubts arise in terms of reviewing the standard contract itself during incidental review. What if a consumer claims abusiveness of only one provision of a standard contract incorporated to the contract, the court decides about this abusiveness, and then the consumer brings another action to court concerning abusiveness of the entire contract due to bad incorporation of the standard contract? M.
Bednarek asks a legitimate question here -how should the court respond to this and how does it refer to Article 321 of the act of 17 November 1964 The Code of Civil Procedure 26 (hereinafter CCP)? It is well-founded to express approval for the view presented by M.
Bednarek in reference to the above: where the contract binding the parties was incorporated from a standard contract, the court should examine ex officio the correctness of such incorporation. 27 With regard to incidental review, it needs to be emphasised that it is carried out in each proceeding if the party (consumer) puts forward an allegation of including an unlawful clause in a given contract. Continuing reflections in this regard it needs to be stated that court proceedings may involve incidental review even if it was the undertaking, not the consumer, who was the claimant. 28 It is indeed possible that an undertaking should bring an action against a consumer and in the course of the proceedings the consumer puts forward a plea of including an unlawful clause in the contract binding the parties. The court will then review the contract. It needs to be highlighted here that the judgment in the case will not set forth a new legal situation for the parties; therefore, it will have a declarative character. 29 This results from the fact that an unlawful clause is invalid from the very beginning and the court's task involves examining only whether a given provision does or does not bear features of abusiveness. Therefore, it is not the court's competence to put provisions the court deems appropriate in place of abusive provisions. 30 This would entail violation of the freedom of contract, a view supported by the European Court of Justice (hereinafter ECJ). 31 32 The Supreme Court expressed its opinion on the above in the same way, pointing out that when assessing abusiveness of a given provision the court does not consider how a given provision should be amended and whether such a change should take place at all. 33 Naturally, when evaluating the provisions, the court takes into account the moment the contract was executed. 34 What occurred after the contract had been signed should not be taken into account when assessing abusiveness. 35 When assessing the provisions of the contract the court should examine not only the contract itself but also connected contracts, which results directly from Article 385 2 CC.
Summing up the reflections on incidental review of abusive clauses one needs to agree with B. Wyżykowski who states that in order to decide about abusiveness of provisions of a given contract a specific legal relationship needs to be assessed and not the mere fact of deeming a similar provision in the past unlawful. 36 In turn, abstract review of abusive clauses, contrary to incidental review, does not refer to a specific legal relationship but to reviewing the standard contract (definition of a standard contract -Article 354 CC). 37 For this form of review it does not matter whether a given standard contract was incorporated to an individual contractual relationship or not. 38 In this form of review the standard contract is treated as an independent entity. 39 Initially (until 17 April 2016) the issue of abstract review had been regulated by the provisions of the CCC (479 36 -479 45 ). Abstract review then had a court and administrative form, it was performed by the Regional Court in Warsaw -the Court for Competition and Consumer Protection (hereinafter: the Court). 40 It was the Court that decided about abusiveness of a given standard contract. 41 Before the amendment the review had been performed in two dimensions -primary and secondary. An entity authorised under no longer applicable Articles 479 36 -479 44 had to file a complaint with the Court. On considering a given standard contract contrary to applicable law the court issued a judgment prohibiting the use of this provision in a standard contract. Secondary review, on the other hand, consisted in the President of the Office of Competition and Consumer Protection examining in adminis- trative proceedings whether given standard contracts are identical to those included in the register of unlawful clauses. 42 The substantive and legal basis in the case of abstract review was and is the same as in the case of incidental review 43 (Articles 385 1 and 385 3 CC), though today other regulations form part of this basis, which will be discussed in further parts.
The then form of review is problematic if only in the question of a great number of complaints filed with the Court, and thus a prolonged time of waiting. Moreover, the manner of interpreting judgments issued by the Court also caused problems -whether a given judgment is only binding on the parties or whether it is also binding erga omnes. Initially, a view was supported that a judgment in an abusiveness case was binding non only on the entity with regard to which it was issued but also on all professional entities (undertakings). 44 This position was backed up by the fact that the register of clauses is open and thus everyone can read it. 45 With time, though, a different position was shaped, in accordance with which any consumer may refer to such a judgment, though only with regard to the undertaking for whom such a judgment was passed. 46 In the reasoning alone for the draft project on amending the act on competition and consumer protection it was pointed out, that amending the form of abstract review is necessary due to the need to "eliminate doubts about effects of the court ruling in the case of declaring a standard contract unlawful". 47 In order to overcome these inconveniences the so-called consumer amendment was introduced on 17 April 2016, whereby the aspects of reviewing standard contracts are no longer specified by the CCP, but by the act of 16 February 2007 on competition and consumer protection 48 (hereinafter the Act). The current Article 32a of the said Act speaks about the prohibition of application in standard contracts executed with consumers of provisions referred to in Article 385 1 § 1 CC.
Administrative review of standard contract provisions carried out by the President of the Office of Competition and Consumer Protection
In line with Article 23b of the Act the authority that may carry abstract review and issue a relevant decision in this regard is the President of the Office of Competition and Consumer Protection. The proceedings themselves in terms of abstract review are proceedings instituted ex officio, which is directly specified in Article 49 (1) of the Act but the proceedings themselves will be discussed still in further parts.
Classifying provisions of a given standard contract as abusive is done by way of a decision. The content of the decision is specified in detail in Article 23b of the Act. In the discussed Article the legislator pointed to elements which are obligatory for the President of the Office to point to and those that are optional. These optional elements aim to remove the effects of the occurrence in legal transactions of a given standard contract whose provisions have abusive nature. What is more and what has been pointed out in Article 23b of the Act it needs to be remembered that a decision on declaring a standard contract abusive is an administrative decision, thus it must include elements which are specified in the act of 14 June 1960 The Code of Administrative Procedure 49 (hereinafter CAP). 50 The President of the Office may in the decision among others oblige the undertaking to inform consumers who have executed contracts drawn up on the basis of a standard contract that a given provision of this standard contract has been defined as abusive. Moreover, the decision may specify an obligation for the undertaking to submit, in a specified manner and in a specified form, a single statement or multiple statements referring to the abusiveness of a provision of a given standard contract. Additionally, the legislator gave the authority the power to order in the decision to have its content published in a specified way, along with information about its validity -the cost of the publication to be borne by the undertaking. Naturally, when selecting measures the President of the Office should bear in mind the legitimacy of its application in order for it to be proportional to the violations. 51 And what is more, when assessing a given standard contract the authority should also take into account whether or not the undertaking employs a given standard contract in legal transactions. 52 As a result of the conducted proceedings the President of the Office may issue three types of decisions. Namely, he may issue a decision on discontinuing the proceedings under Article 105 § 1 CAP in connection with Article 83 of the Act. The legislator does not prescribe in the Act for a possibility to issue a decision which would specify that a given provision of the standard contract is not abusive; the only thing the reviewing authority may do is discontinue the proceedings. 53 However, the President of the Office's discontinuing the proceedings does not exclude the possibility of the consumer invoking the abusiveness of the provision in the course of incidental review. 54 He may also issue a decision declaring a given provision abusive and impose a prohibition of applying it. He may also issue a decision about abusiveness and the prohibition of applying a given provision along with obliging the undertaking to meet certain requirements referred to in Article 23b and 23c of the Act. 55 In terms of obligatory elements of the decision expressed in the Act -it must be a decision on the abusiveness of a given provision of a standard contract along with pointing to its content and an expression of the prohibition of applying the abusive provision in relations with consumers.
On top of what has been described above, the authority may impose a fine on the undertaking for violating Article 23b of the Act. Such a power is expressed in Article 106(1)(3) of the Act, according to which the President of the Office may impose upon an undertaking a fine of 10% of the turnover generated in the financial year preceding the year in which the fine is imposed for violating (even if unintentionally) among others Article 23b of the Act. Before the consumer amendment the Court had not been able to impose a fine for applying abusive provisions. 56 And thus in one decision the President of the Office imposed a fine upon an undertaking for applying the following provision in a standard contract -"The Parties represent that the provisions of this contract were individually agreed with the Buyer (which shall be understood as accepting them without comments or changing it by way of negotiations of both parties,(…)". The authority justified its opinion demonstrating that theoretically this wording implies individual agreement of the contract provisions with the consumer, while in fact it is the undertaking's unilateral provision introduced into the standard contract. If an undertaking already at the stage of drawing up the standard contract, which nota bene is to be implemented in an unspecified number of individual contracts, represents that the provisions of the contract have been individually agreed with the buyer it is clear that this provision raises doubts as to the possibility for the consumer to influence the provisions included in the contract. 57 Irrespective of the above, the legislator conferred upon the President of the Office in Article 99d of the Act the power to make the decision immediately enforceable in whole or in part where consumers' important interest requires so.
An undertaking may oblige itself before a decision referred to in Article 23b is issued to take certain measures or to forbear from doing so, the aim of which is to cease the violation of the prohibition to apply unlawful clauses or to remove the effects of violation thereof. In such a situation the authority may impose on the professional entity in the decision an obligation to perform such tasks with the requirement to submit reports on their performance at a specified time. Should the President of the Office employ the possibilities under Article 23c(1-3) the authority shall not issue a decision pursuant to Article 23b of the Act. Should the undertaking fail to perform tasks it was obliged to perform, the President of the Office may overrule the decision issued on this basis and impose on the undertaking a fine referred to in Article 106 of the Act. Regulations of Article 23c are in no way binding on the authority. Even if the undertaking undertakes to carry out the specified measures the President of the Office may not exercise the right expressed in the discussed regulation. In one of his decisions the President of the Office refused to exercise the right under Article 23c of the Act pointing out 55 that the measures undertaken by the undertaking are not sufficient to talk about the cessation of violation of Article 23a. 58 It is worth pointing out that the decision in question addressed an inclusion of an abusive provision in the rules and regulations of an online shop, which is why it needs to be stressed that rules and regulations should also be considered a standard contract. 59 It was flagged up in the introduction to the discussion of abstract review that a lot of problems were caused by the fact that it was not clear ultimately who the judgment given by the Court was to be applied to. At the moment this problem is solved by Article 23d of the Act, which clearly specifies for whom the decision issued as a result of abstract review is effective. Therefore, in compliance with the legal basis, a (final) decision issued by the President of the Office in relation to the performance of an abstract review of a given provision of a standard contract, is binding not only on the undertaking for whom it was issued, but it is also effective with regard to all consumers who executed a contract with a given undertaking on the basis of the standard contract specified in the decision. The literature calls such a state of affairs a unilateral extended effect of a final decision. 60 This regulation will be applicable not only to decisions issued on the basis of Article 23b but also on the basis of Article 23c of the Act. Even though the performance of abstract review is detached from any specific legal relation, it is due to the above-mentioned regulation that it has a direct effect on an individual consumer. They may then invoke the decision of the President of the Office with regard to an undertaking (only a final decision will be material for individual consumer interests).
As a rule, proceedings for abstract review are instituted ex officio, but for some entities the legislator has introduced the right to notify the President of the Office of violations of the prohibition referred to in Article 23a of the Act. The right to submit such notification was specified in Article 99a of the Act. In subsection 1 of the said regulation the legislator included a closed catalogue of entities who may submit such notification. Under the regulation in question a consumer among others may submit notification of the infringement of the prohibition of applying abusive clauses. Before the introduction of the consumer amendment the legislator had not specified that the notification could be submitted by a person enjoying the status of a consumer. 61 In Article 99a(2) the legislator also listed obligatory elements that such notification mush have, such as the undertaking against which the notification is submitted or pointing out the clause in the standard agreement that may be considered abusive. 62 In Article 99b(1) of the Act the legislator represented that everyone against whom proceedings have been instituted for declaring a given provision of the contract abusive is a party to the proceedings. When performing an interpretation in connection with the definition of a consumer (Article 22 1 CC) and an undertaking (Article 43 1 CC and Article 4(1) of the Act) and having regard to the content of Article 23a of the Act, it needs to be recognized that the notion of a party to proceedings in terms of abstract review should be understood as referring solely to an undertaking. 63 A decision of the President of the Office may be appealed against at the Court which is regulated in detail in Article 81 of the Act.
At the end it is worth mentioning the institution of the register of unlawful clauses. Prior to 17 April 2016 a final judgment of the Court had been referred to the President of the Office, who then placed a provision which the Court had declared abusive in the register of unlawful clauses (Article 479 45 CCP). 64 On entering the provision deemed abusive in the register of clauses it gains broader material legitimacy and at the same time enhanced gravity of the matter ruled on, and thus no one can invoke before the Court abusiveness of the same provision with regard to the same undertaking. 65 What is essential, the register of abusive clauses is maintained on the basis of judicial decisions taking into account actions in terms of abusiveness of a given provision. The then principles for entering abusive provisions in the register made it completely incomprehensible and the clauses were often duplicated. 66 What is more, the register did not contain any reasoning or context for recognizing a given provision as abusive, thus an analysis of the register was rather difficult. 67 At present the register of unlawful clauses is maintained only for proceedings instituted before the entry into force of the consumer amendment and for which the old legal basis is applicable. This register in accordance with the amendments introduced by the act of 5 August 2015 on amending the act on competition and consumer protection and certain other acts 68 will function for 10 years from the date of the amendment, that is until 17 April 2026. 69 At the moment, pursuant to Article 31b of the Act, decisions of the President of the Office are published on the website of the Office of Competition and Consumer Protection, not including business secrets or any other types of information protected under the law along with information about the decision being legally binding.
Conclusion
Introducing changes in terms of reviewing abusive clauses has a certainly positive character. Firstly, abstract review has actual significance for the protection of consumer rights, it allows elimination from legal transactions of provisions of standard contracts before they become an element of a specific contractual relationship. There are no doubts that transforming the form of abstract review from judicial and administrative into administrative greatly facilitated the process of eliminating abusive clauses. These proceedings are much shorter and publishing an entire decision about abusiveness allows for learning the context of placing a given provision in the standard contract and for learning the arguments for a given provision to be specified as unlawful. What is also approval-worthy is the fact that even if the President of the Office does not see an abusive provision in a given standard contract and as a result discontinues the proceedings, the consumer is not barred from pursuing his claims in judicial proceedings. To sum up, the consumer amendment of 17 April 2016 has a real impact on consumer protection, actually influences the scope of protection of their rights at the same time guarding the principle of contractual equality.
Photocatalytic carbanion generation from C–H bonds – reductant free Barbier/Grignard-type reactions†
We report a redox-neutral method for the generation of carbanions from benzylic C–H bonds in a photocatalytic Grignard-type reaction. The combination of photo- and hydrogen atom transfer (HAT) catalysis enables the abstraction of a benzylic hydrogen atom, generating a radical intermediate. This radical is reduced in situ by the organic photocatalyst to a carbanion, which is able to react with electrophiles such as aldehydes or ketones, yielding homobenzylic secondary and tertiary alcohols.
Introduction
Novel catalytic methods generally aim to produce a desired chemical compound from ever-simpler starting materials, maximizing the atom and step economy. 1 Hence, the functionalization of C-H bonds has received great attention, as it illustrates the most straightforward retrosynthetic path for the synthesis of a targeted product. 2 There are several methods for C-H functionalizations summarized in comprehensive reviews. 3 A prominent example is the C-H activation by metal insertion, 3c-f comprising cases of very high and catalyst controlled regioselectivity. 4 Another prevalent method is hydrogen atom transfer, 3g which is used to generate carbon centred radicals for subsequent functionalization from unreactive C-H bonds by the abstraction of a hydrogen atom.
Recently, the combination of hydrogen atom transfer (HAT) and photocatalysis has evolved into a powerful method yielding carbon radicals under mild conditions, often without the need of a sacrificial oxidant or reductant. 5 With this approach, several impressive examples for C-C and C-X bond formations were reported, utilizing C-H bonds in order to arrive at the desired product in high or even full atom economy. 6 While photocatalysis, especially in combination with HAT catalysis, mainly revolves around the generation and subsequent reaction of radical species, 7 some groups have recently proposed the generation of carbanions as crucial intermediates in photocatalytic transformations. 7a,8 The formation of carbanionic intermediates is of particular interest as they are the reactive intermediates in the widely used Grignard and Barbier reactions (Scheme 1a). 9 However, these reactions produce stoichiometric amounts of metal salt waste 9c and require organohalide starting materials which often have to be prepared. 10 In our previous report we aimed to overcome those drawbacks by using carboxylates to generate carbanionic intermediates in a photocatalytic reaction (Scheme 1b). 8g However, only aldehydes were efficient electrophiles and CO2 was released as a stoichiometric by-product. Developing this method further, we wondered if C-H bonds could directly be activated to form the desired Grignard-analogous products, maximizing the atom economy.

Scheme 1 (a) Grignard reaction. (b) Photocatalytic carbanion generation from carboxylates and addition to aldehydes. (c) Envisioned photocatalytic carbanion generation from C-H bonds for Grignard-type reactions in full atom economy.

The most straightforward C-H activation giving potential access to carbanion intermediates from unfunctionalized starting materials is the deprotonation of the respective C-H bond. However, with a pKa value of approximately 43 (in DMSO), 11 even benzylic C-H bonds would require the use of highly active bases like n-BuLi (pKa approx. 50), exceeding e.g. LDA (pKa = 36 in THF) 12 in reactivity, which limits the functional group tolerance and gives rise to potential side reactions. Additionally, many of these strong bases can directly add to carbonyl compounds or be quenched by the deprotonation of the more acidic proton in alpha position of the carbonyl (pKa of acetone = 26 in DMSO), 13 which may also be the case for the desired benzyl anion. Additionally, waste products resulting from the use of metal bases again diminish the atom economy. The generation of carbanions by the combination of HAT and photocatalysis could overcome these issues and illustrates a valuable method for a redox-neutral, waste-free synthesis of Grignard-type products without the use of metals or strong bases (Scheme 1c).
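As a rough back-of-the-envelope comparison (using the pKa values quoted above and ignoring that they were measured in different solvents), the equilibrium constant for deprotonation of the substrate R-H by a base B- follows from the pKa difference:

\[
K = \frac{K_a(\mathrm{R\text{-}H})}{K_a(\mathrm{B\text{-}H})} = 10^{\,\mathrm{p}K_a(\mathrm{B\text{-}H}) - \mathrm{p}K_a(\mathrm{R\text{-}H})}
\]

so LDA (conjugate acid pKa of about 36) against a benzylic C-H bond (pKa of about 43) gives K of roughly 10^(36-43) = 10^-7, i.e. the equilibrium lies far on the side of the starting material, whereas n-BuLi (conjugate acid pKa of about 50) gives K of roughly 10^7.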
In a recent report, our group could show the applicability of this concept for the photocarboxylation of benzylic C-H bonds via carbanionic intermediates. 14 In this work, we aim to extend this method to the synthesis of secondary and tertiary homobenzylic alcohols from unfunctionalized starting materials and aldehydes or ketones in a photocatalytic two-step deprotonation reaction.
Results and discussion
We chose ethylbenzene (1a) as model substrate, because its benzylic C-H bonds have a low bond dissociation energy (BDE = 85.4 kcal mol-1) 15 and benzylic radicals can be converted into the corresponding carbanion by single electron transfer (SET) using a reduced photocatalyst. 8g Acetone (2a) was chosen as electrophile, as ketones do not bear a carbonyl hydrogen, which has been shown to be prone to C-H abstraction by electrophilic radicals. 16 Product formation was observed using a combination of 4CzIPN (A) as photocatalyst and (iPr)3SiSH as HAT catalyst. Together with K2CO3 as base and dry MeCN as solvent, the coupling product (3a) between 1a and 2a was detected in traces (Table 1, entry 1). A higher yield of 21% was obtained by adding ground 4Å molecular sieves to the reaction (Table 2, entry 2). Increasing the amount of 2a by using it as a co-solvent in a 1 : 1 mixture with dry acetonitrile gave a yield of 49% (Table 1, entry 3). Reducing the amount of (iPr)3SiSH and molecular sieves gave a slightly enhanced yield (Table 1, entry 4). Using 3DPA2FBN (B) as a photocatalyst increased the yield to 50% when 10 eq. 2a were used and 86% when acetone was used as a co-solvent (Table 1, entries 5 and 6). The reaction improved slightly by reducing the loading of photocatalyst B to 3 mol% and the amount of K2CO3 to 10 mol% (Table 1, entry 7). Control experiments showed that the yield is significantly lower when the reaction is performed without base (Table 1, entry 8) and no product was detected in absence of light, photocatalyst or HAT catalyst (Table 1, entries 9-11).
Table 1 footnotes: a The reaction was performed using 1 eq. (0.2 mmol) 1a in 2 mL degassed solvent. b Yields were determined with GC-FID analysis using n-decane as an internal standard. c Reaction was performed in the dark.
The kinetic profile of the reaction shows a quite fast linear increase of product formation in the first hours. However, after 5 hours, the conversion of starting material stops at a product yield of 50 to 55%, which increased only slightly by prolonging the reaction time (Fig. 1).
To exclude the possibility that the termination of the reaction is caused by the decomposition of either the photocatalyst or the hydrogen atom transfer catalyst, both compounds were added to the reaction separately or in combination after several hours (Table 2, entries 1-3). However, the yield of the desired product 3a could not be increased for any of the combinations. To test if the reaction was inhibited by the formation of the product, 2-methyl-1-phenyl-2-propanol 4 was added due to its structural similarity to product 3a. Indeed, the yield decreased to 39% when 0.5 eq. 4 was added and to 11% with 1 eq. 4 (Table 2, entries 4 and 5). The addition of 1 eq. 1-heptanol also decreased the yield to 21% (Table 2, entry 6), indicating that the presence of alcohols causes the reaction to stop, presumably due to the protic hydroxy groups quenching the carbanion.
Table 2 footnotes: a The reaction was performed using 1 eq. (0.2 mmol) 1a and 10 eq. 2a in 2 mL degassed solvent. b Yields were determined with GC-FID analysis using n-decane as an internal standard. c Additional catalyst was added after 14 h.

Table 3: Scope of the reaction. a The reaction was performed using 1 eq. (0.2 mmol) 1 and 10 eq. of the respective ketone in 2 mL dry, degassed MeCN. b The reaction was performed using 1 eq. (0.2 mmol) 1 and 2a as co-solvent in a 1 : 1 mixture with dry MeCN in 2 mL degassed solvent mixture. c The reaction was performed using 1 eq. (0.2 mmol) 1 and the respective ketone in the amount given in the table in 2 mL dry, degassed MeCN. d The reaction was performed using 1 eq. (0.15 mmol) 5 and 3 eq. 1f in 2 mL dry, degassed MeCN.

The scope of the reaction was investigated for various ethylbenzene derivatives, ketones and aldehydes (Table 3). In most cases, good yields were obtained when the electrophile acetone was used as a co-solvent in a 1 : 1 mixture with acetonitrile, while using 10 eq. of electrophile led to moderate yields. Besides ethylbenzene 1a (41%/72%, 3a), 4- or 2-ethyltoluene were also viable substrates for the reaction (3b and 3c). Notably, 4-ethyltoluene 1b was the only substrate where using less electrophile seemed to be beneficial for the reaction, as a yield of 62% was obtained for 10 eq. 2a, while using acetone as a co-solvent only led to 55% of the desired product 3b. Using cumene 1d decreased the yield to 29% (11% with 10 eq. 2a), presumably due to enhanced steric hindrance in the benzylic position (3d). The reaction proceeded well with isopentylbenzene 1e, yielding the corresponding product 3e in 47% and 79%, respectively. Ethylbenzene derivatives containing electron-donating substituents, such as methoxy- (3f-3i) or amide groups (3j), led to significantly increased yields of up to 87% (3f and 3j). In contrast, no product was obtained with electron-deficient substrates such as 4-ethylbenzonitrile or 1-ethyl-4-(trifluoromethyl)benzene, presumably due to a kinetically more hindered hydrogen atom abstraction 17 or the lower reactivity of the corresponding carbanion intermediate. While unsubstituted toluene did not lead to any product formation due to the bond dissociation energy of the benzylic C-H bond exceeding the capability of the hydrogen atom transfer catalyst (toluene: BDE = 89 kcal mol-1, (iPr)3SiSH: BDE = 87 kcal mol-1), 18 4-methoxytoluene 1i gave the corresponding product 3i in 19% and 53%, respectively. Chlorine and fluorine substituents at the aromatic ring were also well tolerated in the reaction (3k and 3l), and using triethylbenzene 1m led to 87% of the triply substituted product 3m when acetone was used as a co-solvent. For this substrate, no product could be isolated when only 10 eq. 2a was used, as an inseparable mixture of singly, doubly and triply substituted product was obtained. p-Phenyl-substituted ethylbenzene could also be used in the reaction, yielding 62% of product 3n (31% with 10 eq. 2a). In contrast, 2-ethylnaphthalene 1o gave only low yields of 7% and 22%, respectively (3o). Heteroaromatic substrates were also viable for the reaction, as moderate to good yields were obtained when 2-ethylthiophene 1p or -benzofuran 1q were used (3p and 3q). Moving to ketones, the effect of steric hindrance was investigated first. A good yield can still be obtained when the carbon chain is extended on one side (3r), whereas the yield is notably affected when both sides bear longer chains (3s and 3t) or an additional group is present in α-position (3u and 3v). No ring-opening products were observed when a cyclopropane ring was present in α-position, indicating that no radical processes are involved in the addition to the electrophile. The reaction proceeds well with cyclic ketones (3w and 3x), especially with cyclobutanone (3x), altogether displaying the significant influence of steric hindrance. In terms of functional group tolerance, alkenes (3y), alkyl chlorides (3z), ethers (3aa), esters (3ab) and protected amines (3ac) are viable substrates. However, the amount of electrophile has to be reduced in these cases, causing a decrease in yield.
Notably, if an α,β-unsaturated system is used, the 1,4-addition product (3ad) is obtained, while the 1,2-addition product was not observed. As noted above, aldehydes are prone to C-H abstraction from the carbonyl position, 16 seemingly leading to deleterious side reactions. Hence, the reaction conditions were adapted, mainly by using an excess of the ethylbenzene instead of the electrophile (see ESI for all optimization parameters†). Under the modified reaction conditions, aldehydes are feasible substrates, but yields are generally only low to moderate (up to 43% for 6a). As with ketones, steric hindrance has a significant effect (6a-6e). Thioethers are tolerated (6f) despite the presence of C-H bonds in α-position to the heteroatom. Further, employing aromatic aldehydes gave the desired products as well (6g and 6h), and the yield increased with an additional electron-withdrawing ester group (6h).
To investigate the mechanism of the reaction, a carbanion test system based on a molecule used by Murphy et al. to confirm the generation of aryl anions (Scheme 2a) was used. 19 According to Murphy, radicals are not capable of adding to esters. Therefore, ethyl-5-phenylpentanoate 7a was subjected to the standard reaction conditions. The formation of the cyclic ketone 8a indicates the presence of the anionic intermediate 7a- (Scheme 2b).
In addition to this, fluorescence quenching studies were performed to confirm the interaction of the excited state of the photocatalyst with the deprotonated HAT catalyst (iPr)3SiS-. Efficient fluorescence quenching was observed for the photocatalysts B and C upon addition of (iPr)3SiS-, indicating the oxidation of the deprotonated hydrogen atom transfer catalyst by the excited state of the photocatalyst (ESI, Fig. S3 and S4†). To further confirm this, cyclic voltammetry measurements were performed (ESI, Fig. S5†). Indeed, a potential of 0.67 V vs. SCE in MeCN was obtained for a 1 : 2 mixture of (iPr)3SiSH and K2CO3, which is well in the range of photocatalysts B and C (E1/2(3DPA2FBN*/3DPA2FBN•-) = 0.92 V vs. SCE, E1/2(3DPAFIPN*/3DPAFIPN•-) = 1.09 V vs. SCE). 20 Lastly, the formation of benzylic radicals (1•) during the reaction is indicated by the presence of small amounts of the homocoupling product 9 in the reaction mixture (ESI, Fig. S6†).
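For orientation, these potentials allow a rough estimate of the driving force for oxidation of the deprotonated thiol by the excited photocatalyst B; the figure below is only an order-of-magnitude sketch assuming the simplified Rehm-Weller expression without the Coulombic work term:

\[
\Delta G_{\mathrm{ET}} \approx -F\left[E_{1/2}(\mathrm{PC}^{*}/\mathrm{PC}^{\bullet-}) - E_{\mathrm{ox}}(\mathrm{thiol/base\ mixture})\right] \approx -F\,(0.92\ \mathrm{V} - 0.67\ \mathrm{V}) \approx -0.25\ \mathrm{eV} \approx -24\ \mathrm{kJ\,mol^{-1}},
\]

consistent with the efficient quenching observed for photocatalyst B.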
Conclusions
In summary, we have developed a method for the photocatalytic generation of carbanions from benzylic C-H bonds, which react with electrophiles, such as aldehydes or ketones, to generate homobenzylic alcohols as products. The reaction represents a formal two-step deprotonation of the non-acidic benzylic C-H bond and could be a mechanistic alternative to classic C-C bond forming reactions such as the Grignard or Barbier reaction, giving the same products. However, instead of using stoichiometric amounts of a zero-valent metal and halogenated precursor, an organic photocatalyst, catalytic amounts of a hydrogen atom transfer reagent and visible light are used to generate carbanionic intermediates directly from C-H bonds, yielding the desired product in a redox neutral reaction with full atom economy.
Conflicts of interest
There are no conflicts to declare.
Horizon 2020 research and innovation programme (grant agreement no. 741623). We thank Dr Rudolf Vasold for GC-MS measurements, Regina Hoheisel for cyclic voltammetry measurements and Willibald Stockerl for NMR measurements.
Evidence for varicose vein treatment: an overview of systematic reviews
ABSTRACT BACKGROUND: Varicose veins affect nearly 30% of the world’s population. This condition is a social problem and needs interventions to improve quality of life and reduce risks. Recently, new and less invasive methods for varicose vein treatment have emerged. There is a need to define the best treatment options and to reduce the risks and costs. Since there are cosmetic implications, treatments for which effectiveness remains unproven present risks to consumers and higher costs for stakeholders. These risks and costs justify conducting an overview of systematic reviews to summarize the evidence. DESIGN AND SETTING: Overview of systematic reviews within the Discipline of Evidence-Based Health, at Universidade Federal de São Paulo (UNIFESP). METHODS: Systematic reviews on clinical or surgical treatments for varicose veins were included, with no restrictions on language or publication date. RESULTS: 51 reviews fulfilled the inclusion criteria. Outcomes and comparators were described, and a narrative review was conducted. Overall, there was no evidence that compression stockings should be recommended for patients as the initial treatment or after surgical interventions. There was low to moderate evidence that minimally invasive therapies (endovenous laser therapy, radiofrequency ablation or foam sclerotherapy) are as safe and effective as conventional surgery (ligation and stripping). Among these systematic reviews, only 18 were judged to present high quality. CONCLUSIONS: There was evidence of low to moderate quality that minimally invasive treatments, including foam sclerotherapy, laser and radiofrequency therapy are comparable to conventional surgery, regarding effectiveness and safety for treatment of varicose veins.
current knowledge and identifying gaps in the literature to guide future sound research.
The primary objective of this study was to summarize evidence derived from systematic reviews focusing on interventions to treat varicose veins. In addition, the following secondary objectives were defined: 1. To describe comparisons applied in studies; 2. To verify outcomes chosen to evaluate treatment; 3. To assess the methodological quality of systematic reviews on the topic; 4. To describe the strength of evidence according to different outcomes.
METHODS
This study was an overview of systematic reviews, conducted within the Discipline of Evidence-Based Health, at the Federal University of São Paulo (Universidade Federal de São Paulo, UNIFESP).
The inclusion criterion for the systematic reviews was that they needed to focus on clinical or surgical interventions for lower-limb varicose veins, provided that the abstracts contained the terms systematic review and/or meta-analysis and that a full report was available. In cases of updates of the same review, only the most recent version was considered for inclusion. The following types of study were excluded: narrative reviews, conference proceedings, structured abstracts and systematic reviews focusing on the healing of lower limb ulcers without venous interventions.
A search strategy was run in the following databases: MEDLINE, EMBASE, LILACS and CENTRAL (last updated on September 3, 2017), applying the terms "varicose veins" or "varices" or "telangiectasias". Regarding the LILACS database, 286 references were retrieved using the term "varicose veins" and synonyms, thus dispensing with the need for filters. For all other databases, a filter that had been developed for retrieval of systematic reviews was used. There were no limitations regarding language or publication date. We conducted a hand search of references presented in the studies included in our review.
Two authors independently screened studies (RAO and ACPM), and any disagreements were resolved by a third author (RR), through use of Rayyan software. 11 Two independent authors conducted data extraction (RAO and ACPM), and disagreements were resolved by reaching a consensus.
The AMSTAR tool (assessment of multiple systematic reviews) was applied to assess the methodological quality of the systematic reviews included. 12 This tool encompasses 11 items for methodological evaluations, each scoring from 0 to 1. Studies with a total score of 0 to 4 were considered to present low methodological quality; 5 to 8, moderate quality; and 9 to 11, high quality. 13
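The AMSTAR banding described above can be expressed as a simple scoring rule; the sketch below only illustrates the cut-offs (0-4 low, 5-8 moderate, 9-11 high) and is not part of the review's methods.

```python
# Illustrative sketch of the AMSTAR quality banding used in this overview:
# 11 items, each scored 0 or 1; the total maps to low/moderate/high quality.
def amstar_quality(item_scores):
    if len(item_scores) != 11 or any(s not in (0, 1) for s in item_scores):
        raise ValueError("AMSTAR expects 11 items scored 0 or 1")
    total = sum(item_scores)
    if total <= 4:
        return total, "low"
    if total <= 8:
        return total, "moderate"
    return total, "high"

print(amstar_quality([1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1]))  # (9, 'high')
```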
RESULTS
The search strategy yielded 1,245 studies.107 studies were considered for inclusion after screening of titles and abstracts, with further retrieval of full texts.Among these, 51 reviews fulfilled the inclusion criteria (Figure 1).
The reviews included were combined into 13 distinct groups of interventions, which were described as follows: 1. Clinical treatment of varicose veins: Amsler and Blättler concluded that compression levels of 10 to 15 mmHg are effective in treating chronic venous insufficiency, despite the weakness of evidence due to heterogeneity across studies.14 Two studies suggested that the effectiveness of compression stockings is overestimated, since adherence to treatment under real-world conditions is low, only reaching around 37% of the patients.15,16 Thus, it was claimed that there was no high-quality evidence to support use of compression stockings as the initial type of treatment. Smyth et al.
found that rutosides, reflexology and water immersion improved the symptoms in pregnant women with edema relating to varicose veins, although those findings were only based on a moderate level of evidence.17 Boada and Nazco concluded that use of venotonics might alleviate the symptoms of fatigued legs. However, the quality of evidence was not assessed.18
Techniques and complications relating to sclerotherapy:
Foam sclerotherapy is effective and safe, although the quality of studies has been considered to be low.19 Cerebrovascular events associated with foam sclerotherapy are a rare but still possible complication that has mostly been reported in the form of case reports.20,21 These side effects seem to be mild, considering that it has been reported that the majority of patients were discharged from hospital without neurological sequelae.
One study evaluated sclerosing agents to treat telangiectasias and concluded, based on very low-quality evidence, that one particular agent is not superior to another.22 Liquid versus foam sclerotherapy: Foam sclerotherapy increases the technical success rates (venous occlusion), in comparison with liquid sclerotherapy.23 The quality of the evidence for this finding was not assessed in that report. Despite methodological limitations to evaluations on appropriate methods, dosages, formulations and compression levels, the current evidence supports the use of sclerotherapy in clinical practice.24
Surgical techniques:
The CHIVA technique (ambulatory conservative hemodynamic correction of venous insufficiency) reduces disease recurrence in comparison with ligation and stripping and has been correlated with fewer adverse events.25 These findings are based on a few studies with high risk of bias, because of the impossibility of blinding and the small number of incidents reported. Better esthetic results are achieved through use of transillumination, but with a higher number of hematomas and more intense pain in the postoperative period.26 The quality of evidence for these findings was not assessed in that report. Studies with high risk of bias have suggested that use of tourniquets reduces bleeding.27 Mumme et al. described the valvuloplasty technique and concluded that it was suitable for preserving veins in specific patients who were at high risk of atherosclerotic disease. The quality of the evidence was not assessed.28 Pearson et al. took the view that surgery should continue to be used to treat varicose veins in public healthcare systems, although without indicating the most cost-effective technique.29 Due to the methodological limitations of the primary studies in that review, no meta-analysis was conducted. Rudström et al. assessed complications relating to the surgical approach and found that despite their infrequency, they were potentially harmful. The most common complication was bleeding after injury to the femoral vein or arterial lesions. The quality of the evidence was not appraised.30
Surgery versus sclerotherapy:
There was no evidence that one treatment was superior to any other. However, it was suggested that sclerotherapy was associated with lower cost of treatment and better results after one year of follow-up.31 Surgical outcomes are long-lasting, but it is unknown whether sclerotherapy outcomes also are. The overall quality of the studies included was considered low, mostly due to inadequate randomization. Complications relating to sclerotherapy were infrequent, but the data were deemed to be insufficient for conclusions to be drawn, and the methodological quality of the primary studies was considered low.32 Surgery versus endolaser therapy (EVLT): All studies concluded that EVLT was as safe as conventional surgery. Van den Bos et al.33 and Darwood and Gough34 found that rare but potentially harmful complications might be associated with EVLT treatment. The mild complications included ecchymosis, pain, superficial thrombophlebitis, nerve lesion, arteriovenous fistula and matting. The wavelengths applied in EVLT treatment ranged from 810 to 1320 nm, and these were associated with recanalization in 5% of the patients in the first year.34 Liu et al.35 and Pan et al.36 concluded that the results from the two types of treatment were similar over a follow-up period of two years when fibers of 810 nm and 980 nm were used. The quality of the evidence was not appraised. Pan et al.36 found that technical failure (saphenous reflux) was more frequent with EVLT, while Xiao et al.37 concluded that there were no differences in the results from EVLT and conventional surgery. Risk of bias was assessed in this study, but not the quality of the evidence. Hogan et al.38 and Mundy et al.39 came to contradictory conclusions, based on evidence that was of low quality because of ineffective randomization and losses during the follow-up.38 Hogan et al.38 concluded that the rates of reflux resolution were comparable, and Mundy et al.39 pointed out that EVLT was associated with higher rates of recanalization.
Similarly, Lynch et al.40 reported that there was a higher risk of recanalization over a twelve-month period, although EVLT was less frequently associated with nerve lesions, infections and skin pigmentation. The findings of that study were based on low-quality evidence. Ruiz-Aragón et al.41 also reported that there were fewer complications in the EVLT group, although it was assumed that a risk of bias existed due to exclusion of unpublished studies. 7. Surgery versus radiofrequency ablation (RFA): Radiofrequency ablation was found to be beneficial over the short term, due to lower risk of ecchymosis, hematoma and pain, a more positive impact on quality of life and faster return to work.42 On the other hand, radiofrequency ablation increased the risk of recanalization after 12 months.42 It was noteworthy that there was no reliable evidence supporting superiority of radiofrequency ablation over conventional surgery.43 The rates of complications like deep venous thrombosis reached 1.8%, and recurrence remained to be clarified. Patient satisfaction and preference were found to favor surgery. In Canada, the costs of radiofrequency ablation were considered lower, based on evidence of low to moderate quality.44
Surgery versus thermal ablation (EVLT or RFA):
Conventional surgery and thermal ablation were found to share comparable results over the long term,45 with no difference in recurrence rates.46 Compared with surgery, thermal ablation was considered safe and effective, with the advantage of being associated with faster recovery over the short and medium terms.47 The quality of evidence was not appraised in any of these studies.
EVLT versus RFA:
The outcomes were considered comparable over the short term48 and over a longer term of five years.49 He et al.48 and other reviews51,52 considered that minimally invasive techniques were as effective and safe as surgery. Thermal ablation was considered superior to surgery.53 According to Murad et al.,54 surgery and minimally invasive treatments were safe and effective, although minimally invasive procedures resulted in less disability and postoperative pain. Carrol et al.55 concluded that alternative therapies were a possible substitute for surgery, and pointed out that foam sclerotherapy was probably more cost-effective. Paravastu et al.56 found that the rate of recanalization of the small saphenous vein over the short term was higher in the conventional surgery group than in the EVLT group, and that the rate was uncertain for foam, compared with surgery. Overall, the quality of evidence either was considered low due to the small number of events and use of surrogate outcomes or was not appraised.
Compression versus surgery for leg ulcers:
One author considered compression to be the first-line treatment for leg ulcers.57 Surgery for leg ulcers: Samuel et al.58 did not identify any clinical trial. Mauck et al.59 recommended surgery and considered that surgical treatment might improve healing. This finding was mostly based on observational studies. According to Howard et al.,60 surgery was associated with rates of healing similar to those for compression alone, but presented lower levels of recurrence. The quality of evidence was not assessed.
Any postoperative intervention: Postoperative compression
may reduce the extent of hematomas and incidence of thrombophlebitis in treatments for telangiectasias and reticular veins over a three-week period.61 Conversely, Huang et al.62 concluded that compressive therapy lasting for more than seven days was not associated with clinical benefits regarding pain, edema, complication rate and absenteeism. In two studies by El-Sheikha et al.,63,64 no meta-analysis could be conducted because of substantial heterogeneity. Overall, the quality of evidence was either considered low or was not appraised.
The methodological quality of the systematic reviews described above was appraised through using the AMSTAR tool.12 Out of these 51 reviews, 18 presented high methodological quality, 21 were of moderate quality and 12 were of low quality (Annex 1).
Potential bias in conducting this overview
No study protocol was developed a priori for this analysis.
However, we followed the goals and methods that were initially planned.
No additional search was conducted in the gray literature.
However, we did conduct a hand search of references presented in the studies included in our review.
There may also be bias in relation to endolaser technology if studies using interventions at different stages of its development are compared.
DISCUSSION
This overview revealed heterogeneity in relation to many aspects of varicose disease, including terminology and classification.
While some authors described varicose veins as enlarged veins of more than 3 mm in diameter,4 others defined them as veins larger than 4 mm in diameter2 or included telangiectasias and reticular veins within the definition.5 There is still a need for standardization of terminology.65 Regarding prophylactic issues, Robertson et al.66 did not find any good-quality studies that would enable assessment of whether lifestyle modifications might be useful as prophylaxis and for avoid-
Annex 1 legend: Critical appraisal of the studies included, using the Assessment of Multiple Systematic Reviews (AMSTAR) tool.12 H = high methodological quality; M = moderate methodological quality; L = low methodological quality; NA = not applicable; U = unclear. Total scores of 0 to 4 were considered to represent low methodological quality; 5 to 8, moderate quality; and 9 to 11, high quality.12
saphenous veins that are not very tortuous and absence of previous thrombophlebitis). In real life, patients present heterogeneous disease concomitantly in the same limb. Therefore, there is frequently a need to make use of a combination of techniques to achieve the best results,31 based on the characteristics and clinical presentation of the varicose veins.52 It is crucial to establish criteria for choosing the most suitable technique for different clinical scenarios.45 Sclerotherapy is currently considered to be the first-line treatment for telangiectasias. Other therapies have been proposed as alternatives, but evidence to justify their choice is sparse and indirect.16,52,55 In fact, surrogate outcomes are frequently reported in trials. Thus, conclusions are based solely on technical parameters38,67 for heterogeneous populations68 with short follow-ups,54 which serves to increase the uncertainties rather than to resolve them.
Ligation and stripping are frequently chosen as the comparator because of their safety, effectiveness, cost issues and time span, and these have been used as a gold standard.55 The complications associated with surgery include nerve lesions, hematomas, postoperative pain and pigmentation. However, severe complications are rare.30 Minimally invasive treatments have been developed with the aim of reducing the risks and discomfort, as well as for reducing the time taken to return to work and optimizing cost-effectiveness.
Their efficacy and effectiveness are comparable to those obtained through conventional surgery, regardless of the parameters chosen for this comparison. Minimally invasive therapies or surgery cannot always be applied to particular patients.60 However, foam sclerotherapy seems to be particularly useful in this context, since it can be used alone or in combination with other interventions. For instance, it may improve the results after surgery, bearing in mind that no surgical technique is capable of eliminating all varicose veins. The limitations associated with foam sclerotherapy include higher risk of recanalization and pigmentation,56 along with the need for multiple sessions in order to obtain satisfactory results. These restrictions are surpassed by the benefits regarding cost-effectiveness.55 We therefore considered it odd that we did not find any studies focusing on foam sclerotherapy for leg ulcers. Since fibrotic tissue may prevent the possibility of stripping some varicose veins, which consequently could maintain the pathological condition and hence the ulcers, foam sclerotherapy might potentially be a better treatment for this population.
There is no evidence that compressive stockings might bring benefits for patients with primary varicose veins.15,16 Questions arise regarding the technical attributes of stockings (the type of elastic material and level of compression), the anatomical characteristics of the lower limbs and patients' mobility while using these stockings.69 Furthermore, there is low compliance due to discomfort, pruritus, skin irritation and edema.70,71 Adherence to compressive treatment over a four-week period is as low as 40%,70 thus compromising the accuracy of any estimates of treatment effect.63 To date, the causal relationship between symptoms and varicose veins remains uncertain.72 These factors may lead to many unnecessary treatments. On the positive side, stockings can be used to reduce the incidence of hematomas and thrombophlebitis61 and leg ulcers,57 thereby reducing the time taken for healing73 and the recurrence rate. However, it is logical to claim that the best intervention should aim to treat the primary cause of leg ulcers. It has been found that surgery is just as effective in healing leg ulcers as are compression stockings, and it additionally reduces the recurrence rate.60 This should always be considered in cases of leg ulcers that are associated with varicose disease.74 Even though use of stockings in the postoperative period has been recommended by some authors,63 the effectiveness of this intervention was not found to be superior over the short term (seven days) or medium term (three weeks).62 Regarding the implications for practice of our analysis, the important question to be formulated is how much longer should be waited before the paradigms for varicose vein treatment are changed.75 This question remains to be answered, considering the current body of literature. According to Chalmers and Glasziou,76 gaps in knowledge occur when study questions are not well formulated, studies are not well designed, studies are not published, or there is still a lack of data on a particular topic. Surgery seems to be the most frequent intervention for varicose vein disease in many countries, but new endovascular techniques may provide an alternative for reducing costs and risks. Nonetheless, the studies underpinning these observations have presented serious limitations that have had a negative impact on the strength of the derived evidence, due to the indirectness, low number of events and small sample sizes of these studies.
CONCLUSIONS
There is evidence of low to moderate quality to suggest that minimally invasive treatments, including foam sclerotherapy, laser and radiofrequency are comparable to conventional surgery, regarding their effectiveness and safety in treating lower-limb varicose veins.
Figure 1. PRISMA flow chart for study selection process.
Viewmaker Networks: Learning Views for Unsupervised Representation Learning
Many recent methods for unsupervised representation learning involve training models to be invariant to different "views," or transformed versions of an input. However, designing these views requires considerable human expertise and experimentation, hindering widespread adoption of unsupervised representation learning methods across domains and modalities. To address this, we propose viewmaker networks: generative models that learn to produce input-dependent views for contrastive learning. We train this network jointly with an encoder network to produce adversarial $\ell_p$ perturbations for an input, which yields challenging yet useful views without extensive human tuning. Our learned views, when applied to CIFAR-10, enable comparable transfer accuracy to the well-studied augmentations used for the SimCLR model. Our views significantly outperform baseline augmentations in speech (+9% absolute) and wearable sensor (+17% absolute) domains. We also show how viewmaker views can be combined with handcrafted views to improve robustness to common image corruptions. Our method demonstrates that learned views are a promising way to reduce the amount of expertise and effort needed for unsupervised learning, potentially extending its benefits to a much wider set of domains.
INTRODUCTION
Unsupervised representation learning has made significant recent strides, including in computer vision, where view-based methods have enabled strong performance on benchmark tasks (Wu et al., 2018;Oord et al., 2018;Bachman et al., 2019;Zhuang et al., 2019;Misra & Maaten, 2020;He et al., 2020;Chen et al., 2020a). Views here refer to human-defined data transformations, which target capabilities or invariances thought to be useful for transfer tasks. In particular, in contrastive learning of visual representations, models are trained to maximize the mutual information between different views of an image, including crops, blurs, noise, and changes to color and contrast (Bachman et al., 2019;Chen et al., 2020a). Much work has investigated the space of possible image views (and their compositions) and understanding their effects on transfer learning (Chen et al., 2020a;Tian et al., 2019;Purushwalkam & Gupta, 2020).
The fact that views must be hand designed is a significant limitation. While views for image classification have been refined over many years, new views must be developed from scratch for new modalities. Making matters worse, even within a modality, different domains may have different optimal views (Purushwalkam & Gupta, 2020). Previous studies have investigated the properties of good views through the lens of mutual information (Tian et al., 2020), but a broadly-applicable approach for learning views remains unstudied. In this work, we present a general method for learning diverse and useful views for contrastive learning. Rather than searching through possible compositions of existing view functions (Cubuk et al., 2018; Lim et al., 2019), which may not be available for many modalities, our approach produces views with a generative model, called the viewmaker network, trained jointly with the encoder network. This flexibility enables learning a broad set of possible view functions, including input-dependent views, without resorting to hand-crafting or expert domain knowledge. The viewmaker network is trained adversarially to create views which increase the contrastive loss of the encoder network. Rather than directly outputting views for an image, the viewmaker instead outputs a stochastic perturbation that is added to the input. This perturbation is projected onto an $\ell_p$ sphere, controlling the effective strength of the view, similar to methods in adversarial robustness. This constrained adversarial training method enables the model to reduce the mutual information between different views while preserving useful input features for the encoder to learn from.
In summary, we contribute: 1. Viewmaker networks: to our knowledge the first modality-agnostic method to learn views for unsupervised representation learning.
2. On image data, where expert-designed views have been extensively investigated, our viewmaker models achieve comparable transfer performance to state-of-the-art contrastive methods while being more robust to common corruptions.
3. On speech data, our method significantly outperforms existing human-defined views on a range of speech recognition transfer tasks.
4. On time-series data from wearable sensors, our model significantly outperforms baseline views on the task of human activity recognition (e.g., cycling, running, jumping rope).
RELATED WORK
Unsupervised representation learning Learning useful representations from unlabeled data is a fundamental problem in machine learning (Pan & Yang, 2009;Bengio et al., 2013). A recently successful framework for unsupervised representation learning for images involves training a model to be invariant to various data transformations (Bachman et al., 2019;Misra & Maaten, 2020), although the idea has much earlier roots (Becker & Hinton, 1992;Hadsell et al., 2006;Dosovitskiy et al., 2014). This idea has been expanded by a number of contrastive learning approaches which push embeddings of different views, or transformed inputs, closer together, while pushing other pairs apart (Tian et al., 2019;He et al., 2020;Chen et al., 2020a;b;. Related but more limited setups have been explored for speech, where data augmentation strategies are less explored (Oord et al., 2018;Kharitonov et al., 2020).
Understanding and designing views Several works have studied the role of views in contrastive learning, including from a mutual-information perspective , in relation to specific transfer tasks (Tian et al., 2019), with respect to different kinds of invariances (Purushwalkam & Gupta, 2020), or via careful empirical studies (Chen et al., 2020a). Outside of a contrastive learning framework, Gontijo-Lopes et al. (2020) study how data augmentation aids generalization in vision models. Much work has explored different handcrafted data augmentation methods for supervised learning of images (Hendrycks et al., 2020;Lopes et al., 2019;Perez & Wang, 2017;Yun et al., 2019;Zhang et al., 2017), speech (Park et al., 2019Kovács et al., 2017;Tóth et al., 2018;Kharitonov et al., 2020), or in feature space (DeVries & Taylor, 2017).
Adversarial methods Our work is related to and inspired by work on adversarial methods, including the $\ell_p$ balls studied in adversarial robustness (Szegedy et al., 2013; Madry et al., 2017; Raghunathan et al., 2018) and training networks with adversarial objectives (Goodfellow et al., 2014; Xiao et al., 2018). Our work is also connected to the vicinal risk minimization principle (Chapelle et al., 2001) and can be interpreted as producing amortized virtual adversarial examples (Miyato et al., 2018). Previous adversarial self-supervised methods add adversarial noise on top of existing human-defined transformations (Kim et al., 2020); in contrast, we do away entirely with human-defined views, using only learned adversarial ones. Outside of multi-view learning paradigms, adversarial methods have also been used for representation learning with GANs (Donahue & Simonyan, 2019) or choosing harder negative samples (Bose et al., 2018), as well as for data augmentation (Antoniou et al., 2017; Volpi et al., 2018; Bowles et al., 2018). Adversarial networks that perturb inputs have also been investigated to improve GAN training (Sajjadi et al., 2018) and to remove "shortcut" features (e.g., watermarks) for self-supervised pretext tasks (Minderer et al., 2020).
Figure 2: Diagram of our method. The viewmaker network is trained to produce adversarial views restricted to an $\ell_1$ sphere around the input.
Learning views Outside of adversarial approaches, our work is related to other studies that seek to learn data augmentation strategies by composing existing human-designed augmentations (Ratner et al., 2017;Cubuk et al., 2018;Zhang et al., 2019;Ho et al., 2019;Lim et al., 2019; or by modeling variations specific to the data distribution (Tran et al., 2017;Wong & Kolter, 2020). By contrast, our work requires no human-defined view functions, does not require pretraining a generative model, and can generate perturbations beyond variation observed in the training data.
METHOD
In contrastive learning, the objective is to push embeddings of positive views (derived from the same input) close together, while pushing away embeddings of negative views (derived from different inputs). We focus mainly on the successful SimCLR contrastive learning algorithm (Chen et al., 2020a), but our method is general and we also consider a memory bank-based algorithm (Wu et al., 2018) in Section 4. Formally, given positive views $i, j$, the SimCLR loss is
$$\mathcal{L}_{i,j} = -\log \frac{\exp(s_{i,j}/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(s_{i,k}/\tau)},$$
where $\tau$ is a temperature hyperparameter and $s_{a,b}$ is the cosine similarity of the embeddings of views $a$ and $b$.
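For concreteness, a minimal PyTorch sketch of this contrastive (NT-Xent) objective for a batch of paired views is shown below; the function name and the batch layout (the first N rows are one view, the last N rows the other) are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

def simclr_loss(z1, z2, temperature=0.07):
    """NT-Xent loss for N positive pairs (z1[i], z2[i]) of embeddings."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x D, unit norm
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                      # exclude self-pairs
    # The positive for row i is row i + N (and vice versa).
    targets = torch.arange(2 * n, device=z.device)
    targets = (targets + n) % (2 * n)
    return F.cross_entropy(sim, targets)

# Example usage with random embeddings standing in for encoder outputs:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(simclr_loss(z1, z2).item())
```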
We generate views by perturbing examples with a viewmaker network V , trained jointly with the main encoder network M . There are three attributes desirable for useful perturbations, each of which motivates an aspect of our method: 1. Challenging: The perturbations should be complex and strong enough that an encoder must develop useful representations to perform the self-supervised task. We accomplish this by generating perturbations with a neural network that is trained adversarially to increase the loss of the encoder network. Specifically, we use a neural network that ingests the input X and outputs a view X + V (X).
2. Faithful:
The perturbations must not make the encoder task impossible, being so strong that they destroy all features of the input. For example, perturbations should not be able to zero out the input, making learning impossible. We accomplish this by constraining the perturbations to an $\ell_p$ sphere around the original input. $\ell_p$ constraints are common in the adversarial robustness literature where perturbations are expected to be indistinguishable.
In our experiments, we find the best results are achieved with an $\ell_1$ sphere, which grants the viewmaker a distortion budget that it can spend on a small perturbation for a large part of the input or a more extreme perturbation for a smaller portion.
3. Stochastic: The method should be able to generate a variety of perturbations for a single input, as the encoder objective requires contrasting two different views of an input against each other. To do this, we inject random noise into the viewmaker, such that the model can learn a stochastic function that produces a different perturbed input each forward pass. Figure 2 summarizes our method. The encoder and viewmaker are optimized in alternating steps to minimize and maximize L, respectively. We use an image-to-image neural network as our viewmaker network, with an architecture adapted from work on style transfer (Johnson et al., 2016). See the Appendix for more details. This network ingests the input image and outputs a perturbation that is constrained to an $\ell_1$ sphere. The sphere's radius is determined by the volume of the input tensor times a hyperparameter $\epsilon$, the distortion budget, which determines the strength of the applied perturbation. This perturbation is added to the input image and optionally clamped in the case of images to ensure all pixels are in [0, 1]. Algorithm 1 describes this process precisely.
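A rough PyTorch sketch of the perturbation step just described is given below; the helper name and the simple scaling-based $\ell_1$ projection are assumptions for illustration, since the text only specifies that the perturbation lies on an $\ell_1$ sphere whose radius is the input volume times the distortion budget $\epsilon$.

```python
import torch

def apply_viewmaker(x, viewmaker, eps=0.05, clamp=True):
    """Perturb a batch x with a viewmaker network under an l1 budget.

    The raw perturbation delta = viewmaker(x) is rescaled per example so that
    ||delta||_1 equals eps times the number of elements in that example,
    i.e. it is placed on the l1 sphere of that radius.
    """
    delta = viewmaker(x)                               # raw, possibly stochastic perturbation
    flat = delta.flatten(start_dim=1)
    radius = eps * flat.shape[1]                       # eps * input volume
    norm = flat.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
    delta = (flat * (radius / norm)).view_as(x)        # project onto the l1 sphere
    views = x + delta
    return views.clamp(0.0, 1.0) if clamp else views   # keep image pixels in [0, 1]

# Example with a trivial stand-in for the real viewmaker network:
x = torch.rand(4, 3, 32, 32)
views = apply_viewmaker(x, lambda t: torch.randn_like(t), eps=0.05)
```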
IMAGES
We begin by applying the viewmaker to contrastive learning for images. In addition to SimCLR (Chen et al., 2020a), we also consider a memory bank-based instance discrimination framework (Wu et al., 2018, henceforth InstDisc), and note that our method can be naturally extended to other view-based algorithms such as MoCo (He et al., 2020), Local Aggregation (Zhuang et al., 2019), and BYOL (Grill et al., 2020).
We pretrain Resnet-18 (He et al., 2015) models on CIFAR-10 (Krizhevsky, 2009) for 200 epochs with a batch size of 256. We train a viewmaker-encoder system with a distortion budget of $\epsilon$ = 0.05. We tried distortion budgets $\epsilon \in$ {0.1, 0.05, 0.02} and found 0.05 to work best; however, we anticipate that further tuning would yield additional gains. As we can see in Figure 1, the learned views are diverse, consisting of qualitatively different kinds of perturbations and affecting different parts of the input. We compare the resulting encoder representations with a model trained with the expert views used for SimCLR, comprised of many human-defined transformations targeting different kinds of invariances useful for image classification: cropping-and-resizing, blurring, horizontal flipping, color dropping, and shifts in brightness, contrast, saturation, and hue (Chen et al., 2020a).
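The alternating (minimax) update implied by this joint training can be sketched roughly as follows, reusing the `simclr_loss` and `apply_viewmaker` sketches above; the optimizer handling and loop structure are illustrative assumptions rather than the authors' exact training code.

```python
import torch

def adversarial_pretraining_step(x, encoder, viewmaker, opt_enc, opt_view, eps=0.05):
    """One alternating update: the encoder minimizes the contrastive loss,
    then the viewmaker maximizes it (adversarial views under the l1 budget)."""
    # Encoder step: minimize the contrastive loss on two stochastic views.
    z1 = encoder(apply_viewmaker(x, viewmaker, eps))
    z2 = encoder(apply_viewmaker(x, viewmaker, eps))
    loss = simclr_loss(z1, z2)
    opt_enc.zero_grad()
    loss.backward()
    opt_enc.step()

    # Viewmaker step: maximize the same loss via gradient ascent (negated objective).
    z1 = encoder(apply_viewmaker(x, viewmaker, eps))
    z2 = encoder(apply_viewmaker(x, viewmaker, eps))
    adv_loss = -simclr_loss(z1, z2)
    opt_view.zero_grad()
    adv_loss.backward()
    opt_view.step()
    return loss.item()
```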
TRANSFER RESULTS ON IMAGE CLASSIFICATION TASKS
We evaluate our models on CIFAR-10, as well as eleven transfer tasks including MetaDataset (Triantafillou et al., 2019), MSCOCO (Lin et al., 2014), MNIST (LeCun et al., 1998), and FashionMNIST (Xiao et al., 2017). We use the standard linear evaluation protocol, which trains a logistic regression on top of representations from a frozen model. We apply the same views as in pretraining, freezing the final viewmaker when using learned views; we apply no views during validation. Table 1 shows our results, indicating comparable overall performance with SimCLR and InstDisc, all without the use of human-crafted view functions. This performance is noteworthy as our $\ell_1$ views cannot implement cropping-and-rescaling, which was shown to be the most important view function in Chen et al. (2020a). We speculate that the ability of the viewmaker to implement partial masking of an image may enable a similar kind of spatial information ablation as cropping. Is random noise sufficient to produce domain-agnostic views? To assess how important adversarial training is to the quality of the learned representations, we perform an ablation where we generate views by adding Gaussian noise normalized to the same $\epsilon$ = 0.05 budget as used in the previous section. Transfer accuracy on CIFAR-10 is significantly hurt by this ablation, reaching 52.01% for a SimCLR model trained with random noise views compared to 84.50% for our method, demonstrating the importance of adversarial training to our method.
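A minimal sketch of the linear evaluation protocol mentioned above (a frozen encoder with a logistic-regression head) might look like the following; the SGD settings mirror those listed in the appendix, while the dataloader handling and feature flattening are assumptions for illustration.

```python
import torch
import torch.nn as nn

def linear_eval(encoder, train_loader, num_classes, feat_dim, epochs=100, device="cpu"):
    """Train a logistic-regression head on frozen encoder features."""
    encoder.eval()                                  # keep the encoder frozen
    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                feats = encoder(images).flatten(start_dim=1)   # frozen features
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```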
THE IMPORTANCE OF INTER-PATCH MUTUAL INFORMATION AND CROPPING VIEWS
Cropping-and-resizing has been identified as a crucial view function when pretraining on ImageNet (Chen et al., 2020a). However, what properties of a pretraining dataset make cropping useful? We hypothesize that such a dataset must have images whose patches have high mutual information. In other words, there must be some way for the model to identify that different patches of the same image come from the same image. While this may be true for many object or scene recognition datasets, it may be false for other important pretraining datasets, including medical or satellite imagery, where features of interest are isolated to particular parts of the image.
To investigate this hypothesis, we modify the CIFAR-10 dataset to reduce the inter-patch mutual information by replacing each 16x16 corner of the image with the corner from another image in the training dataset (see Figure 3 for an example). Thus, random crops on this dataset, which we call CIFAR-10-Corners, will often contain completely unrelated information. When pretrained on CIFAR-10-Corners, expert views achieve 63.3% linear evaluation accuracy on the original CIFAR-10 dataset, while viewmaker views achieve 68.8%. This gap suggests that viewmaker views are less reliant on inter-patch mutual information than the expert views.
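A minimal sketch of how such a corner-swapped dataset could be constructed is shown below; the pairing strategy (a random cyclic shift of the dataset per quadrant) is an assumption, since the paper only states that each 16x16 corner is replaced with the corresponding corner from another training image.

```python
import numpy as np

def make_corner_swapped(images, seed=0):
    """Build a CIFAR-10-Corners-style dataset: every 16x16 quadrant of each
    32x32 image is replaced by the matching quadrant of another image.

    images: uint8 array of shape (N, 32, 32, 3).
    """
    rng = np.random.default_rng(seed)
    out = images.copy()
    quadrants = [(slice(0, 16), slice(0, 16)), (slice(0, 16), slice(16, 32)),
                 (slice(16, 32), slice(0, 16)), (slice(16, 32), slice(16, 32))]
    for rows, cols in quadrants:
        shift = int(rng.integers(1, len(images)))            # nonzero shift -> different image
        donors = (np.arange(len(images)) + shift) % len(images)
        out[:, rows, cols, :] = images[donors][:, rows, cols, :]
    return out
```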
ROBUSTNESS TO COMMON CORRUPTIONS
Image classification systems should behave robustly even when the data distribution is slightly different from that seen during training. Does using a viewmaker improve robustness against common types of corruptions not experienced at train time? To answer this, we evaluate both learned views, expert views, and their composition on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2019), which assesses robustness to corruptions like snow, pixelation, and blurring. In this setting, corruptions are applied only at test time, evaluating whether the classification system is robust to some types of corruptions to which humans are robust.
(a) Accuracy on CIFAR-10 and CIFAR-C. *Overlap with CIFAR-C corruptions. (b) Accuracy gain on CIFAR-C from adding our learned views atop expert views.
As shown in Table 4a, SimCLR augmentations perform quite well, preventing a large drop in accuracy on corrupted data. This accuracy is somewhat expected, as the expert views overlap significantly with the CIFAR-C corruptions: both include blurring, brightness, and contrast transformations. Remarkably, however, our learned views, which have no explicit overlap with the CIFAR-C augmentations, also enable significant robustness, though to a slightly lesser degree. More importantly, when we train a viewmaker network while also applying expert augmentations ("Combined," Table 4a), we can further improve the robust accuracy, with notable gains on noise and glass blur corruptions (Figure 4b). We also notice a smaller decline in contrast corruption accuracy, possibly due to interactions between changing pixel magnitudes and the $\ell_p$ constraint. In the Combined setting, we use a distortion budget of $\epsilon$ = 0.01, which we find works better than $\epsilon$ = 0.05, likely because combining the two augmentations at their full strength would make the learning task too difficult.
These results suggest that learned views are a promising avenue for improving robustness in self-supervised learning models.
SPEECH
Representation learning on speech data is an emerging and important research area, given the large amount of available unlabeled data and the increasing prevalence of speech-based human-computer interaction (Latif et al., 2020). However, compared to images, there is considerably less work on self-supervised learning and data augmentations for speech data. Thus, it is a compelling setting to investigate whether viewmaker augmentations are broadly applicable across modalities.
SELF-SUPERVISED LEARNING SETUP
We adapt the contrastive learning setup from SimCLR (Chen et al., 2020a). Training proceeds largely the same as for images, but the inputs are 2D log mel spectrograms. We consider both view functions applied in the time-domain before the STFT, including noise, reverb, pitch shifts, and changes in loudness (Kharitonov et al., 2020), as well as spectral views, which involve masking or noising different parts of the spectrogram (Park et al., 2019). To generate learned views, we pass the spectrogram as input to the viewmaker. We normalize the spectrogram to mean zero and variance one before passing it through the viewmaker, and do not clamp the resulting perturbed spectrogram.
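For readers unfamiliar with this preprocessing, a rough sketch of turning a waveform into a normalized log mel spectrogram is given below using torchaudio; the specific parameter values here are placeholders rather than the paper's settings (those are listed in the appendix).

```python
import torch
import torchaudio

def waveform_to_logmel(waveform, sample_rate=16000, n_mels=64, n_fft=400, hop_length=160):
    """Compute a log mel spectrogram and normalize it to zero mean, unit variance."""
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )(waveform)                                    # (channels, n_mels, time)
    log_mel = torchaudio.transforms.AmplitudeToDB()(mel)
    return (log_mel - log_mel.mean()) / (log_mel.std() + 1e-6)

# Example with one second of random audio standing in for real speech:
spec = waveform_to_logmel(torch.randn(1, 16000))
```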
SPEECH CLASSIFICATION RESULTS
We evaluate on three speech classification datasets: Fluent Speech Commands (Lugosch et al., 2019), Google Speech Commands (Warden, 2018), and spoken digit classification (Becker et al., 2018), as well as speaker classification on VoxCeleb (Nagrani et al., 2017) and Librispeech (Panayotov et al., 2015), all using the linear evaluation protocol for 100 epochs. We report results with both the same distortion budget $\epsilon$ = 0.05 as in the image domain, as well as a larger $\epsilon$ = 0.1, for comparison. Both versions significantly outperform the preexisting waveform and spectral augmentations. The gains for real-world tasks such as command identification are compelling. One notable exception is the task of LibriSpeech speaker identification. Since LibriSpeech is the same dataset the model was pretrained on, and this effect is not replicated on VoxCeleb1, the other speaker classification dataset, we suspect the model may be picking up on dataset-specific artifacts (e.g. background noise, microphone type) which may make the speaker ID task artificially easier. An interesting possibility is that the worse performance of viewmaker views may result from the model being able to identify and ablate such spurious correlations in the spectrograms.
WEARABLE SENSOR DATA
To further validate that our method for learning views is useful across different modalities, we consider time-series data from wearable sensors. Wearable sensor data has a broad range of applications, including health care, entertainment, and education (Lara & Labrador, 2012). We specifically consider whether viewmaker views improve representation learning for the task of human activity recognition (HAR), for example identifying whether a user is jumping rope, running, or cycling.
SELF-SUPERVISED LEARNING SETUP
We consider the Pamap2 dataset (Reiss & Stricker, 2012), a dataset of 12 different activities performed by 9 participants. Each activity contains 52 different time series, including heart rate, accelerometer, gyroscope, and magnetometer data collected from sensors on the ankle, hand, and chest (all sampled at 100Hz, except heart rate, which is sampled at approximately 9Hz). We linearly interpolate missing data, then take random 10s windows from subject recordings, using the same train/validation/test splits as prior work (Moya Rueda et al., 2018). To create inputs for our model, we generate a multi-channel image composed of one 32x32 log spectrogram for each sensor timeseries window. Unlike speech data, we do not use the mel scale when generating the spectrogram. We then normalize the training and validation datasets by subtracting the mean and then dividing by the standard deviation of the training dataset. We train with both our learned views and the spectral views (Park et al., 2019) that were most successful in the speech domain (for multi-channel spectral masking, we apply the same randomly chosen mask to all channels). We also compare against a variant of these views with spectrogram noise removed, which we find improves this baseline's performance.
SENSOR-BASED ACTIVITY RECOGNITION RESULTS
We train a linear classifier on the frozen encoder representations for 50 epochs, reporting accuracy on the validation set. We sample 10k examples for each training epoch and 50k examples for validation. Our views significantly outperform spectral masking by 12.8% absolute when using the same $\epsilon$ = 0.05 as image and speech, and by 16.7% absolute when using a larger $\epsilon$ = 0.5 (Table 3). We also find that a broad range of distortion budgets produces useful representations, although overly-aggressive budgets prevent learning (Table 3). These results provide further evidence that our method for learning views has broad applicability across different domains.
SEMI-SUPERVISED EXPERIMENTS
An especially important setting for self-supervised learning is domains where labeled data is scarce or costly to acquire. Here, we show that our method can enable strong performance when labels for only a single participant (Participant 1) out of seven are available. We compare simple supervised learning on Participant 1's labels against linear evaluation of our best pretrained model, which was trained on unlabeled data from all 7 participants. The model architectures and training procedures are otherwise identical to the previous section. As Figure 4 shows, pretraining with our method on unlabeled data enables significant gains over pure supervised learning when data is scarce, and even slightly outperforms the hand-crafted views trained on all 7 participants (cf.
CONCLUSION
We introduce a method for learning views for contrastive learning, demonstrating its effectiveness across image, speech, and wearable sensor modalities. Our novel generative model, the viewmaker network, enables us to efficiently learn views as part of the representation learning process, as opposed to relying on domain-specific knowledge or many costly pretraining runs. There are many interesting avenues for future work. For example, while the $\ell_1$ constraint is simple by design, there may be other kinds of constraints that enable richer spaces of views and better performance. In addition, viewmaker networks may find use in supervised learning, through the lens of data augmentation and robustness. Finally, it is interesting to consider what happens as the viewmaker networks increase in size: do we see performance gains or robustness-accuracy trade-offs (Raghunathan et al., 2019)? Ultimately, our work is a step towards reducing the amount of expertise, time, and compute needed to make unsupervised learning work across a much broader set of domains.

For pretraining, we use a ResNet18 encoder without the maxpool layer after the first convolutional layer. We found removing this to be crucial for performance across all models when working with smaller input images of 32x32 pixels. We use an embedding dimension of size 128 and do not use an additional projection head as in Chen et al. (2020b). For the SimCLR objective, we use a temperature of 0.07. For the InstDisc objective, we use 4096 negative samples from the memory bank and an update rate of 0.5. We optimize using SGD with batch size 256, learning rate 0.03, momentum 0.9, and weight decay 1e-4 for 200 epochs with no learning rate dropping, which we found to hurt performance in CIFAR-10.
For the viewmaker, we adapt the style transfer network from Johnson et al. (2016), using a PyTorch implementation, 1 but use three residual blocks instead of five, which we found did not hurt performance despite the reduced computation. To add stochasticity, we concatenate a uniform random noise channel to the input and the activations before each residual block.
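The noise-injection idea described here can be sketched as follows; the module below is a simplified stand-in for the style-transfer architecture (the convolutional details are omitted) and is only meant to show how a fresh uniform-noise channel might be concatenated before a block, as the text describes.

```python
import torch
import torch.nn as nn

class NoisyBlock(nn.Module):
    """Wrap a block so that a fresh uniform-noise channel is concatenated each call."""
    def __init__(self, block):
        super().__init__()
        self.block = block

    def forward(self, x):
        noise = torch.rand(x.shape[0], 1, *x.shape[2:], device=x.device)
        return self.block(torch.cat([x, noise], dim=1))

# Example: a tiny stand-in block expecting the original channels plus one noise channel.
block = NoisyBlock(nn.Conv2d(4, 3, kernel_size=3, padding=1))
out = block(torch.rand(2, 3, 32, 32))   # different noise on every forward pass
```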
Additionally, we performed preliminary experiments with a U-Net architecture (Ronneberger et al., 2015) for the viewmaker but found significantly worse performance. We leave a more in-depth investigation of the role of architecture and model size in the effectiveness of the viewmaker to future work.
During transfer (linear evaluation), we use the pre-pooling features after the last convolutional layer of the ResNet18, totaling 512*7*7 dimensions. We load the parameters from the final iteration of pretraining. We optimize a logistic regression model with the frozen ResNet18 model using SGD with learning rate 0.01, momentum 0.9, weight decay 0, batch size 128 for 100 epochs. We drop the learning rate by a factor of 10 on epochs 60 and 80. We preprocess the training and validation datasets by subtracting and dividing by the mean and standard deviation of the training dataset, respectively. For models trained with a viewmaker network, we load and freeze the final viewmaker checkpoint to supply augmentations during transfer training. Otherwise, we use the same expert views used during pretraining.
The CIFAR-10-Corners experiments were conducted in the same way, except that the transfer task is the original CIFAR-10 dataset.
For the robustness experiments on CIFAR-10-C, the final transfer model trained on CIFAR-10 was evaluated without further training on the CIFAR-10-C dataset.
B.2 SPEECH EXPERIMENTS
The setup for the speech experiments is almost identical to images. The primary distinction is in preprocessing the data. In our experiments, pretraining is done on two splits of LibriSpeech: a smaller set containing 100 hours of audio and a larger set containing 960 hours of audio. Each instance is a raw waveform. We pick a maximum limit of 150k frames and truncate waveforms containing more frames. We randomly pick whether to truncate the beginning or end of the waveform during training, whereas for evaluation, we always truncate from the end. Next, we compute log mel spectrograms on the truncated waveforms as the input to our encoder. For 100hr LibriSpeech, we use a hop length of 2360 and set the FFT window length to be 64, resulting in a 64x64 tensor. For the 960hr LibriSpeech, we wanted to show our method generalizes to larger inputs, so we use a hop length of 672 with an FFT window length of 112 for a tensor of size 112x112. Finally, we log the spectrogram by squaring it and converting power to decibels.
For expert views, we consider both a method that applies views directly to the waveforms (Kharitonov et al., 2020) and a method that does so on the resulting spectrograms (Park et al., 2019).
For the former, we use code from the NLPAUG library 2 to take a random contiguous crop of the waveform with scale (0.08,1.0) and add Gaussian noise with scale 1. We randomly mask contiguous segments on the horizontal (frequency) and vertical (time) axes for the latter.
To do this, we also use the NLPAUG library and employ the FREQUENCYMASKINGAUG and TIMEMASKINGAUG functions with MASK FACTOR set to 40. Having done this, we are left with a 1x64x64 tensor for the 100-hour dataset or a 1x112x112 tensor for the 960-hour dataset. For the former, we use the same ResNet18 as described above; pretraining and transfer use the same hyperparameters. In the latter, we use a ResNet50 encoder with an MLP projection head with a hidden dimension of 2048. We use TORCHVISION implementations (Paszke et al., 2019). We still use the pre-pooling features for transfer in this setting as we found better performance than using post-pooling features. Otherwise, hyperparameters are identical to the 100-hour setting (and the CIFAR10 setting).
B.3 WEARABLE SENSOR EXPERIMENTS
The experimental paradigm for wearable sensor data largely follows that for speech. To generate an example, we randomly sample a subject (from the correct training split) and activity; we next randomly sample a contiguous 10s frame, linearly interpolating missing data. We generate spectrograms for each of the 52 sensors without Mel scaling, using 63 FFT bins, a hop length of 32, and a power of 2, then take the logarithm after adding 1e-6 for numerical stability. This process yields 52 32x32 spectrograms, which we treat as different channels, giving a tensor of shape [52,32,32]. We then normalize the spectrograms by subtracting and dividing by the mean and standard deviation of 10k samples from the training set.
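A hedged sketch of this per-sensor spectrogram construction is given below using torchaudio; the exact STFT call and the final trimming are assumptions chosen to match the stated shapes (an n_fft of 63 yields 32 frequency bins, and a hop length of 32 over a 10 s window at 100 Hz yields roughly 32 frames).

```python
import torch
import torchaudio

def sensor_window_to_spectrograms(window, n_fft=63, hop_length=32):
    """Turn a (52, 1000) window of sensor signals (10 s at 100 Hz) into a
    (52, 32, 32) stack of log power spectrograms, one channel per sensor."""
    spec = torchaudio.transforms.Spectrogram(n_fft=n_fft, hop_length=hop_length, power=2)
    spectrograms = spec(window)                    # (52, freq_bins=32, time_frames)
    log_spec = torch.log(spectrograms + 1e-6)      # log with a small constant for stability
    # Normalization (subtract training mean, divide by training std) happens afterwards.
    return log_spec[..., :32]                      # trim to a square 32x32 per sensor

window = torch.randn(52, 1000)                     # placeholder for a real sensor window
x = sensor_window_to_spectrograms(window)          # tensor of shape (52, 32, 32)
```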
B.4 TRAINING COSTS
We train all models on single NVIDIA Titan XP GPUs. On our system, training with a viewmaker network roughly increased training time by 50% and GPU memory utilization by 100%.
C ADDITIONAL GENERATED VIEWS
C.1 CIFAR-10 VIEWS
We visualize more views for CIFAR in Figure 5. We also visualize the difference between examples and their views (rescaled to [0,1]) in Figure 6. These figures further demonstrate the complexity, diversity, and input dependence of the viewmaker views.
C.1.1 APPLYING PERTURBATION IN THE FREQUENCY DOMAIN
Are there other natural ways of generating perturbations with bounded complexity? One other technique we considered was applying views in the frequency domain. Specifically, we apply a Discrete Cosine Transform (Ahmed et al., 1974, DCT) before applying the $\ell_1$-bounded perturbation, then apply the inverse DCT to obtain an image in the original domain. We use a PyTorch library 3 to compute the DCT, which is a differentiable transform. After a coarse hyperparameter search, we achieved the best results with $\epsilon$ = 1.0: 74.4% linear evaluation accuracy on CIFAR-10, much lower than our other models. However, the views are still illustrative, and we show some examples in Figure 7.
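A rough sketch of this frequency-domain variant is shown below, reusing the `apply_viewmaker` helper from the earlier sketch; the `torch_dct` package and its `dct_2d`/`idct_2d` functions are an assumption about which DCT implementation is used, and the composition shown is illustrative rather than the paper's exact pipeline.

```python
import torch
import torch_dct  # third-party package assumed to provide dct_2d / idct_2d

def frequency_domain_view(x, viewmaker, eps=1.0):
    """Perturb an image batch in DCT space, then transform back (illustrative sketch)."""
    coeffs = torch_dct.dct_2d(x)                                        # to frequency domain
    perturbed = apply_viewmaker(coeffs, viewmaker, eps, clamp=False)    # l1-bounded perturbation
    return torch_dct.idct_2d(perturbed).clamp(0.0, 1.0)                 # back to pixel space
```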
C.2 LIBRISPEECH VIEWS
We visualize some views for random LibriSpeech spectrograms in Figure 8, as well as showing deltas between spectrograms and views in Figure 9. The figures show how the viewmaker applies a variety of kinds of perturbations across the entire spectrogram.
C.3 WEARABLE SENSOR VIEWS
We visualize deltas between Pamap2 spectrograms and their views in Figure 10.
LibriSpeech figure caption: Each 3x3 panel shows data and views from a different sensor and example. Color scale endpoints set to -2.5 (red) to +2.5 (blue), although some values exceed these endpoints. Spectrograms are 64x64 log mel spectrograms from LibriSpeech 960 hours. Distortion budget is $\epsilon$ = 0.05.
Figure 10: Difference between random Pamap2 spectrograms and their viewmaker views. Original spectrogram shown in center, diffs shown on perimeter. Each 3x3 panel shows data from a different example and sensor. Color scale endpoints set to -2 (red) to +2 (blue), although some values exceed these endpoints. Distortion budget is $\epsilon$ = 0.05.
Repair of Injury in Freeze-Dried Salmonella anatum
Repair of injury induced by freeze-drying Salmonella anatum in nonfat milk solids occurred rapidly after rehydration. Injury in surviving cells was defined as the inability to form colonies on a plating medium containing deoxycholate. Death was defined as inability to form colonies in the same medium without this selective agent. The rate of repair of injury was reduced by lowering the temperature from 35 C to 10 C and was extremely low at 1 C. Repair was independent of pH between 6.0 and 7.0. Repair did not require synthesis of protein, ribonucleic acid, or cell wall mucopeptide, but did require energy in the form of adenosine triphosphate (ATP) synthesized through oxidative phosphorylation. The requirement for ATP was based on dinitrophenol or cyanide interference with repair. Dinitrophenol activity was pH-dependent; no repair occurred at pH 6.0 and some repair was observed at pH 6.5 and above. Injured cells were extremely sensitive to low concentrations of ethylenedinitrilotetraacetate. This indicated that freeze-drying injury of S. anatum may involve the lipopolysaccharide portion of the cell wall and that repair of this damage requires ATP synthesis.
Injury of freeze-dried microorganisms and repair of this damage upon rehydration in a suitable environment has been demonstrated (24,26). The exact site(s) of this injury has not been specifically determined, but there are indications that cell wall, cell membrane, and ribonucleic acid (RNA) might be involved (26,29). Freeze-dried cells, with defective permeability due to membrane damage, release RNA fragments and amino acids upon rehydration (26,29).
The nature of the repair process that occurs upon rehydration of damaged cells has not been fully characterized. After rehydration, injured cells first regain their altered permeability and then initiate RNA synthesis followed by protein synthesis. This repair process ceases as deoxyribonucleic acid (DNA) synthesis begins (26).
In thermally stressed cells, alteration of permeability and degradation of RNA, especially of ribosomal RNA, have been reported (1,11,27). During repair, the heat-stressed cells restore their altered permeability and synthesize ribosomal RNA and protein (16,21). Release of biologically active peptides from frozen damaged cells also has been observed (15).
In this paper, the process of repair of injury in Salmonella anatum freeze-dried in milk is described and characterized. A preliminary report of these findings was presented previously (B. Ray, J. J. Jezeski, and F. F. Busta, Bacterial Proc., p. 3, 1970).
MATERIALS AND METHODS
The test organism, S. anatum NF3, was propagated, frozen, and freeze-dried in reconstituted sterile 10% solids nonfat dry milk as described previously (24). The plating media used were xylose-lysine-peptone-agar (XLP) and XLP with 0.25% sodium deoxycholate added (XLDP). Freeze-dried samples obtained from 10 ml of 10% solid nonfat milk containing S. anatum cells were stored at 25 C for 24 hr before use.
Each sample was rehydrated rapidly (5 sec) with 10 ml of sterile water at 25 C and mixed for 1 min on a Vortex mixer (Scientific Products, Evanston, Ill.). These samples contained approximately 10^7 cells/ml. A 1-ml portion was diluted with 9 ml of water and mixed for 30 sec. From this diluted sample, a 1-ml portion was added to 9 ml of test solution in a screwcap test tube (150 by 25 mm) and was incubated at 25 C. The test solution contained one or more of the chemical agents or 9 ml of water as a control. At indicated time intervals, the test solutions were sampled, serially diluted, and plated on the two media. The first plating time after rehydration was noted for each test system. A 0.1-ml portion was surface-plated in each of three plates of each medium. When fresh cells were used as a control, a 24-hr milk culture incubated at 35 C was diluted with milk to give about 10^7 cells/ml, and 10-ml test quantities were tested in the same manner. The method for calculating the percentage of injury or death has been stated previously (24). All of the chemicals were dissolved in sterile water in 10 times concentration, and 1-ml portions were used in the test solutions. The required amount of sterile distilled water was added to make a 9-ml volume of test solution. The test solutions with the sample contained about 0.1% milk solids (except for the control, which contained 9 ml of water). The stock solutions were stored at 5 C for no more than 7 days. Dinitrophenol (DNP) was dissolved in water by mild heat treatment. EDTA was prepared by dissolving disodium (2H2O) and tetrasodium (anhydrous) salts separately and then mixing them in proportion to give
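The cited calculation method is not reproduced in this paper. Purely as an illustration of the definitions used here (injured survivors form colonies on XLP but not on XLDP; dead cells form colonies on neither), one plausible way to compute the percentages from plate counts is sketched below; the formulas are an assumption based on those definitions, not the authors' published method.

```python
def injury_and_death_percentages(count_xlp, count_xldp, count_initial):
    """Illustrative calculation from colony counts (CFU/ml).

    count_xlp:     survivors (colonies on XLP, the nonselective medium)
    count_xldp:    uninjured survivors (colonies on XLDP, with deoxycholate)
    count_initial: population before freeze-drying
    """
    pct_injury = 100.0 * (count_xlp - count_xldp) / count_xlp
    pct_death = 100.0 * (count_initial - count_xlp) / count_initial
    return pct_injury, pct_death

# Example: 1e7 cells before drying, 1e6 survivors, 1e5 of them uninjured.
print(injury_and_death_percentages(1e6, 1e5, 1e7))  # (90.0, 90.0)
```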
RESULTS
Effect of incubation temperature on repair. Portions of a rehydrated and diluted freeze-dried sample were transferred to test solutions pretempered at 1 to 35 C. The test solutions were incubated at these temperatures for the 2-hr test period. Apparent populations on XLP remained essentially constant up to 2 hr at all test temperatures except for a gradual reduction in counts observed at 1 C (Fig. 1). The counts on XLDP increased at different rates at all temperatures. The slowest rate of increase was observed at 1 C. After 2 hr of incubation, only about 15 to 20% of the viable cells remained injured at 35 and 25 C, whereas about 80% remained damaged at 1 C. At 10 and 15 C, the rate and extent of repair was slower than at 25 and 35 C.
Effect of pH on repair. Repair was evaluated at four pH levels from 5.5 to 7.0 by adjustments of the test solutions with predetermined amounts of 0.1 N HCl and 0.1 N NaOH. The data in Fig. 2 show that, during a 2-hr test period, numbers observed on XLP remained constant at pH levels of 6.0, 6.5, and 7.0, but dropped gradually at pH 5.5. The increase in numbers of colonies on XLDP, indicating the rate and extent of repair, was rapid and essentially the same at pH levels of 6.0, 6.5, and 7.0 but relatively less at pH 5.5. The initial high recovery on XLDP at pH 5.5 and 6.0 was due to difference in time elapsed between rehydration and first plating.
FIG. 1. Effect of incubation temperature on repair of injury of freeze-dried Salmonella anatum. Times of first plating after rehydration were 3 min for the 35 C sample, 5 min for 25 C, 7 min for 15 C, 9 min for 10 C, and 11 min for 1 C. The samples were plated on xylose-lysine-peptone-agar (XLP) and XLP with 0.25% sodium deoxycholate (XLDP). The test solutions contained about 0.1% milk solids.
FIG. 2. Effect of pH on repair of injury of freeze-dried Salmonella anatum. Times of first plating after rehydration were 3 min for pH 7.0, 5 min for pH 6.5, 7 min for pH 6.0, and 9 min for pH 5.5. The samples were plated on xylose-lysine-peptone-agar (XLP) and XLP with 0.25% sodium deoxycholate (XLDP). The test solutions contained about 0.1% milk solids. Incubation was at 25 C.
Effect of antimicrobial agents on repair and growth. Several antimicrobial agents which either prevent synthesis of specific cell components or interfere with normal cellular functions were used in the test solutions (Table 2). The pH levels of the test solutions after the addition of the sample were between 6.0 and 7.0. Since repair was unaffected in this pH range, no adjustment was made. The time of the first plating, the pH of the test solution, and the concentration of each agent are given in Table 1. The variation in the percentage of injury with different compounds at 0 hr of plating was due largely to the difference in time of their initial plating.
In water (containing 0.1% milk solids), the initial 90% injury was reduced to 17% after 2 hr. In the presence of penicillin and D-cycloserine, there was a reduction in injury at 1 hr followed by an increase at 2 hr. Considerable reduction of injury occurred in the presence of actinomycin D, chloramphenicol, and tetracycline. Sodium deoxycholate, when present in the XLDP, inhibited the colony-forming ability of the injured cells. However, in the test solution it did not prevent repair. Sodium dodecyl sulfate produced similar results.
In the presence of about 0.1 mM available EDTA, partial repair occurred as the original 89% injury was reduced to 55% after 1 hr. The percentage of injury changed little in the next hour; however, about 47% of the freeze-dried cells were killed by EDTA. Under similar conditions with fresh cells, only 18% were killed, but about 60% of the survivors were injured from exposure to EDTA.
In the presence of DNP, no reduction in injury occurred during 2 hr. About 25% of the cells were killed during this test period. Partial recovery was observed in the presence of sodium cyanide.
To test the effect of actinomycin D, chloramphenicol, DNP, and cyanide on fresh and freeze-dried S. anatum cells, growth of cells incubated at 25 C for 7 hr was measured on XLP. The results presented in Fig. 3 show that the growth of control fresh and freeze-dried cells had lag phases of 2 and 3 hr, respectively. With actinomycin D, the fresh cells had about a 2-hr lag, whereas the freeze-dried cells did not start growth until after 4 hr. In the presence of chloramphenicol, both the fresh and freeze-dried cells started growth after about 3 and 4 hr, respectively. Growth rates in the presence of chloramphenicol were slower than with control fresh or freeze-dried cells. Both fresh and freeze-dried cells showed no initiation of growth in the presence of DNP or sodium cyanide during the 7-hr test period.

Effect of deoxycholate on repair. The effect of deoxycholate on repair was studied with XLP and XLDP broths (identical to the solid plating media but without agar) and with deoxycholate in water (Fig. 4). XLDP broth was prepared by adding 0.25% deoxycholate to XLP broth immediately before adding cells, and this broth or water was used as the test solution (Fig. 4). In the deoxycholate-water solution, the initial numbers on XLP indicated 90% death. Little injury was observed in XLDP broth, because the initial counts obtained on XLP and XLDP were similar. These counts remained constant during the 2-hr test period.

FIG. 5. Effect of DNP or sodium cyanide (CN, 50 µg/ml) or both on repair of injury of freeze-dried Salmonella anatum. The test solutions contained about 0.1% milk solids. The pH levels of the test solutions were adjusted with 20 mM phosphate buffer (mono- and dibasic, sodium). Incubation was at 25 C.
Effect of DNP and sodium cyanide on repair. Both DNP and cyanide interfered with repair of injury in freeze-dried S. anatum cells (Fig. 5). This inhibition of the repair process by DNP was pH-dependent. Repair in test solutions adjusted with 20 mM sodium phosphate buffer indicated that pH levels from 6.0 to 7.0 did not influence the rate of repair, and at the end of 2 hr about 20% of the cells remained injured (Fig. 5). DNP completely inhibited the repair process at pH 6.0, but at pH 6.5 and 7.0 about 20 and 40% of the cells, respectively, were repaired in 2 hr. No pH dependency was observed with cyanide. Between pH 6.0 and 7.0, about 50% of the cells remained injured after 2 hr in the presence of cyanide. However, in the presence of both DNP and cyanide at pH 7.0, about 75 to 80% remained injured. The repair process appeared to require energy in the form of adenosine triphosphate (ATP); however, ATP supplementation (75 µg/ml) did not counteract the DNP inhibition of the repair process (data not shown).

DISCUSSION

About 90% of the surviving freeze-dried S. anatum cells exhibited injury when plated within 3 min after rehydration, but these damaged cells were repaired rapidly after rehydration in a suitable medium. At 25 C, most of the cells were repaired within 2 hr and grew after about 3 hr. Sinskey and Silverman (26) reported that rehydrated freeze-dried Escherichia coli cells incubated at 37 C started repair after about 5 hr and had about an 8-hr lag period. This delayed repair and extended lag may have been due to the minimal broth used by these workers. They observed only about 25% injury among the survivors. This was low in comparison to the 90% injury that was observed in the freeze-dried S. anatum cells used here. These apparent differences could be due to the use of different plating media in determining the amount of injury. Sinskey and Silverman (26) used a minimal agar medium and Trypticase Soy Agar with yeast extract as a complete medium. In the present study, a selective agent added to the medium was used to determine damage, and this may have affected more cells with different degrees of injury than did a minimal medium.
The rate of repair was dependent on the temperature and pH of the medium. At low temperatures and low pH, the process was relatively slow.
The repair of freeze-drying injury in S. anatum apparently did not involve the synthesis of mucopeptide, proteins, or RNA. The freeze-dried cells were repaired in the presence of penicillin (12) and D-cycloserine (18), which are inhibitors of cell wall mucopeptide synthesis. S. anatum cells were susceptible to these two agents, as about 90% of the fresh cells were killed within 2 hr in the presence of penicillin and D-cycloserine. These fresh and uninjured cells (with a lag period of about 2 hr at 25 C) were probably synthesizing different cell constituents, including mucopeptide, during this lag period and became sensitive to penicillin and D-cycloserine. Most of the freeze-dried cells were repairing their injury during this same time period. This repair did not appear to involve mucopeptide synthesis, and thus the cells remained unaffected by these two agents.
The cells repaired their injury in the presence of actinomycin D, an inhibitor of RNA synthesis (6). This indicated no involvement of RNA synthesis in repair. Synthesis of RNA during repair of freeze-dried E. coli cells was reported by Sinskey and Silverman (26). These contradicting observations could be due to differences in the composition of test solution, methods of determination of injury, and test organisms. Although fresh S. anatum cells were not affected by actinomycin D, freeze-dried cells initially were sensitive to this antimicrobial material. However, this sensitivity was lost as repair of the cells occurred. Actinomycin D added 1 and 2 hr after rehydration of freeze-dried S. anatum cells did not have an effect (data not presented).
The data also indicated that freeze-drying of S. anatum caused some surface alteration of the cells so that permeability was impaired. The cells became permeable to molecules like actinomycin D, which otherwise could not enter the cells. However, this surface alteration was repaired quickly, and the repaired cells again became impermeable to these molecules. Similar permeability losses due to different sublethal stresses including freeze-drying have been reported by many workers (4,13,15,20,26,29). Chloramphenicol and tetracycline inhibit protein synthesis (25). Other workers have observed protein synthesis during repair of freeze-drying injury (26) or heat injury (16). However, our data showing repair in the presence of these agents indicated a lack of involvement of protein synthesis.
Two surface-active agents, deoxycholate and dodecyl sulfate, were used in the test solutions on the basis that if injury involved the lipoprotein of the cell membrane (7), the repair process might be affected (19). The inability of the injured cells to repair themselves and grow on XLDP implied action by deoxycholate; however, repair occurred in solutions that contained deoxycholate and dodecyl sulfate. These results did not eliminate the possibility of damage in the cell membrane. However, these data indicated that deoxycholate alone was not responsible in itself for preventing repair and colony formation by the injured cells.
Studies with XLDP broth confirmed this observation.
Injury repair in the presence of EDTA indicated that injured freeze-dried cells, although able to repair themselves, were extremely sensitive to a low concentration of EDTA. Under similar conditions, the fresh cells of S. anatum showed little death but a large amount of apparent injury. When the effect of EDTA was neutralized with calcium, the repair of EDTA-injured S. anatum cells occurred rapidly. This repair process was also retarded by DNP at pH 6.2 (data not presented). EDTA caused solubilization of the lipopolysaccharide part of the cell wall in gram-negative bacteria, and in low concentration also produced cell injury by a steric or chemical change in the cell surface of gram-negative bacteria (9,31). This injury was reversible and did not require synthesis of protein, RNA, or cell wall mucopeptide, but required energy and was inhibited by DNP (14). The similarities between the findings of Leive (14) in EDTA-injured gram-negative bacteria, including salmonellae, and the findings of the present investigations with freeze-dried S. anatum suggested that the nature of injury from either stress was similar. The damage may be located in the lipopolysaccharide part of the cell wall. A similar suggestion was made by Bretz and Kocka (4) working with frozen cells.
Repair of freeze-drying injury in S. anatum was completely or partially inhibited in the presence of DNP and sodium cyanide. Both DNP and cyanide interfere with ATP synthesis via oxidative phosphorylation in the electron-transport system (3,10). This inhibition of injury repair by DNP was pH-dependent. At pH 6.0 no repair occurred, whereas at pH 6.5 and above partial repair occurred. A similar pH dependency of DNP inhibition of energy-requiring processes in Staphylococcus aureus has been reported (8). This pH dependency may not have been due to the inhibitory effect of undissociated DNP ions (32). Rather, the net ATP synthesis in the presence of DNP may have been reduced by DNP stimulation of adenosine triphosphatase which otherwise remained in a "latent state" (23) or simply by pH dependency of adenosine triphosphatase (17). DNP-stimulated adenosine triphosphatase activity in oxidative phosphorylation also has been reported in bacteria (5,22). The partial inhibition of repair of injury by cyanide also suggested the involvement of energy synthesis through the electron-transport system. In the presence of cyanide, electron transport can be maintained through inorganic ions, e.g., nitrate or sulfate (10). In test solutions containing 0.1% milk solids, such inorganic ions may be present, and consequently some ATP might be synthesized. This could then produce the partial repair. In the presence of potassium nitrate and cyanide, injured freeze-dried S. anatum cells reduced nitrate in 1 or 2 hr after rehydration (data not presented).
Supplementation with ATP from an external source did not counteract DNP inhibition of repair at pH 6.0. This might result from the inability of the highly charged ATP molecules to enter the cells; however, ATP has entered EDTA-injured E. coli cells (14). Repair of injury produced by EDTA in gram-negative bacteria (14) or produced by aerosol dehydration of E. coli (30) has been reported to be energy-dependent. Energy metabolism during repair of injury was also suggested in heat-stressed S. aureus cells (2), heat-stressed S. typhimurium cells (28), and freeze-dried E. coli cells (26).
The injury in freeze-dried S. anatum cells seemed to be located, at least partially, in the lipopolysaccharide part of the cell wall and was reversible. After rehydration, this injury was repaired rapidly, and the repair required energy in the form of ATP from the electron-transport system.
The role of evaluation in SMMEs’ strategic decision-making on new technology adoption
ABSTRACT The complex nature of small business operations has led to the adoption of IT as a tool to enhance the productivity, efficiency and growth of Small, Medium and Micro Enterprises (SMMEs). Despite the increased spend on IT, many SMMEs do not understand the importance of IT investment evaluation, which adversely impacts their technology and decision-making ability to realise benefits. The study explores the role evaluation plays in SMMEs' decision-making to adopt the technology, and their ability to evaluate the technology's potential. Case studies were conducted; the data collected were analysed using thematic analysis, with hermeneutics employed to derive deeper and richer meanings from the findings. SMMEs often base their decision to adopt the technology on speculative and empirical knowledge from personal judgement, communication preferences and individual experiences. The lack of understanding of TE potential may lead to the adoption of inappropriate technology, or to non-adoption of the technology, with adverse effects on business sustainability.
Introduction
SMMEs play a significant role in the economies of countries, stimulating economic growth through increased job creation and innovation (OECD 2019a). Al-Qirim (2007) posits that IT is an important means of sustaining, facilitating and promoting SMMEs' operations. SMMEs need to identify and invest in technologies that assist in increasing the efficiency and development of Business Processes (BP). OECD (2019a) describes IT as a tool that enables SMMEs to steadily develop, enhancing cross-country relationships and transactions in the global world. The impact of the technology on organisational performance is visible in the aspects of profitability, efficiency, market value, productivity, quality and competitive advantages. OECD (2019b) emphasises the importance of SMMEs keeping up with current developments, stating that there is a gap in the adoption of digital technologies by SMMEs. SMMEs are missing out on the benefits of operating in a global market due to the low usage of digitalisation.
SMMEs need to proactively invest in digital and emerging technologies such as big data analytics, drones, artificial intelligence and IoT apps, amongst others (OECD 2019a). The diffusion of these technologies among SMMEs will improve BPs across all industries. Thus, emphasis should be directed towards the knowledge and acquisition of technologies that can be utilised to improve BPs (Govender and Pretorius 2015). The importance of strategic business planning and operations is underscored by the fact that 70-80 per cent of SMMEs in South Africa fail within the first two years (Reynolds, Fourie, and Erasmus 2019). For SMMEs to realise the beneficial impact of the technology, the technology must be BP aligned (Palvalin, Lönnqvist, and Vuolle 2013). Such alignment between business and Technology Evaluation (TE) adoption strategies improves the functionality of the business, resulting in increased profits (Love et al. 2013).
Despite many adoption models of the technology, there is still a slow uptake of the technology by SMMEs (Oliveira and Martins 2011). Mittal et al. (2018) reported that a study of manufacturing SMMEs in West Virginia, USA, found a low drive towards adopting smart technologies. Eze et al. (2018) argue that similar decision-making problems were faced in the UK, largely influenced by the fear and uncertainty of acquiring the technology. This situation arises as a result of a lack of planning and evaluation when considering the technology (Reynolds, Fourie, and Erasmus 2019). SMMEs need to plan and evaluate the technology against their business strategy and processes, and adopt innovation as a strategy to overcome the technology chasm and maintain competitiveness (Dyerson, Harindranath, and Barnes 2009). An and Ahn (2016) posit that SMMEs need to be proactive in identifying players within their value chain and take up the associated risks of the new technology. The non-evaluation of the significance and appropriateness of the technology is a challenge, as many SMMEs acquire the technology without TE, which leads to practices that endanger the business and place it in a precarious situation (Halicka 2017).
Introducing new technology in an organisation involves a broad decision-making process that affects not only individuals but also the stakeholders. The decision-making process should draw on the technology acceptance models that embrace the social, environmental, organisational and governmental factors contributing to the user's perception and acceptance of the new technology (Abulrub, Yin, and Williams 2012). Notwithstanding the factors involved in the choice and adoption approaches implemented, the ability to successfully adopt, integrate and manage the new technology lies largely in the evaluation procedures, which lay the foundation for successful adoption and integration (Love et al. 2005; Cragg, Mills, and Suraweera 2010). Decisions on the technology are crucial because of the high capital outlay and degree of uncertainty (Love et al. 2013). The challenges faced by SMMEs are linked to the problems that emanate from non-TE before adoption (Chan, Chong, and Zhou 2012). As a result of the non-evaluation and non-adoption of the technology, SMMEs forfeit the opportunity to gain a competitive advantage and improve the chances of survival of their business (Nguyen, Newby, and Macaulay 2013).
The lack of resources and knowledge required for an accurate TE that benefits the business remains a challenge (Mittal et al. 2018). Govender and Pretorius (2015) stated that the rationale for technology adoption is no longer based on perceptions such as cost-saving, ease of use or degree of usefulness in an organisation, but on the strategic role the technology plays going forward. Jha and Bose (2016) assert a need for more research in the field of technology adoption in developing economies, to uniquely address the diversity and peculiarity of different geographies in a bid to understand the IT adoption discourse better.
To explore the effect of TE on the decision-making process of SMMEs before adopting the technology, the following questions are asked: (i) What are the evaluation challenges of SMMEs in adopting technology? and (ii) How does TE affect the strategic decision-making of SMMEs on technology adoption? The objectives of the questions are to (i) determine the factors that affect SMME evaluation and choice of technologies and (ii) establish the significance and contribution of TE towards the decisions SMMEs make on the adoption of technologies. A further aim is to address the knowledge gap by proposing a set of practical TE guidelines to improve SMMEs' decision-making on technology adoption.
Review of the literature
TE by SMMEs remains a challenge in the context of the developing world. To explore and gain an understanding of these challenges, a review of the literature was conducted spanning online databases such as Google Scholar, ProQuest, EBSCOhost and Emerald. This section starts with a description and definitions of what is meant by evaluation; different approaches to evaluation, TE models and methods, the evaluation process and technology investment decision-making are then discussed, ending with a summary.
Evaluation approach
TE is defined as the process spanning from having no knowledge of the technology to the situation where the SMME understands and is able to articulate how the technology fits the business plan, explain how the technology can benefit the business and create a competitive advantage, and elucidate how easily the change created by the technology can be aligned to the business processes.
Evaluation is an integral part of adoption (Schumpeter 1947) in the Diffusion of Innovation (DOI) theory. Schumpeter (1947) acknowledged the role of evaluation by articulating the initial steps that need to be taken when considering the adoption of the technology. These steps lead to the awareness of a need to evaluate the functionality of the technology before adopting or rejecting it. Research has led to the development of adoption models which obfuscated the initial part of adoption, with its importance eroded by many of these models evolving 'into themselves' without recourse to the fundamental steps in any adoption process (Jha and Bose 2016). Serafeimidis and Smithson (2000) and Haider (2011) argue that TE is complex and important to organisational processes. As such, organisations should adopt the interpretive approach to the entrepreneurial activity which has more relevance to the current business practices and discard the traditional approach on quantifiable metrics. Symons (1993) argues that an interpretive approach tends to seriously consider the experience and history of the organisation in a realistic context. TE would be advanced by using an interpretive approach to determine suitable means (Berghout and Remenyi 2005). This argument is supported by Haider (2011, 1), who posits that 'information systems, therefore, are not objective entities, such that they could be implemented without considering their interaction with technical, organisational, economic, social, and human factors'. Jha and Bose (2016) argue that IS research is largely dominated by a positivist approach. This approach has been challenged in the areas of social research because of the inability to include the elements of uncertainty, external and internal factors, and other forms of contextual considerations of TE for business (Lu et al. 2015). The concept of economic measurement, in solely determining the appropriateness, impact and value of the technology on the sustainability of the business, has been rejected (Feniser et al. 2017).
The evaluation needs to happen before the adoption theories can be effectively applied by SMMEs. The DOI and Technology, Organisation, and Environment (TOE) frameworks are two prominent technology adoption models that acknowledge the essence of decision-making and the elements of technology diffusion in an organisational context (Oliveira and Martins 2011). Introducing the technology involves a decision-making process influenced by individual users and a range of other factors. This is in alignment with many of the technology acceptance models that embrace the fact that social, environmental, organisational and governmental factors contribute to the user's perception and acceptance of the technology (Chan, Chong, and Zhou 2012). According to Oliveira and Martins (2011), the construct of the TOE framework is based on three contexts, namely: (i) Technological, (ii) Organisational and (iii) Environmental.
Using the TOE framework to analyse the effect of organisational components on decision-making to adopt the new technology indicates that the three factors represent 'both constraints and opportunities for technological innovation' (Dalipi, Idrizi, and Kamberi 2011, 113). The three influential factors describe the way a business identifies the need for the new technology, conducts a search for it, and decides to adopt the new technology. Oliveira and Martins (2011, 112) describe the DOI and TOE frameworks as the only two prominent technology adoption models that are organisation based and posit that 'the TOE framework makes Rogers (2003) innovation diffusion theory better able to explain intra-firm innovation diffusion'. In a bid to create a sustainable decision-making framework for Small Medium Enterprise (SME) manufacturers' adoption of robotics, the economic, environmental and social aspects of the technology were assessed and fully incorporated in the evaluation (Epping and Zhang 2018). The need to factor in mediating elements in the evaluation of the technology is captured by the authors' remark: 'However, sustainability decision-making in manufacturing systems is a complex process, because economic, environmental, and social pillars are linked and their relationships need to be explored in order to make a holistic decision' (Epping and Zhang 2018, 14).
TE models
Since 1989, the Benefits Evaluation of Systems and Technology (BEST) method, the Information Accounting Framework (INFACC), the Investment Expert System Toolkit (InVEST), IT Investment Appraisal (ITIA), and the Rigorous Appraisal and Processing of Investment Data (RAPID) have been introduced, but failed to stand the test of time. The Balanced Scorecard, the Simulation Analysis and the Dynamic Systems Development Methodology (DSDM) are the models most often used: Berghout and Remenyi (2005) identified these three models as having received the most interest over the 11-year period from 1994 to 2005. The concept of technology assessment has grown over the years, with other technology decision-making models being developed with different types of adaptations, variations and combinations.
Recent technology assessment models are based on computational and mathematical tools that are categorised under generic terms such as Multiple Criteria Decision Model (MCDM) and Multiple Attribute Decision Model (MADM) (Mardani et al. 2015). Examples of these models include Interpretive Structural Modelling (ISM), Electre and Quad Serial Interface Module (QSIM). The development of these hybridised and adapted models is largely based on earlier models such as the Analytic Hierarchy Process (AHP), VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), a multiple criteria decision-making model, and others (Mardani et al. 2015). These models are based on the quantitative elements associated with the technology, where objective measurements are utilised to determine the financial implications in terms of time and resources (Mardani et al. 2015). These methods are used as ranking and comparison tools for IT projects but are subject to scepticism from financial managers and supporters of financial models. The failure or inadequacy of the earlier models to provide simple and practical guides for SMMEs in terms of technology decisions gave credence to the argument that it is necessary to evaluate the value and benefits of IT in individual contexts in relation to its observable conditions. Various models, using the traditional approach, have been developed for large organisations, with little or no applicability to SMMEs (Palvalin, Lönnqvist, and Vuolle 2013).
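To make the kind of quantitative weighting these MCDM/MADM tools rely on concrete, the Python sketch below implements a simple additive weighting (SAW) score; the criteria, weights and candidate ratings are hypothetical and are not drawn from any of the cited models or from the study's data.

```python
# Illustrative simple additive weighting (SAW) score; all criteria, weights
# and ratings below are hypothetical.

criteria_weights = {            # weights should sum to 1.0
    "cost": 0.30,
    "compatibility_with_BP": 0.25,
    "ease_of_use": 0.15,
    "scalability": 0.15,
    "vendor_support": 0.15,
}

candidate_ratings = {           # ratings on a 1-5 scale from an evaluation team
    "cloud_accounting_A": {"cost": 4, "compatibility_with_BP": 5, "ease_of_use": 4,
                           "scalability": 3, "vendor_support": 4},
    "cloud_accounting_B": {"cost": 3, "compatibility_with_BP": 4, "ease_of_use": 5,
                           "scalability": 5, "vendor_support": 3},
}

def saw_score(ratings: dict, weights: dict) -> float:
    """Weighted sum of ratings over the criteria defined in `weights`."""
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in candidate_ratings.items():
    print(name, round(saw_score(ratings, criteria_weights), 2))
# cloud_accounting_A 4.1
# cloud_accounting_B 3.85
```

The point of the sketch is only the mechanism: the ranking is entirely driven by the weights and ratings chosen, which is why such scores attract scepticism when they are not grounded in the business context.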
Given SMMEs' limited knowledge and understanding of business strategy, Reynolds, Fourie, and Erasmus (2019) argue that it is difficult to utilise the various models of TE in the context of small enterprises. Arguments are made that intangible benefits, uncertainty and other decision factors can be measured only qualitatively (Palvalin, Lönnqvist, and Vuolle 2013). Serafeimidis and Smithson (2000) mention that more attention has been focused on prescribing how to carry out the evaluation instead of analysing and understanding the technology's role, interactions, effects and organisational impacts. Thus, the acknowledgement of the subjectivity, indeterminism and context-dependency of evaluation distances the entrepreneurial approach from the positivistic paradigm and aligns it much closer with interpretivism (Serafeimidis and Smithson 2000, 94). In a study on SMME maturity models for digitalisation, Wiesner et al. (2018) investigated four existing models and found that no model fully provides the required guidelines to adopt the new digital technology. The authors argue for a new model that contextualises SMMEs and is custom-made to guide decision-making on technology adoption that adds value to BPs.
The evaluation process of the technology
TE starts with no knowledge or the first knowledge of the technology, moves to increased knowledge of the technical features and characteristics, and then to an in-depth evaluative consideration, which results in a logical and predictive conclusion on the suitability of the technology (Cowan and Daim 2011). Surrounding factors, such as environmental, political, cultural and organisational ones, together with others affecting the business needs, should be incorporated (Landt, Damstrup, and Pedersen 2013). The evaluation should be properly investigated and documented to show the advantages and disadvantages of the potential technology. The TE needs to identify the impact of the technology on the business over a set period and range of time (Halicka 2017).
Although the life span and continuing relevance, estimated cost implications, and the expected ROI are important considerations, once knowledge of the new technology has been obtained, the adaptability, applicability, compatibility and capability of the technology affect the decision to potentially accept, adopt and implement the technology (Dyerson, Harindranath, and Barnes 2009). Love et al. (2005) conclude that suitability to BPs, implementation considerations and organisational development are ranked to justify investment decisions. Whatever the factors involved in the choice and adoption approaches implemented, the ability to successfully adopt, integrate and manage the technology lies in the evaluation (Cragg, Mills, and Suraweera 2010). The integration of the technology with the required technical skills must be in place upon implementation. This should be done before any desired and potential impact on the deliverable products, services and ROI can be realised. Landt, Damstrup, and Pedersen (2013) posit that to predict the value of the technology there are four areas of concern, namely Performance, Integration, Penetration and Payback. Furthermore, technologies are put through a comprehensive and rigorous evaluation process where all aspects are incorporated with other applicable factors to determine the most suitable option. The ability to determine which technology has the potential to deliver the desired result is a major barrier (Lundmark 2008). It cannot be generally assumed that the technology will be efficient, or that efficiency will be guaranteed, since an organisation or individual might still reject the technology based on a conservative attitude towards decision-making. As such, many SMMEs are plagued with indecision and bad decision-making due to their inability to evaluate the potential and suitability of the technology before adoption.
SMME technology decision-making and investment
The evaluation process provides a knowledge base to assist in making and defending informed decisions. Without evaluating the potential of the technology, it is difficult for SMMEs to understand the potential when adopting the technology. Abulrub, Yin, and Williams (2012) and Cowan and Daim (2011) argue that TE procedures need to evaluate each technology and SMME according to their unique context or characteristics. However, the lack of TE before adoption and integration often leads to SMMEs not adopting the technology that can hold a potential advantage. Nguyen, Newby, and Macaulay (2013, 2) state that the key to this lack of success appears to be a disconnection between vision and execution: organisations do not do enough research and planning before implementing the new technology, often because management is unclear about how and why their firms are adopting IT in the first place.
Decision-makers need to employ a holistic approach to measure and compare the technology in terms of business needs, benefits, cost implications and risk while considering suitability to BPs, implementation and organisational development to justify investment decisions (Love et al. 2005). TE is a tool that can be leveraged to obtain a competitive advantage in the market (Bloem da Silveira Junior et al. 2018). Ghobakhloo et al. (2011) and Govender and Pretorius (2015) argue that the rationale for technology adoption is no longer based on cost-saving, ease of use, or degree of usefulness in an organisation, but on the strategic role the technology plays and an understanding of its implications for the business going forward. This is attainable when SMMEs understand the value and ramifications of the key decisions, which can be guaranteed only by the process of TE. Nevertheless, the practicalities involved require a change in orientation on how TE is perceived (Imre 2016).
The more detailed the planning and analysis of the new technology, the better the knowledge gained of the potential impact of the technology and its usefulness to the business. Conversely, new technology adopted without planning, and without regard for the factor relationships that exist within the dynamics of evaluating the new technology, jeopardises the potential benefit and the realisation of the benefits accruable (Aleke, Ojiako, and Wainwright 2011; Halicka 2017). Coates and Coates (2003, 113) describe the effect of non-evaluation as follows: 'The results of this are that we are often unable to see big changes, foresee impending negative consequences or anticipate enormous benefits in the future'. Similarly, Cowan and Daim (2011), as well as Cragg, Mills, and Suraweera (2010), declare that the non-adoption of the technology is often based on the lack of planning and evaluation of the potential and constraints relating to the adoption and utilisation of the new technology. It has been established that many SMMEs find it challenging to incorporate the technology into the business, while TE remains a compelling factor for technology adoption. Models, methods and frameworks have been proposed to assist in solving the problems of non-evaluation of the technology; however, the challenge starts before the adoption, as SMMEs are not adequately equipped to plan for a TE process.
Methodology
The research adopted a multiple case study design with a focus on the social construction of reality. This design allows the exploration of a phenomenon to provide a better understanding and knowledge about the cases as they relate to each other (Stake 2006). Yin (2009) states that case studies are a good way of exploring new theories, while also providing a challenge to the existing theories by asking new questions. Thus, the multiple case study design was applied to understand better the perception owners and managers of SMMEs have towards the evaluation and adoption of the new technology in a bounded context. The research methods applied in the study allow the establishment of inherent factors affecting the SMMEs' ability to evaluate the new technology potential and support the call for more geographically dispersed and diverse contextual research on IT adoption (Jha and Bose 2016).
Fifteen SMMEs were non-randomly and purposively selected based on operational size, function and geographical coverage (Table 1). The business sectors were services, manufacturing and financial service providers (FSPs), as defined by the National Small Business Act No. 102 of 1996, South Africa. The selected SMMEs had from 10 to 100 employees with an annual turnover of less than 40 million ZAR. The 15 units of analysis selected were geographically located within a 50-kilometre radius of the Cape Town city centre, Western Cape. The units of observation were decision-makers in the business and technology management sections. The assumption made was that participating SMMEs utilise some form of the technology in their business process. Data were collected using an interview guide with a semi-structured questionnaire in one-on-one interviews (Miller and Glassner 2004). The interviews lasted between 45 and 60 minutes. Interviews were recorded, transcribed and given to the participants (Ps) for verification and validity. The data were analysed using a thematic coding system: reading through all data, summarising and taking note of all the similarities in the data, and grouping key concepts into categories and themes (Quinlan 2011). The principle of hermeneutics was applied to interpret and derive meanings from the data obtained.
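The thematic analysis itself was a manual, interpretive exercise. Purely to illustrate the bookkeeping step of grouping coded excerpts under the two themes reported below, the Python sketch that follows tallies hypothetical (participant, code) pairs; the codes, theme assignments and counts are illustrative assumptions, not the study's data.

```python
# Illustrative tally of manually coded interview excerpts into themes.
# All participant IDs, codes and theme assignments are hypothetical.

from collections import Counter

theme_of_code = {
    "lack_of_knowledge": "Evaluation",
    "risk_and_uncertainty": "Evaluation",
    "informal_advice": "Evaluation",
    "gut_feeling": "Decision-making",
    "no_formal_process": "Decision-making",
}

coded_excerpts = [              # (participant, code) pairs produced during coding
    ("P2", "risk_and_uncertainty"),
    ("P3", "risk_and_uncertainty"),
    ("P5", "lack_of_knowledge"),
    ("P6", "no_formal_process"),
    ("P8", "informal_advice"),
    ("P7", "gut_feeling"),
]

theme_counts = Counter(theme_of_code[code] for _, code in coded_excerpts)
print(theme_counts)  # Counter({'Evaluation': 4, 'Decision-making': 2})
```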
Research findings
Twelve Ps acknowledge the importance of TE and the impact thereof on technology adoption. However, it was noted that SMMEs do not have an existing structure or steps to evaluate the new technology. The lack of knowledge required to fully understand the functionality of the technology is evident. Ps accept that there is a need to research continuously and acquire knowledge on modern technologies.
SMMEs acknowledge the importance of informed decisions for the survival of the business. Some Ps (8) state that TE gives a feeling of satisfaction when decisions are based on the relevant facts. Ps are of the opinion that SMMEs risk failing due to impetuous and unnecessary investment made regardless of TE. SMMEs act on gut feelings and are influenced by current trends without attention to the functionality and appropriateness of the technology. As a result, SMMEs are left with a feeling of inadequacy when adopting the wrong technology.
SMMEs state that the value the technology offers influences the decision to adopt the technology. When describing their experiences of using the technology, SMMEs view the cost of the technology as relative to its potential benefits. Technology gives the ability to deliver quality goods and services over that of competitors, attracting more customers to the business. It remains imperative to recognise that TE provides a better understanding of the technology, contributing to informed decision-making.
The findings were extrapolated from the key responses of the Ps, synthesised, and corresponding responses were categorised accordingly to arrive at a consensus on the major issues (Table 2). The findings were categorised into two themes: (i) Evaluation and (ii) Decision-making.
Factors affecting SMMEs in terms of TE
SMMEs are faced with challenges, such as the lack of information and ability, to obtain adequate knowledge of the new technology. They do not understand the implications of adopting the inappropriate technology or the non-adoption of the potentially beneficial technology.
Adoption decisions are based on the speculative and perceived knowledge stemming from personal judgement and individual experiences. Steyn and Leonard (2012) reported that many SMMEs seek the assistance of friends, relatives or other business owners in the initial process of adopting the technology. P8 stated 'I reach out to [the] network of people I know who know about it. I look at suitability, ask people's opinion, and form an impression and base my decision on that.' Given the lack of ICT knowledge, SMMEs do not necessarily adopt the suitable technology (Palvalin, Lönnqvist, and Vuolle 2013). According to Buonanno et al. (2005), the decision-making of technology adoption is affected by spontaneous actions, social activities and trends rather than established BP objectives and good TE. The knowledge required to understand the functionality of the technology can be accessed only by asking the right questions about the business requirements, adaptability, capability, compatibility and applicability of the technology.
SMMEs take a conservative stance on adoption because of their perception of the untried technology and the risk that is associated with it (Mittal et al. 2018). The nature of uncertainty surrounding the ROI on the technology is a concern as they are not able to discern the possibilities and associated risks involved. Nguyen, Newby, and Macaulay (2013) point to risk and uncertainty, resulting in a low technology adoption rate. P3 stated that the difficulty lies in 'understanding the risk associated with technology, the evolving nature of technology and the numerous varieties of technology and solutions'. Kilic, Zaim, and Delen (2015) argued that the challenges of uncertainty, economic conditions and the constant evolving of the technology have an effect on the ability of SMMEs to process and keep up with the advancements. P2 said: 'What I do understand is that technology changes all the time; there is little time to get accustomed to it'.
The evolving nature of the technology causes SMMEs to develop guarded attitudes towards TE and adoption. P5 demonstrated this saying: 'One needs to tread carefully, because there is danger being on the edge. It is sometimes good to wait for the teething problems with technology or releases to be sorted out.' The comments of P7 encapsulate the unwillingness to take steps towards adopting technology because of earlier experience and the inability to properly evaluate the technology to make an informed decision: 'We often sit back and tread cautiously to look how things play out'. SMMEs are concerned about their understanding of the dynamics and operational design of the new technology and its level of applicability to the business process.
Table 2. Key findings by research question and theme.

Research question 1: What are the evaluation challenges of SMMEs in adopting new technology?

Evaluation:
- The nature of uncertainty surrounding the return on investment made on the new technology is a major source of concern for business managers, because they are not able to discern the possibility and weight of the risks involved.
- SMMEs take a conservative stance on new technology adoption because of their perception of untried technology and the weight of the risk that might be associated with it.
- The lack of knowledge of new TE limits the ability of SMMEs to evaluate and adopt new technology to support the business.
- SMMEs often obtain information on new technology potential by consulting informally with friends and family rather than with professionals and experts in the industry.
- SMMEs agree on the need for a culture of research and knowledge acquisition of newly available technologies by asking the right questions about the business requirements, technology capability and suitability.
- SMMEs usually act on gut feeling and are easily influenced by current trends in the environment without paying attention to the functionality and appropriateness of the technology applicable to their business.
- SMMEs have a need for an evaluation assessment tool to help make informed decisions on appropriate new technology for the business process.

Research question 2: How does the evaluation of new technology affect the strategic decision-making of SMMEs to adopt new technology?

Decision-making:
- SMMEs understand that evaluation of the technology gives a better understanding of the suitability of new technology, contributing towards informed decision-making.
- Evaluation of the new technology helps SMMEs to make informed decisions on facts and verifiable information that place the business in a good position of sustainability.
- The evaluation of the new technology gives a gratifying feeling of enjoying the technology based on decisions made from relevant facts obtained on the technology.
- SMMEs do not have a formalised approach or process of identifying business needs that ensures their understanding of how the new technology can meet business objectives and deliver on organisational goals.
- Small businesses end up failing due to impulsive and excessive buying of the technology regardless of the evaluation of the technology for the business process.
- SMMEs are left with a feeling of inadequacy when they adopt the wrong technology and end up losing money, often not knowing the capacity of the technology they acquired to solve their problems.
This approach may lead to the inability to leverage the technology or consolidate on its potential (Feniser et al. 2017). SMMEs lag behind in adopting the technology early, keep the technology for too long, or fail to adopt ones with potential. When the technology is adopted late as a result of stubbornly holding on to their present status, it limits the potential and ability to exert influence on market share. The ability to discern what is appropriate in terms of operations and required investment consideration is only made possible by the process of evaluation. P4 admitted ' … SMMEs don't realise the urgency, risks and benefits of having technology in the first place, they later spend a fortune to acquire one which might be inadequate or over board'. This attitude aligns with Brown and Russell (2007), who found that organisations in South Africa are in the habit of holding back on the technology until they can observe their competitors' success.
Evidence shows that SMMEs do not have the existing structure or formalised directions to evaluate the technology which has been, or is yet to be, incorporated into their businesses. They, therefore, suffered various losses, especially in the early stages of their business. Ps expressed the need for an evaluation tool assisting them in making better decisions on the potential technology.
The effect of TE on SMMEs' decision-making
SMMEs jeopardise their survival making unnecessary investments in the technology. Rantapuska and Ihanainen (2008) argue that when SMMEs decide to adopt the technology, they often base their decisions on their perception, intuition, etc., without considering operational needs. P6 recounted the experience of the non-evaluation of the technology, resulting in purchasing the wrong technology: ' … we didn't have the experience or knowledge about the technology. We failed to measure the relevance and significance of the technology at that time'. I8 stated 'we work by gut feel; we don't do any pre-evaluation of any sorts'.
P7 demonstrates the impetuousness of technology investments, saying 'sometimes we change technology because it's cool, not necessarily more effective or profitable'. This attitude leads to investment losses and the misalignment of resources and objectives. P33 asserted 'people often don't make the right choices because they don't evaluate the right choice, the business ends up failing due to excessive buying and disregard for evaluation'. Serafeimidis and Smithson (2000) argue that the unsuitable technology brings problems of technology mis-match or mis-fit to BPs. Such misalignment presents considerable risk to the business in terms of operations, and the costly nature of the problem impacts the business negatively (Halicka 2017).
The lack of evaluation poses a problem since decisions taken consequently are uninformed, and based on little or no information. P3 said, 'I don't think SMMEs evaluate properly before adoption'. The lack of evaluation of the significance and appropriateness of the technology is shown by P4, stating 'technology feels inadequate because of lack of prior evaluation of its capabilities'. P12 relates 'the inadequacy of the technology costs the company valuable money'. Palvalin, Lönnqvist, and Vuolle (2013) stress that the failure to do TE and the lack of understanding the implications of adopting the technology may lead to the adoption of inappropriate or non-adoption of the potential technology.
SMMEs stated that TE could give them a competitive advantage when decisions are made based on relevant facts, which enables better efficiency in the running of their business. Some of the key points made are 'If evaluation is not done, it cost companies to lose business' (P7). P9 stated 'If they don't evaluate, they wouldn't optimize their BPs'.
The implication of evaluation on the decision-making of SMMEs
The lack of strategic management skills by SMMEs is made evident by Xesha, Iwu, and Slabbert (2014), who state that 50 per cent of SMMEs' failures in South Africa are a result of poor decision-making and management capacity. SMMEs need to understand the level of maturity of the technology and the value of the potential benefit before a decision is made (Kilic, Zaim, and Delen 2015). The advantage of understanding evaluation is the ability to make informed decisions, limiting the risk of adopting the unsuitable technology due to the delays caused by uncertainty.
While narrating their experience of technology adoption, SMMEs have shown a lack of knowledge of the potential benefits of the technology. The Ps agree that operational technology problems are a consequence of not conducting TE before adopting it. The SMMEs admitted acting on impetuous reasoning rather than verified information. This position supports Rantapuska and Ihanainen (2008), who state that SMMEs often base their decisions on their own perception, intuition, trends, attitudes and experience. Aleke, Ojiako, and Wainwright (2011) report that the technology is adopted disregarding the factors and relationships that exist within the dynamics of TE. The disregard of TE before adoption often leads to BP failures as the adopted technology does not support the processes in place, creating mistrust and the abandonment of the technology. Such situations are evident in the responses of the Ps, where many losses have been made because of the unsuitability of the technology to their BPs.
Summary of contribution
Most findings from Table 2 resonate with the extant literature on the barriers of adoption and challenges of the decision-making on a new technology by SMMEs. The major findings reveal that technology decision-making is often a daunting task for SMMEs; as such, Steyn and Leonard (2012) assert that decision-making by SMMEs without proper guidance exposes the business to failure in the adoption process and jeopardises the survival and growth of the business. Therefore, SMMEs' unanimous expression of the need for an evaluation tool to guide TE was one that stood out. SMMEs expressed a feeling of profound gratification when the formalised TE processes are used to adopt the new technology that positively impacts their BP, testifying to the importance of TE. These findings emerged as new expositions in the context of understanding SMMEs' TE in South Africa.
Recommendations
Businesses need to plan to incorporate the evaluation and adoption of the technology strategically into the objectives and goals of the organisation. SMMEs must plan strategically for TE as it informs what is currently in use and what is available by identifying the features on offer and what is needed to reach its desired target. It is important to understand the level of technical skills required to implement, enhance and maintain the technology.
A further recommendation is to consider the availability of resources to support the infrastructure needed to run the technology. Ease of training to operate the systems needs to be evaluated. Finally, while evaluating the technology a change management strategy needs to be in place. Good practice and a TE culture among SMMEs will increase the ability to adopt the technology for the benefit of the business.
SMMEs need to establish a functional and standard evaluation practice in their organisations. These practices ensure that the right questions are asked, comparing the features to determine the level of suitability to the business. Requirements analysis needs to be done based on the problem to be solved by the technology.
TE should encompass the determination and establishment of key elements such as effectiveness, cost and associated risk, among others, concerning associated factors. SMMEs need to establish evidence of the appropriateness of the technology and its effective utilisation as an advantage over existing ones with lesser cost implications.
To make an informed decision, SMMEs need to use an appropriate evaluation guide or a well-defined process that conforms with the nature of their business. Evaluation procedures should be carried out in sequential phases to reduce the risk inherent in the adoption of the technology.

TE guidelines

The technology being evaluated needs to be operationally adequate in terms of adaptability, compatibility, scalability, interoperability and reliability to meet the demands of the business, while also considering other inherent factors. The following practical guidelines are recommended and are categorised into (i) organisation, (ii) external agents, (iii) economic value and (iv) technology, which adds an expansive dimension to the original TOE constructs as it relates to TE and adoption processes (Table 3). The TE guidelines proposed for SMMEs in Table 3 draw from the underlying factors and challenges expressed by SMMEs as those that inhibit their capacity for TE, in order to adopt suitable technology to improve their business process. Findings from Table 2 reveal an unstructured evaluation process adopted by SMMEs, due to their lack of understanding of a formalised process, the result of which is a pervasive adverse effect of non-evaluation and wrong technology adoption. Therefore a practical TE guide is recommended to assist SMMEs in making better decisions on the potential technology for the BP. The guideline items in Table 3 include:

- Subscribe to industry information bulletins, groups and forums on industry-based advancement and development initiatives.
- Establish the potential benefits in terms of the value added to the quality of business services and product delivery.
- Determine the standard capacity of the technology to manage the required workload and accommodate an increased production volume while performing at a standard level.
- Pilot the technology for select customers, as part of business promotion for more efficient service and product delivery, to air their views.
- Establish industry demand and the type of the technology in use by competitors and trading partners.
- Determine the difference in operational effectiveness, productivity and efficiency relative to the existing business process.
- Establish the technical skills and knowledge required to properly operate the technology to deliver the optimum output.
- Orientate employees towards familiarising themselves with the technology, to prevent issues such as anxiety, low self-esteem, insecurity and indifference to the use of the technology.
- Identify government business support programmes, technology initiatives and grants on technology acquisition.
- Determine the ability of the technology to be leveraged in the marketplace for a competitive advantage.
- Determine the availability of the technological infrastructure needed to support the technology operation.
- Ensure that users of the technology are incorporated as part of the evaluation and adoption process to promote a sense of inclusivity and obtain employee buy-in into the technology.
- Explore available options recommended by industry associations and unions.
- Identify the potential areas where there is vulnerability or exposure to risk, and the potential impact.
- Determine the scalability of the technology, i.e. the ability to handle future estimated volume and growth.
- Where applicable, first test the technology for a period of time in the business environment to determine the technology fit and its stability for the business.
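Purely as an illustration of how the four guideline categories might be operationalised, the short Python sketch below captures them as a pre-adoption checklist with a crude readiness score. The individual items are paraphrased from the guidelines above, and the scoring rule is an assumption of this sketch rather than part of the proposed framework.

```python
# Illustrative pre-adoption checklist grouped by the four guideline categories.
# Items are paraphrased; the readiness score is a simple fraction of 'yes' answers.

te_checklist = {
    "organisation": [
        "Business requirement and problem to be solved are documented",
        "Technical skills to implement and maintain the technology are available",
        "Users are included in the evaluation to secure buy-in",
    ],
    "external_agents": [
        "Industry bulletins, forums and competitor usage have been reviewed",
        "Government support programmes and grants have been identified",
    ],
    "economic_value": [
        "Expected benefits, cost implications and ROI horizon are estimated",
        "Risk exposure and its potential impact are documented",
    ],
    "technology": [
        "Compatibility, scalability and interoperability have been assessed",
        "A time-boxed pilot in the business environment is planned",
    ],
}

def readiness(answers: dict) -> float:
    """Fraction of checklist items answered 'yes' across all categories."""
    items = [item for category in te_checklist.values() for item in category]
    return sum(1 for item in items if answers.get(item, False)) / len(items)

# Hypothetical usage: only the technology items have been completed so far.
example_answers = {item: True for item in te_checklist["technology"]}
print(f"{readiness(example_answers):.0%}")  # 22% of the 9 items in this toy example
```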
Conclusion
The evaluation of the technology should encompass the establishment of key elements and factors such as effectiveness, cost and quality, among others. The suitable choice of the technology involves important decision-making processes in an organisation, which allow it to derive optimum value from its BPs. The implementation step is dependent on the choice made, but it is essential before the technology becomes operational. Possible steps to consider include assembling, configuring to match the BP, testing of functionality, training, and conversion of data files to match the format required by the system. Govender and Pretorius (2015, 11) sum up the importance of an appropriate IT evaluation process, saying: 'ICT adoption clearly provides a means for organisations to realise their strategic objectives, but it is not without risks and challenges if adopted inappropriately'. SMMEs need to be actively aware of their business environment and measure up in terms of technology adoption and active usage of the technology to promote development and enhance their sustainability and survival in the market. Further studies should be directed towards SMMEs within other industries to build an expansive adoption profile of SMMEs, so as to create a holistic evaluation assessment tool that assists in making informed decisions on the appropriateness and efficacy of current and future technology.
Notes on contributors
Ayodeji Olanrewaju Afolayan is currently a doctoral candidate and the B.Tech. (IT) Research Project Coordinator in the Information Technology department of the Cape Peninsula University of Technology. He obtained his Master's degree (research based) cum laude at Cape Peninsula University of Technology. He is a budding researcher, postgraduate supervisor and lecturer. His areas of interest include technology adoption and assessment, business technology management, knowledge management and diffusion, enterprise architecture and management, information security, access and privacy, and contemporary applications of technology in society.
Dr Andre Charles de la Harpe is an adjunct Professor in the Information Technology department of Cape Peninsula University of Technology. He is an experienced academic with extensive knowledge of IT management and research; he has supervised numerous master's and doctoral students and has a good track record of publications. His areas of interest include knowledge management, enterprise architecture, business intelligence, big data and emotional intelligence.
Disclosure statement
No potential conflict of interest was reported by the authors.
Consumer Awareness and Comfort with Resident-run Cosmetic Clinics: A Crowdsourcing Study
Background: Resident cosmetic clinics (RCCs) are the training modality of choice among both residents and faculty and are a mainstay at most residency programs.1–4 Despite this, knowledge of RCCs among plastic surgery consumers remains untested. We hypothesize that the public would be aware of and receptive to RCCs. Methods: Participants with prior cosmetic procedures or interest in future cosmetic procedures were recruited using Amazon Mechanical Turk and asked to complete a survey in September 2020. First, prior awareness of RCCs was assessed. After a brief description of RCCs, perceptions of safety and preferences for care were assessed. Results: After screening for quality, 815 responses were included. Forty-five percent of consumers were aware of RCCs. Seventy-six percent of consumers believed that RCCs were just as safe as attending clinics and 65% were comfortable receiving care from fourth-year residents or higher. Belief in RCC safety was associated with 4.8 times higher odds of feeling comfortable receiving care at an RCC [95% confidence interval (3.3–7.1), P < 0.001]. When given a hypothetical choice between residents and attendings in two scenarios, 46% of consumers chose residents for abdominoplasty and 60% chose residents for Botox injections. Belief in RCC safety was associated with choosing a resident or being indifferent in both scenarios. Conclusions: Consumer preference regarding RCCs has largely been untested. This study shows that belief in RCC safety influences consumers’ perceived comfort with receiving care at an RCC. This knowledge can help guide RCC practice and maximize learning opportunities for surgeons-in-training.
INTRODUCTION
Cosmetic surgery is a core discipline of plastic surgery and the demand for cosmetic procedures continues to grow. Resident training in cosmetic surgery has historically been a challenge. 3 Considering most consumers must pay out of pocket, patients have high expectations and little tolerance for complications and revisions. 5 Patients seeking cosmetic surgery consider surgeon reputation, experience, and board certification status, which cannot be achieved until completion of residency. 6,7 For these reasons, graduating residents often feel less prepared to perform cosmetic surgery. [2][3][4]8,9 In 2014, Kraft et al found that only 36% of residents felt comfortable integrating aesthetic surgery into their practice after graduation. 1 Later that year, the Accreditation Council for Graduate Medical Education increased the minimum number of required aesthetic cases from 50 to 150 to address resident preparedness. This new requirement prompted programs to enhance cosmetic surgery training using new methods.
Among these modalities, resident cosmetic clinics (RCCs) emerged as the frontrunner and were voted the most useful source of aesthetic surgery training by both residents and program directors. 1,3,10 As RCCs grew in prevalence, so did resident-reported comfort with aesthetic surgery, from 36% in 2014 to 59% in 2017. 1,3 RCCs have been operating for decades and continue to increase in number. Today, an estimated 60%-70% of programs have a dedicated RCC. 3,11 The structure varies by institution, but most RCCs are held one day a week year-round and are operated by senior residents [postgraduate year (PGY) 4-6]. Residents conduct the initial patient consult, assemble a plan, and then discuss this plan with a supervising faculty member who either oversees or directly assists residents during the procedure. Postoperatively, patients are scheduled to follow up with their resident plastic surgeon, allowing trainees to monitor patient satisfaction and practice longitudinal care.
RCCs offer unique benefits to both patients and trainees. For residents, they enhance cosmetic surgery training with increased autonomy in patient care, which is associated with a higher degree of resident confidence in performing cosmetic procedures. 9,10,12,13 For patients, they provide cosmetic procedures at discounted rates, oftentimes at 50% of the standard surgeon's fee. 11,14,15 They also provide high patient satisfaction and have consistently proved to be safe, with complication rates comparable to the national standard. 9,11,[14][15][16][17] Because RCCs have proved invaluable for resident aesthetic education, continuing to grow this learning modality is important. To do so, it is paramount to consider the point-of-view of the consumer. Although many studies have analyzed resident and attending views on RCCs, none have assessed the opinion of consumers. [1][2][3][4] To our knowledge, this is the first study that explores consumer perceptions of RCCs.
As aesthetic surgery is a "buyer beware" market, wherein many nonplastic surgeons, and even nonsurgeons, continue to market invasive and noninvasive aesthetic procedures, consumer attitudes toward RCCs are important to understand. 18 We hypothesize that plastic surgery consumers are largely unaware of RCCs but receptive toward receiving care at them due to their affordability.
METHODS
The primary aim of this study was to assess a priori knowledge of RCCs in a cohort of plastic surgery consumers. After providing a brief description of RCCs, we also assessed consumers' comfort with receiving cosmetic procedures at RCCs and their beliefs about the safety of RCCs when compared with attending clinics. Secondary aims included identifying the minimum percent discount and the minimum PGY resident provider that consumers deemed acceptable.
This study was approved by the Wake Forest School of Medicine Institutional Review Board (IRB00067931). Potential participants were recruited using mTurk, an online crowdsourcing platform that provides quick, efficient, and reliable workers who complete tasks such as surveys for a nominal fee. Using mTurk, many investigators have gained public insight on topics pertinent to the field of plastic and reconstructive surgery. [19][20][21][22][23][24][25] Amazon Mechanical Turk workers who lived in the United States, were 18 years or older, and had an approval rating of 95% or higher were invited to complete a 30-question survey. Participants were screened by whether they had cosmetic surgery in the past or were interested in getting cosmetic surgery in the future. Additionally, participants were asked two attention-check questions about the current month and year. Responses were excluded if participants incorrectly answered attention-check questions, took the survey more than once, or if the survey was incomplete. Following completion, respondents were compensated $0.15.
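As a rough illustration of the quality-screening logic described above, the sketch below filters a hypothetical response table with pandas; the column names (worker_id, attn_month, attn_year, complete, submitted_at) are assumptions for the sketch, not the study's actual field names.

import pandas as pd

def screen_responses(df: pd.DataFrame, survey_month: int, survey_year: int) -> pd.DataFrame:
    """Apply the exclusion rules described in the Methods to a raw response table."""
    out = df.copy()
    # Drop incomplete surveys.
    out = out[out["complete"]]
    # Drop responses that failed either attention-check question.
    out = out[(out["attn_month"] == survey_month) & (out["attn_year"] == survey_year)]
    # Drop repeat takers, keeping only each worker's first submission.
    out = out.sort_values("submitted_at").drop_duplicates("worker_id", keep="first")
    return out

# Example for a September 2020 fielding:
# screened = screen_responses(raw_responses, survey_month=9, survey_year=2020)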
Data Collection
Demographic information was obtained, and to assess consumer knowledge and comfort with RCCs, respondents were first asked whether they knew the difference between a resident and attending physician, whether they had ever heard of RCCs before, and the minimum PGY trainee from whom they were comfortable receiving cosmetic care.
Next, we provided a brief description of the pathway to becoming a plastic surgeon, highlighting the difference between a resident and attending physician, and RCCs ( Fig. 1). After this information was acknowledged by the respondent, we asked about perceptions of clinic safety; percent discount desired when compared with traditional attending clinics; and level of comfort with five categories of cosmetic procedures: breast, body, face and neck, fat reduction, and noninvasive. These categories aligned with the American Society of Plastic Surgeons' Cosmetic Procedures website, and a link to this website was provided to respondents as a reference. 26 Overall comfort with cosmetic procedures at RCCs was determined by averaging each respondent's answer to these five categories.
To further evaluate preferences, consumers were asked to choose between receiving cosmetic procedures from residents versus attending physicians in two scenarios, abdominoplasty and Botox injections, which were made realistic by providing cost and wait times consistent with the authors' institution. All questions were written in Basic English and used laymen terms for cosmetic procedures, as listed on the American Society of Plastic Surgeons Cosmetic website. 26
Statistical Analysis
Responses were compared by using Pearson's chisquare and Fisher's exact tests for categorical variables and Mann-Whitney and Kruskal-Wallis tests for continuous variables. Multinomial logistic regression models were then constructed to determine the key predictors of prior knowledge of RCCs, beliefs about RCC safety, comfort with RCCs, and provider preference. All models were adjusted for age, gender, race, education, income, marital status, region of residence, past cosmetic procedures, whether respondent has biological children, and whether respondent works in healthcare. Belief about safety of RCCs was also added as a covariate to models where appropriate. Analyses were performed using R Statistical Software (version 4.0.2; R Foundation for Statistical Computing, Vienna, Austria), and a P-value less than 0.05 was considered significant.
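To make the modelling step concrete, here is a minimal sketch of one adjusted model in Python with statsmodels (the paper itself used R, and fitted multinomial models); the data file and column names are hypothetical, and the outcome is dichotomised as "comfortable or neutral with RCCs" purely for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per respondent with the covariates
# listed in the Methods and a binary indicator for comfort with RCCs.
df = pd.read_csv("responses_clean.csv")

# Logistic regression on the adjusted covariates; categorical predictors are wrapped in C(...).
model = smf.logit(
    "comfort_rcc ~ age + C(gender) + C(race) + C(education) + C(income)"
    " + C(marital_status) + C(region) + C(prior_cosmetic) + C(has_children)"
    " + C(works_in_healthcare) + C(believes_rcc_safe)",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, the quantities reported in the Results tables.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)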
RESULTS
After screening for quality, 815 responses were included. On average, consumers were 37.5 years old, predominantly women, White, and graduates of a 4-year or 2-year degree program. Consumers were roughly equally distributed among the five geographic regions of the United States, and the majority earned between $25,000 and $49,999. Sixty percent of consumers had biological children, 37% worked in healthcare, and 65% were married or had a partner (Table 1).
Experience and Interest in Cosmetic Surgery
As shown in Table 2, 409 (50%) consumers had prior cosmetic surgery. The most common prior procedures involved the face and neck (51%). The remaining 406 (50%) consumers were interested in future cosmetic procedures, with the most common being noninvasive (58%) procedures.
Public Knowledge of RCCs and Their Safety
Overall, 703 (86%) of the total consumers knew the difference between a resident and an attending. Consumers with children (89% versus 83%, P = 0.021) and those who worked in healthcare (90% versus 84%, P = 0.012) were more likely to know the difference. No other demographic characteristics were predictive.
When asked if they had ever heard of RCCs before, 365 (45%) consumers answered yes. After adjusting for all covariates, knowledge of RCCs was found to be higher in consumers who were men [odds ratio (OR), 2.-; Table 3]. Additionally, those who had prior cosmetic procedures were more likely to be aware of RCCs (Table 3).
Fig. 1 caption: Once the respondent acknowledged reading through the information, questions regarding preferences for RCCs were asked. Graphic credit: iGrad, "The Road Map to Becoming a Doctor," https://www.igrad.com/infographics/a-holistic-approach-to-supporting-a-childseducation, accessed 14 October 2020.
Belief that RCCs are just as safe as attending clinics (held by 76% of consumers overall) was also shared by consumers who had prior cosmetic procedures [OR 1.9 (1.3-2.9), P < 0.001]; however, those who were in the highest income bracket (>$100,000) were less likely to believe that RCCs are safe [OR 0.3 (0.1-0.8), P = 0.019].
Comfort with RCCs
The overall comfort with receiving cosmetic surgery at RCCs was mixed: 215 (26.4%) consumers were comfortable, 318 (39%) were neutral, and 282 (34.6%) were uncomfortable. Using multivariate regression, predictors were identified for those who were comfortable or neutral with receiving cosmetic procedures at RCCs (Table 3). Consumers who were men [OR 1.9 (1.3-2.7), P < 0.001] and those who worked in healthcare [OR 1.6 (1.1-2.4), P = 0.020] were more likely to feel comfortable or neutral. Unsurprisingly, consumers who believed RCCs are safe were more likely to feel comfortable or neutral [OR 4.8 (3.3-7.1), P < 0.001]. When asked about the lowest level of training they would receive cosmetic surgery from, 28% of consumers said PGY-3 or lower and 65% said PGY-4 or higher.
Consumers were most comfortable receiving noninvasive and fat reduction procedures at RCCs and least comfortable getting face, body, and breast procedures at RCCs (P < 0.001, Fig. 2).
Cost Preference
When asked about the minimum percent discount desired at an RCC, the mean was 54.7% ± 20.6% off an attending clinic's price.
Hypothetical Scenarios
When given a choice between a resident with lower cost and wait time or an attending with higher cost and wait time in two scenarios, consumers predominantly chose residents (Figs. 3, 4). Consumers were more likely to choose residents for noninvasive Botox injections versus abdominoplasty (60% versus 46%, P < 0.001).
For abdominoplasty, 46% of consumers chose residents, 37% chose attendings, and 17% were indifferent (Fig. 3). After accounting for all other variables, having biological children or belief that RCCs are safe were predictive of choosing a resident or being indifferent. However, female gender was predictive of choosing an attending. For Botox injections, 60% of consumers chose residents, 26% chose attendings, and 14% were indifferent (Fig. 4). Interestingly, in this scenario, female gender was predictive of choosing residents or being indifferent. Being unmarried and belief that RCCs are safe were also predictive of choosing residents or being indifferent. Having a history of past cosmetic procedures was predictive of choosing an attending.
DISCUSSION
Prior studies show that consumers care most about surgeon reputation, experience, and board certification status, which cannot be achieved until completion of residency. 6,7 Our results conflict with this finding: while most consumers knew the difference between a resident and attending, nearly two-thirds were still comfortable receiving care from senior residents. Interestingly, this did not translate to overall comfort with RCCs, despite them being run primarily by PGY-6s, as consumers were mostly neutral (39%) and only a minority were comfortable (26%) with receiving care at RCCs.
Despite this low self-reported "comfort" with RCCs, in two hypothetical scenarios, consumers predominantly chose residents over attendings. This demonstrates that lower cost and wait time may be just as, or even more, important to patients than the provider's level of training. Furthermore, consumers were more likely to choose residents for noninvasive procedures.
Most RCCs offer a variety of invasive and noninvasive procedures. Walker et al showed that 81% of procedures performed at an RCC over a 13-year period were major procedures, with the most common being abdominoplasties, liposuction, and breast augmentation. Less than 20% of procedures were minor. 16 However, minor cosmetic procedures dominate the total case load in the United States: of the 18.1 million cosmetic procedures reported in 2019, 90% were minimally invasive. This was a 2% increase from 2018, and a 237% increase from 2000. 27 With minor procedures growing in popularity, it is not surprising that our results show consumers are most comfortable receiving minor procedures at RCCs.
Multiple studies have demonstrated RCC safety, showing complication and revision rates comparable to those of attendings. [14][15][16][17] After describing resident clinics and the training process to become a chief resident to our consumers, the majority felt that RCCs are just as safe as attending clinics. Importantly, those who believe RCCs are safe were more likely to feel comfortable or neutral receiving care from RCCs. Better advertising data on the safety of RCCs may help promote consumer confidence and interest in resident clinics. A recent study showed that fewer than 11% of programs have a website for their RCC, and of those, none share before/after photographs, a list of procedures, or prices. 28 Because surgeon reputation is important to patients, lack of this information may stymie consumer confidence and interest in RCCs. 6,7
Fig. 3. Hypothetical scenario using abdominoplasty. Results of a hypothetical scenario involving an abdominoplasty. Demographic and clinical variables entered into multivariate regression were: age, gender, race, education, income, marital status, region of residence, past cosmetic procedures, whether respondent has biological children, whether respondent works in healthcare, and whether respondent believes RCCs are as safe as attendings' clinics.
Most RCCs provide some form of financial incentive to attract consumers. The most frequently reported is a 50% discount from the standard surgeon's fee (± cost of facility, anesthesia, and supplies). 11 Our study shows that on average, consumers would want a 55% discount to receive cosmetic surgery at an RCC, which is largely consistent with many clinics' existing billing models.
Limitations
Although mTurk is a powerful crowdsourcing tool with results comparable to traditional surveys, this study is not without limitations. 29,30 First, this study is inherently biased in that the survey was only available to those with internet access and mTurk accounts. Additionally, reports show that mTurk workers are often younger and more educated. 31 We are also unable to determine how many workers viewed the survey and chose not to participate and whether there were significant demographic differences between those who did and did not participate.
Furthermore, our study did not comment on consumer preferences on other aspects of RCCs, such as clinic organization, level of attending involvement, and payment structure. Further studies are needed to elucidate consumer preferences on these important topics.
CONCLUSIONS
Nearly a third of residency programs do not have an RCC; thus, an understanding of consumer opinion can help those programs design an RCC that is palatable to consumers. 3,11 For programs with existing RCCs, understanding of consumer opinion can reveal mechanisms for increasing patient volume and improving the delivery of care.
Consumers who believe RCCs are safe are more comfortable with receiving cosmetic procedures at an RCC. For programs who are considering opening RCCs or expanding their influence, emphasizing RCC safety is a must as it shapes consumer behavior. Furthermore, knowing that nearly two-thirds of consumers are comfortable receiving care from senior residents, but less than a third are comfortable with junior residents, can help in structuring RCCs. Lastly, that the price point of 55% off standard price is acceptable to consumers can help RCCs decide what prices to offer. The findings of this study can be used to design and improve RCCs to better prepare the next generation of plastic surgeons.
Fig. 4. Hypothetical scenario using noninvasive Botox injections. Results of a hypothetical scenario involving a Botox injection. Demographic and clinical variables entered into multivariate regression were: age, gender, race, education, income, marital status, region of residence, past cosmetic procedures, whether respondent has biological children, whether respondent works in healthcare, and whether respondent believes RCCs are as safe as attendings' clinics.
A METHOD FOR THE STUDY OF WHISKERED QUASI-PERIODIC AND ALMOST-PERIODIC SOLUTIONS IN FINITE AND INFINITE DIMENSIONAL HAMILTONIAN SYSTEMS
We describe a method to study the existence of whiskered quasi-periodic solutions in Hamiltonian systems. The method applies to finite dimensional systems and also to lattice systems and to PDE's, including some ill posed ones. In coupled map lattices, we can also construct solutions of infinitely many frequencies which do not vanish asymptotically.
Introduction
The goal of this note is to describe an approach to the existence of whiskered tori for symplectic maps and Hamiltonian systems. Besides applying to finite dimensional systems, the method applies to lattice systems and to some Hamiltonian partial differential equations, including ill-posed ones. The method also leads to efficient numerical algorithms and can be used to validate numerical computations. These results are developed in detail in [8] (finite dimensional systems), [9] (lattice systems) and [5] (PDE's) and [15] (numerical algorithms).
Of course, the problem of existence of whiskered tori has a very long history in Hamiltonian mechanics starting with [12,29]. In this short note we cannot do justice to the extensive literature and compare different methods (the papers above contain more than 20 pages of references). The more modest goal of this note is to give some introduction to the ideas in the above papers so as to serve as a reading guide. Since the detailed estimates are presented in the papers mentioned above we will just describe here the results somehow informally.
Our method to study whiskered tori is an a posteriori treatment of an invariance equation. We show that if there is a function which satisfies some non-degeneracy conditions (including twist, approximate hyperbolicity and that the function is an embedding) and satisfies the invariance equation up to a sufficiently small error, then there is a true solution of the invariance equation nearby. This exact solution corresponds to a whiskered torus. Up to some obvious symmetries, this exact solution is unique in a neighborhood. One of the key ingredients in our treatment of lattice systems is the introduction of suitable norms which not only define a topology but also have a good algebraic structure. This is accomplished by taking into account the fact that the interactions between nodes of the lattice have some localization properties.
The a posteriori approach in KAM theory was emphasized in [22,21,23,28]. In these papers, it was shown that analytic results in an a posteriori format together with some quantitative estimates, imply finite differentiability results. In [23] it was shown that a posteriori results imply convergence of some perturbation expansions. They also lead rather automatically to results on regularity with respect to parameters.
A posteriori results also imply the usual persistence results for quasi-integrable systems (just take the exact solutions of the integrable system as approximate solutions of the quasi-integrable system). But the approximate solutions can be obtained by other methods (numerical calculations, formal expansions, etc.) so that a posteriori methods can justify the results of these methods.
We emphasize that the a posteriori results presented here requires neither that the system under consideration is close to integrable, nor that it is expressed in action-angle variables, nor that the hyperbolic bundles are trivial and much less that the motion in the hyperbolic directions is reducible to constant coefficients.
One novelty of the method is that we use the geometric properties of the problem very little; the successive corrections do not require transformation theory, but rather are applied additively. This makes the method well adapted for the infinite dimensional situations, provided that we have a good functional analysis framework. Furthermore, the method also leads to very efficient numerical implementations. A Newton step discretizing the function in N Fourier coefficients requires only O(N ) storage and O(N log N ) operations. See [15].
The functional analysis framework we use for lattice systems is an extension of a framework developed in [18,7]. The main idea is to use function spaces that capture the "local effects" but, at the same time, satisfy several Banach algebralike properties. In this way, the steps in the KAM iteration satisfy estimates very similar to the finite dimensional ones. For PDE's we adapt the two spaces approach of [16]. It is an important remark that we do not need that the PDE defines an evolution in the whole space. To find solutions of the invariance equation, it suffices that we can define forward evolution in some spaces, backwards evolution in others and that these evolutions have exponential rates of decay. If these evolutions are smoothing, one can deal with unbounded non-linearities.
Taking advantage of several of the features of the method we construct solutions of lattice systems of increasing complexity and consider the limit of the corresponding sequences. In particular we can construct solutions of Klein-Gordon lattices which are almost-periodic. These solutions do not decrease at infinity and the frequencies are all bounded. We think that the role of these solutions in the transport of energy in lattice systems deserves further investigations.
The parameterization method for whiskered invariant tori
We recall that a quasi-periodic function of time is a function which can be written in the form $x(t) = \sum_{k \in \mathbb{Z}^\ell} x_k e^{2\pi i k\cdot\omega t}$, where $\omega \in \mathbb{R}^\ell$ is a frequency vector. Equivalently, introducing the function (often called the hull function) $K(\theta) = \sum_{k \in \mathbb{Z}^\ell} x_k e^{2\pi i k\cdot\theta}$, we have $x(t) = K(\omega t)$, where $K$ is a function defined on the torus $\mathbb{T}^\ell$.
If $\omega$ is rationally independent, the function $x(t) = K(\omega t)$ is a solution of a given differential equation $\dot x = X(x)$ if and only if $K$ is a solution of
(2.1)   $\partial_\omega K(\theta) \equiv \sum_{j=1}^{\ell} \omega_j\, \partial_{\theta_j} K(\theta) = X(K(\theta)).$
If $DK$ has rank $\ell$, we can think of $K$ as an embedding of $\mathbb{T}^\ell$ into the phase space $M$ of $X$ (which may be infinite dimensional). Equation (2.1) implies that, at a point in $\mathrm{Range}(K)$, the vector field $X$ is tangent to $\mathrm{Range}(K)$. Hence if (2.1) is satisfied, $\mathrm{Range}(K)$ is a diffeomorphic image of $\mathbb{T}^\ell$ invariant under $X$.
In the case of a map $F$, the invariance equation becomes
(2.2)   $F \circ K = K \circ T_\omega,$
where $T_\omega(\theta) = \theta + \omega$. Equations (2.1) and (2.2) are the centerpieces of our method. For the sake of brevity in this note, we describe only the case of maps, since the geometric considerations are easier to explain. There are simple arguments which show that the results for flows can be deduced from the results for maps. Nevertheless, giving a direct treatment for the case of flows has the advantage that it can serve as a blueprint for results in PDE's. This direct treatment for flows can be found in [8] and the treatment for PDE's can be found in [5].
Results for maps in finite dimension
We recall that $\omega \in \mathbb{R}^\ell$ is Diophantine when there exist $\kappa > 0$, $\tau \ge \ell$ such that
$|\omega\cdot k - n| \ge \kappa |k|^{-\tau}$ for all $k \in \mathbb{Z}^\ell \setminus \{0\}$ and all $n \in \mathbb{Z}$.
We denote the set of Diophantine vectors $D_\ell(\kappa, \tau)$. The most important hypothesis of our result is that there is a function $K_0 : \mathbb{T}^\ell \to M$ satisfying (2.2) with enough accuracy compared to the rest of the hypotheses. We will assume that $M$ is a $2n$-dimensional Euclidean manifold and we will denote by the same symbol its universal covering and its complex extension. The other hypotheses are roughly that the tangent space at the range of the embedding $K_0$ admits an approximately invariant splitting such that the hyperbolic directions have dimension $2(n-\ell)$, the center directions have dimension $2\ell$, and that there is a twist condition along the center directions.
It is well known that if the torus supports an irrational rotation, the symplectic form restricted to it vanishes (i.e. it is an isotropic manifold). Also the tangent vectors do not grow either in the future or in the past. The preservation of the symplectic structure forces that the vectors symplectically conjugate to tangent vectors do not grow exponentially either. Therefore we are asking that these directions are the only center directions.
To make precise the size of the error, we need some spaces and norms. We will consider functions defined on extensions D ρ = {θ ∈ C ℓ /Z ℓ | | Im θ i | ≤ ρ} of the torus T ℓ which are analytic in the interior and extend continuously up to the boundary.
We endow the space of such functions with the norm $\|K\|_\rho = \sup_{\theta\in D_\rho} |K(\theta)|$, where $|K(\theta)|$ denotes some norm in the phase space.
Theorem 1. Assume that $F : U \subset M \to M$ is an exact symplectic mapping which is real analytic and extends to $\tilde U$, a complex extension of $U$. Assume furthermore that $\omega \in D_\ell(\kappa, \tau)$ is a given Diophantine vector. Let $K_0 : D_\rho \supset \mathbb{T}^\ell \to \tilde U$, $\rho > 0$, be a real analytic embedding which satisfies the invariance equation (2.2) up to an error $E$ and the following hypotheses. (H2) The tangent space along the range of $K_0$ admits an approximately invariant splitting into stable, center and unstable subbundles. (H2.1) Denoting by $\Pi^{s,c,u}_{K_0(\theta)}$ the projections corresponding to the splitting in (H2), we assume moreover that the splitting is approximately hyperbolic: the stable and unstable rates are separated from the rates in the center directions. (H3) Assume that the average of the matrix $A(\theta)$ appearing in the twist condition is non-singular. Then, given $0 < \rho' < \rho$, there exist constants $C$, $\varepsilon_0$, depending explicitly on the Diophantine and non-degeneracy constants above, such that if $\varepsilon \equiv \|E\|_\rho \le \varepsilon_0$, there is an exact solution $K$ of (2.2) with $\|K - K_0\|_{\rho'} \le C\varepsilon$. Furthermore, the invariant torus is whiskered: it has an invariant splitting satisfying (3.3). The distance between the original approximately invariant splitting and the final invariant splitting can also be bounded from above by $C\varepsilon$.
Furthermore, there is a constant $C_1$ such that if there is another solution $\tilde K$ of (2.2) with $\|\tilde K - K\|_{\rho'} \le C_1$, then $\tilde K$ coincides with $K$ up to a shift of the parameterization, $\tilde K(\theta) = K(\theta + \varphi)$ for some $\varphi$.
The bundles $E^s$, $E^u$ could be empty. In this case, the theorem reduces to the standard theorem of existence of maximal Lagrangian tori.
Actually, hypothesis (H2.2) is automatically satisfied under the hypothesis that the dimension of the center space is just 2ℓ and that the error in the invariance equation is small enough.
In assumption (H2), we assumed that there is an approximately invariant splitting to emphasize that the conditions of Theorem 1 are verifiable in numerical approximations. As is well known in the theory of hyperbolic systems, the existence of approximately invariant hyperbolic splittings implies the existence of exactly invariant hyperbolic splittings which are close to the approximately invariant ones.
The explicit expressions C and ε 0 are slightly cumbersome to write, but they are very easy to program since they are the composition of a few rather easy expressions.
In particular, κ, ρ ′ /ρ enter into ε 0 just as a factor κ −4 (ρ ′ /ρ) −4τ +2 . As pointed out in [22,28] this leads immediately to estimates on the measure of the tori in the quasi-integrable case and also to finite differentiability results. The dependence on the non-degeneracy conditions (twist, hyperbolicity constants, norm of K and of N ) is also power-like. This leads to "small twist" and "small hyperbolicity" theorems and allows to justify some degenerate perturbation theories. As pointed out in [22,28], given a finitely differentiable problem one can construct a sequence of analytic problems that approximate it. Taking a solution of a problem as an approximate solution for the next one, and applying the quantitative result, we can obtain a sequence of functions that converge to the solution of the original problem.
Sketch of the proof of Theorem 1
The method of proof is an iterative Newton-like method. Having some error $E = F\circ K - K\circ T_\omega$ in the invariance equation (2.2), the Newton method tries to find $\Delta$ satisfying the linearized equation
(4.1)   $DF\circ K\,\Delta - \Delta\circ T_\omega = -E,$
so that the error is reduced quadratically in some appropriate norms. Actually we will solve (4.1) up to a quadratic error. As usual in KAM theory, the quadratic estimates will be obtained in a weaker norm than the norm of the original error.
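As a quick sanity check on the quadratic-convergence claim, the following one-line Taylor expansion (a sketch in the notation just introduced, not a computation quoted from the cited papers) shows why solving (4.1) suffices:

% Error after one Newton step K -> K + \Delta, with E = F\circ K - K\circ T_\omega:
\begin{aligned}
F\circ(K+\Delta) - (K+\Delta)\circ T_\omega
  &= E + (DF\circ K)\,\Delta - \Delta\circ T_\omega + O(\|\Delta\|^2) \\
  &= O(\|\Delta\|^2) = O(\|E\|^2),
\end{aligned}
% using that \Delta solves (4.1) and that \|\Delta\| is controlled by \|E\|
% (with the usual loss of analyticity domain coming from the small divisors).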
4.1. The use of normal hyperbolicity. The first step is to use standard methods in normal hyperbolicity theory [10,17,25] to show that close to the approximately invariant splitting assumed in (H2) there is a truly invariant one.
The standard argument in [17,25] seeks the invariant space as the graph of a linear map $a(\theta)$ from the approximately invariant space to its complement. One can manipulate the invariance equations into a fixed point equation (4.2) for $a$, whose right-hand side is built from linear operators $A$, $B$, $C$ and a remainder $R$ which is quadratic in $a$. The different rates of contraction assumed in (H2) imply that $\|A(\theta)\|\cdot\|B(\theta)\| \le \gamma < 1$, and the approximate invariance implies that $C$ is small. Therefore, just using elementary algebra, one can show that if we consider the RHS of (4.2) as an operator acting on $a$, then it is a contraction in some ball of a suitable Banach space of analytic functions.
Taking into account that the dynamics on the torus is a rotation, we can obtain that the fixed point $a(\theta)$ depends analytically on $\theta$. The key observation is that the $2n \times 2\ell$ matrix $M(\theta)$, formed by juxtaposing the columns of the $2n \times \ell$ matrices $DK$ and $J(K)^{-1} DK\, N$, satisfies an approximate reducibility identity (4.4), in which $A$ is the matrix introduced in assumption (H3) and $R$ is an explicit algebraic expression involving $DE$, the projections on invariant subspaces and derivatives of $F$. One consequence of (4.4) is that $\mathrm{Range}(M(\theta))$ is close (up to terms that are comparable to the error) to $E^c_\theta$. If we write $\Delta^c = M W$, then (4.3) along the center is equivalent (up to terms which are quadratic in the error $E$) to an equation (4.5) for $W$. Taking components in (4.5) gives the two small divisors equations (4.6) and (4.7) for suitable right-hand sides. Using an argument inspired by Rüssmann's translated curve theorem [27,19] we show that the average of the RHS of (4.7) vanishes. Hence, we can find $W_2$ using the theory of constant coefficient difference equations and determine $W_2$ up to an additive constant. If the twist condition (H3) is met, we can choose the average of $W_2$ so that the average of the RHS of (4.6) is zero. Hence we can find $W_1$ solving a constant coefficient difference equation. We note that $W_1$ is unique up to the addition of a constant and that this is the only non-uniqueness in the solution of the linearized equation. This is reflected in the uniqueness results.
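For readers unfamiliar with "the theory of constant coefficient difference equations" invoked above, the following standard Fourier-series computation (a sketch in our own notation, not taken from the cited papers) shows how such a small-divisor equation is solved and where the Diophantine condition enters:

% Solve \varphi(\theta + \omega) - \varphi(\theta) = \eta(\theta) with zero average \hat\eta_0 = 0.
% Expanding in Fourier series \varphi = \sum_k \hat\varphi_k e^{2\pi i k\cdot\theta}:
\hat\varphi_k \,\bigl(e^{2\pi i k\cdot\omega} - 1\bigr) = \hat\eta_k
\quad\Longrightarrow\quad
\hat\varphi_k = \frac{\hat\eta_k}{e^{2\pi i k\cdot\omega} - 1}, \qquad k \ne 0 .
% The Diophantine condition gives |e^{2\pi i k\cdot\omega} - 1| \ge c\,\kappa\, |k|^{-\tau},
% so the divisors are controlled, \varphi is analytic on a slightly smaller strip,
% and it is unique up to the free choice of the average \hat\varphi_0.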
Once we have found $W$ that satisfies the linearized equation approximately, it is standard to show that, provided $\Delta$ is small enough and $K + \Delta$ is in the domain of $F$ (which is implied by $\|E\|_\rho (\rho - \rho')^{-2\tau}\kappa^{2} \ll 1$), we have the quadratic estimate (4.8): the error of $K + \Delta$ in the invariance equation (2.2) is bounded, in the norm $\|\cdot\|_{\rho'}$, by a constant times $\|E\|_\rho^{2}$. We also note that the invariant splitting for $DF \circ K$ is approximately invariant for $DF \circ (K + \Delta)$ and that the twist condition has only deteriorated by an amount that can be bounded from above by $\|\Delta\|_{\rho'}$ and hence by $\|E\|_\rho \kappa^{2}(\rho - \rho')^{-2\tau}$.
From (4.8) it is standard in KAM theory that, given sufficiently small initial conditions, the procedure can be iterated and that it converges.
The vanishing of the average in the second component of (4.6) is discussed in [4]. The papers [8,9] use two different methods inspired by the translated curve theorem of [27].
Coupled map lattices and coupled oscillators
In this section, we discuss the adaptation of the method of Section 2 in an infinite dimensional framework. Therefore, we will need to define appropriate spaces of embeddings and spaces of diffeomorphisms. We will also need to make sure that the (rather minimal) geometry used in the iterative step also makes sense in the infinite dimensional setting.
In our treatment, a very important role is played by the fact that the interaction between oscillators decays rapidly with the separation among them. Following [18,7] we will formulate these localization properties using decay functions.
A prototype of the systems we deal with is the coupled Klein-Gordon model (5.1), with the coupling terms $W_j$ having a suitable decay with respect to $j$ (see later). We assume that $V$ is a potential such that the equation $\ddot x = -V'(x)$ admits both an elliptic and a hyperbolic fixed point, and we also assume that $\delta$ is small enough depending on the non-degeneracy properties of the KAM tori in the one-particle system. Our main result is that the systems under consideration admit solutions which are quasi-periodic whiskered breathers. These solutions can be described as follows: most of the sites are close to the hyperbolic fixed point but there are oscillations close to the elliptic fixed point.
The solutions we construct are at the boundary between solutions which are localized and solutions that are propagating energy. We think it would be interesting to study transitions among these solutions and their role in Arnold diffusion.
5.1. Spaces of localized functions. The main technique we will use is the introduction of some Banach spaces of embeddings which capture the notion that the embeddings are essentially localized around a finite number of centers. These adapted norms are chosen in such a way that composition and multiplication satisfy the same estimates as in the finite dimensional space. Therefore, the proof of Theorem 1 presented in Section 3 goes through when we replace the finite dimensional norms by the adapted ones on lattices.
5.1.1. Decay functions. Following [18], we say that $\Gamma : \mathbb{Z}^d \to \mathbb{R}_+$ is a decay function when
$\sum_{j \in \mathbb{Z}^d} \Gamma(j) \le 1$ and $\sum_{j \in \mathbb{Z}^d} \Gamma(i-j)\,\Gamma(j-k) \le \Gamma(i-k)$ for all $i, k \in \mathbb{Z}^d$.
For example, the function $\Gamma_{\alpha,\theta}$ defined by $\Gamma_{\alpha,\theta}(i) = a|i|^{-\alpha} e^{-\theta|i|}$ if $i \ne 0$ and $\Gamma_{\alpha,\theta}(0) = a$ is a decay function for $\theta \ge 0$, $\alpha > d$, provided that $a \le a^*(\alpha, \theta, d)$. However, the function $\exp(-\theta|i|)$ is not a decay function (see [18]).
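As a quick illustration, and as a numerical sanity check of the two decay-function conditions written out above from the standard definition in [18], the sketch below checks both conditions for the example Gamma_{alpha,theta} on a truncated one-dimensional lattice; the truncation radius and the constants a, alpha, theta are arbitrary choices for the experiment.

import numpy as np

def gamma(i, a=0.05, alpha=2.5, theta=0.0):
    """Candidate decay function Gamma_{alpha,theta} on Z (d = 1)."""
    i = abs(i)
    return a if i == 0 else a * i ** (-alpha) * np.exp(-theta * i)

R = 200                      # truncation radius for the numerical check
sites = range(-R, R + 1)

# Condition 1: the values are summable with sum <= 1.
total = sum(gamma(j) for j in sites)

# Condition 2 (convolution bound): sum_j Gamma(i-j)Gamma(j-k) <= Gamma(i-k).
worst_ratio = 0.0
for i in range(-20, 21):
    for k in range(-20, 21):
        conv = sum(gamma(i - j) * gamma(j - k) for j in sites)
        worst_ratio = max(worst_ratio, conv / gamma(i - k))

print(f"sum Gamma = {total:.4f} (should be <= 1)")
print(f"worst convolution ratio = {worst_ratio:.4f} (should be <= 1)")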
5.1.2. Phase space of lattice systems. The phase space of the lattice map system will be the space of sequences $(x_i)_{i\in\mathbb{Z}^d}$ with $x_i \in M$ and $\sup_i |x_i| < \infty$, where $M = \{(x, y) \in \mathbb{C}^k/\mathbb{Z}^k \times \mathbb{C}^j \mid |\operatorname{Im} x|, |\operatorname{Im} y| \le \rho\}$ is the complex extension of a Euclidean exact symplectic manifold, with a symplectic form $\Omega$.
We consider M endowed with the ℓ ∞ topology.
5.1.3. Diffeomorphisms with decay properties. Given a decay function $\Gamma$, we define the space $C^1_\Gamma$ of $C^1$ diffeomorphisms $F$ of the lattice phase space which are real analytic and whose derivatives admit a matrix representation with entries bounded in terms of $\Gamma$, in the sense of condition (5.2). We also denote by $C^1_\Gamma$ the set of vector fields $X$ that satisfy the previous conditions. We will refer to the functions in these spaces as diffeomorphisms with decay or vector fields with decay.
Remark. Prof. L. Sadun remarked that the condition (5.2) for the interactions has a very transparent physical interpretation. Indeed, note that |∂ j F i | can be interpreted as a bound of the direct effect of system j on the system i. On the other hand, |∂ k F i ||∂ j F k | is a bound on the effect of the system j on the system i interacting through the system k. Hence k∈Z d |∂ k F i ||∂ j F k | is an upper estimate of the effect of j upon i through modifications of the medium. Hence condition (5.2) ensures that the bound on the direct interaction between two systems also dominates the interactions through the medium.
Remark. The axiomatic definition of the space $C^1_\Gamma$ is nontrivial because we have given the $\ell^\infty$ topology on the lattice. As emphasized in [7], there are linear operators in $\ell^\infty$ which are not determined by their matrix elements. For example, the linear operator $A : \ell^\infty(\mathbb{Z}) \to \mathbb{R}$ defined by $Ax = \lim_{i\to\infty} x_i$ on a closed subspace (then extended by the Hahn-Banach theorem) satisfies $\partial(Ax)/\partial x_i = 0$ for all $i \in \mathbb{Z}$, so that the matrix of partial derivatives is zero, but clearly $A$ is not identically zero. The existence of these observables with vanishing partial derivatives has the physical meaning of the existence of "observables at infinity".
In $\ell^\infty$ we can consider linear operators $A$ which are given by their matrix representation, i.e. $(Av)_i = \sum_j A_{ij} v_j$, with matrix elements bounded in terms of $\Gamma$. We call them decay operators. By (5.2) we have the Banach algebra property $\|AB\|_\Gamma \le \|A\|_\Gamma \|B\|_\Gamma$. An important consequence of the properties of decay functions is that if $F, G \in C^1_\Gamma$ then $F \circ G \in C^1_\Gamma$. More generally, we define $C^r_\Gamma$ maps as the maps $F : B \subset M \to M$ such that $F \in C^r(B)$ and $D^j F(x) \in C^1_\Gamma(B, \ell^\infty)$, for $0 \le j \le r - 1$.
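The Banach algebra property quoted above follows directly from the convolution bound for decay functions; the short computation below is our own sketch, assuming the usual operator norm $\|A\|_\Gamma = \sup_{i,j} |A_{ij}|\,\Gamma(i-j)^{-1}$ (the precise definition does not survive in the extracted text).

% Matrix elements of the product, bounded via the decay-function convolution property:
|(AB)_{ij}| \le \sum_{k} |A_{ik}|\,|B_{kj}|
            \le \|A\|_\Gamma \|B\|_\Gamma \sum_{k} \Gamma(i-k)\,\Gamma(k-j)
            \le \|A\|_\Gamma \|B\|_\Gamma \,\Gamma(i-j),
% hence \|AB\|_\Gamma \le \|A\|_\Gamma \|B\|_\Gamma.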
Another result, whose proof can be found in [9], is the following lemma. Lemma 2. For a vector field with decay, $X \in C^r_\Gamma$, the corresponding initial value problem admits a unique solution (flow) $S_t \in C^1([0, T], C^r_\Gamma)$. Using Lemma 2, we can obtain results for vector fields from results for maps. Of course, coupled map lattices have been studied extensively on their own [1].
We will consider maps $F$ of the form $F = F_0 + \varepsilon F_1$ with $F_0 = \prod_{i \in \mathbb{Z}^d} f$ (the uncoupled product of copies of $f$), where $f : M \to M$ is an analytic, exact symplectic map with a positive measure set of KAM tori with twist condition and a hyperbolic fixed point. By assumption, the map $F_0$ has a hyperbolic fixed point; we will choose the origin of coordinates in such a way that it corresponds to $0$. For embeddings localized around a finite set of centers $c = (c_1, \ldots, c_L)$, the norm $\|K\|_{c,\rho,\Gamma}$ is defined by $\sup_{i \in \mathbb{Z}^d} \|K_i\|_\rho \min_{j=1,\ldots,L} \Gamma(i - c_j)^{-1}$.
The key observation is that, for all $i \in \mathbb{Z}^d$ and a fixed sequence $c$ whose centers are very separated, estimates of the form (5.3) hold, with constants $C$ and $D$ depending on $\Gamma$; when the centers are well separated, these combine into the bounds we need. We have estimates similar to (5.3) for any decay operator in place of $DF$.
5.2. The analogue of Theorem 1 for lattice systems. The version of Theorem 1 for lattice systems requires the following additional hypotheses (5.5):
• The map $F$ is a decay diffeomorphism and is such that $F(0) = 0$.
• The approximate invariant embedding is a decay embedding with respect to some centers $c$.
• The error of the invariance equation is small in the norm $\|\cdot\|_{c,\rho,\Gamma}$.
• The projections over the approximately invariant hyperbolic splitting have decay properties. That is, the operators $\Pi^\sigma$, $\sigma = s, c, u$, are bounded in $\|\cdot\|_{c,\rho,\Gamma}$.
After these modifications in the hypotheses and the conclusions, the proof of the result goes through line by line provided we change the norms in the above calculations to be the norms in the decay spaces.
In particular, the proof of the perturbation result for invariant spaces is based on the contraction properties, which only depend on the Banach algebra properties of $\|\cdot\|_\Gamma$ and the assumptions in (5.5).
The equations on the center space are finite dimensional, so that the treatment in Section 3 does not need any change. As a matter of fact, we just need that the pull-back of the formal symplectic form $\Omega_\infty = \sum_{i \in \mathbb{Z}^d} \Omega$ on the lattice phase space to the center subspace (which is isomorphic to the cotangent bundle of a torus) makes sense. Since this space is finite dimensional and since we take the pull-back through decay functions, the pull-back is a well-defined symplectic form on the torus.
Solutions with infinitely many frequencies.
Using the a posteriori format of the theorem, the decay properties of the whiskered tori and the translation invariance, we can construct increasingly complicated solutions. The idea is that, given two exact solutions (with two frequencies ω 1 and ω 2 ) localized around two collections of sites c 1 , c 2 , if we translate one of them far away so that the two solutions interact weakly with each other, then we can consider them as an approximately invariant solution localized aroundc ≡ c 1 ∪ (c 2 + k), where k ∈ Z d is the translation on the lattice. Then the existence result shows that, if the frequencies are jointly Diophantine, and the solutions satisfy some non-degeneracy and hyperbolicity conditions, there is a true solution localized aroundc.
The process can be repeated indefinitely. In each step of the procedure, the hyperbolicity and non-degeneracy conditions deteriorate only slightly and we take some slightly weaker decay properties of the interaction. Of course, as we increase the number of frequencies, the Diophantine conditions become worse. The key point is that any smallness condition on the error (in a space corresponding to a slightly slower decay) can be accomplished by translating the trial solutions sufficiently far apart (namely increasing |k|).
The sequence of solutions constructed by iterating the procedure converges in the sense that the motion on each of the sites (i.e. componentwise on the lattice) converges. Of course, each step produces a severe change in some of the sites, so that the convergence is very non-uniform. Nevertheless, the convergence on the sites is enough to guarantee that the limit satisfies the invariance equation.
Solutions in lattice systems with infinite frequencies were constructed also in [11,26,2] based on very different arguments (solutions decreasing [11], frequencies increasing [2]) from the ones used here. The solutions in the above papers are maximal dimensional, rather than whiskered. We also call attention to [24] which produces solutions with infinitely many frequencies in the "beam equations" and to [14,13] which also consider the existence of quasi-periodic breathers with long range coupling. These solutions have many elliptic normal directions, which we do not consider.
For simplicity, we state the main result for the Klein-Gordon models (5.1).
Theorem 3. Consider models of the form (5.1) with analytic potential with decay, and assume that the single site equations $\ddot x = -V'(x)$ admit both
• A set $\Omega_0 \subset D_\ell(\kappa, \tau)$ of positive measure such that $\omega \in \Omega_0$ implies that there is a non-degenerate KAM torus of the single site equations.
• A hyperbolic fixed point.
Assume also that δ is sufficiently small.
Then for each infinite frequency $\omega \in \Omega_0^{\mathbb{N}}$ satisfying the Diophantine-type condition (5.6) for some $\kappa_R > 0$ and $\tau_R \ge R\ell$, and for every $\alpha > d$, $\theta \ge 0$, there exists a sequence $c \equiv \{c_i\}_{i\in\mathbb{N}} \subset \mathbb{Z}^d$ and an embedding $K : (\mathbb{T}^\ell)^{\mathbb{N}} \to M$ satisfying (2.1). Furthermore, the embedding satisfies a localization estimate for all $i \in \mathbb{Z}^d$, in which $\|\cdot\|_{(D_\rho)^{\mathbb{N}}}$ is the norm $\|\cdot\|_\rho$ on the extension of the infinite dimensional torus $(\mathbb{T}^\ell)^{\mathbb{N}}$.
We also note that if we equip Ω 0 with the probability measure Λ obtained by normalizing the Lebesgue measure on it (the Kolmogorov measure), the set of frequencies satisfying (5.6) has full Λ N probability measure.
The result for PDE's
The theory for whiskered tori described in the previous sections is flexible enough to be used to construct whiskered tori for partial differential equations. In particular, this allows us to obtain results in the case of ill-posed (in the sense of Hadamard) equations.
Equation (6.1) (resp. (6.2)) is extremely ill posed, as can be seen by the fact that, when we linearize at $u = 0$ (resp. $u = 0$, $v = 0$), the dispersion relation gives eigenvalues $\lambda^\pm_k$ with $\lambda^\pm_k \approx \pm 4\pi^2 \sqrt{\alpha}\,|k|^2$ as $k \to +\infty$. Nevertheless, one can prove the existence of analytic finite dimensional quasi-periodic solutions. The existence of center manifolds for (6.1) and (6.2) has been considered in [3]. In the case of PDE's, one has of course to take into account that the evolution is not given by a continuous vector field but rather by some unbounded operators. Of course, in applications, one has to consider the approximate solutions used to initialize the KAM procedure. We note that the linear system is degenerate (no twist), but if we consider Lindstedt series solutions of small amplitude, the degeneracy gets removed at some order in the expansion. The error can be made small to a much higher order, so that we can apply the small twist theorem.
The main result in this context follows from an abstract theorem following the proof in the finite dimensional case for evolution equations defined in an abstract Banach space. We need to assume that the hyperbolic splitting defined in (H2) is a splitting into closed subspaces. The evolution in the stable subspace is assumed to define a semigroup for positive time, while the evolution in the unstable space is assumed to define a semigroup for negative time. Similar ideas are very standard and have been used in the literature [6,20]. In the PDE case, the existence of invariant splitting amounts to spectral properties of the linearization.
Population density , habitat dynamic and aerial survival of relict cave bivalves from genus Congeria in the Dinaric karst
INTRODUCTION
Biodiversity in the subterranean realm is generally low compared to surface habitats of the same region. This is presumably due to the very specific and often harsh environmental conditions, such as the lack of light and low nutrient availability (Gibert & Deharveng, 2002). In that context, some of the functional species groups are mostly absent in subterranean ecosystems, i.e., primary producers and primary consumers (herbivores; Mohr & Poulson, 1966).
Unlike surface habitats, where low biodiversity is often coupled to high population densities, in caves population sizes do not seem to be large, at least judging from the time and effort researchers require to collect cave fauna (Trajano, 2001). There are exceptions, and certain species can form large aggregations in some caves (e.g., Fenolio & Graening, 2009). One of these exceptions is the cave bivalves of the genus Congeria, which can cover most or all of the available surfaces in some localities (H.B., B.J., pers. observ.). However, the total number of individual bivalves in any of the sites has never been assessed and is not known.
Within the last decade, and with the application of molecular phylogenetic methods, it became apparent that subterranean diversity is often underestimated, and the existence of cryptic species is a very common phenomenon in subterranean habitats. As a result, the majority of groundwater species and/or lineages have very narrow distributions, regularly not exceeding 200 km (Trontelj et al., 2009). One of these species is Congeria kusceri, Bole, 1962, which was considered to be the only known troglobiotic bivalve species until recently. Today we know that different populations of the previously described C. kusceri represent three distinct species (Bilandžija et al., 2013), including Congeria jalzici, Morton & Bilandžija, 2013 and Congeria mulaomerovici, Morton & Bilandžija, 2013. All three species are found in only 15 cave localities in the Dinaric Karst region: 1 in Slovenia, 6 in Croatia, and 8 in Bosnia and Herzegovina (Jalžić, 1998; Jalžić, 2001; Bilandžija et al., 2013). Their distributions do not overlap and the range of each individual species is actually very small. The most widespread is C. kusceri, found in 8 localities within the Neretva River basin, in Croatia and Bosnia and Herzegovina. The second most widespread is C. jalzici, known from 4 localities: 1 isolated site in Slovenia and 3 in Croatia, within the Lika River basin. Congeria mulaomerovici is known from only 3 localities in the Sana River basin in Bosnia and Herzegovina. The most research has been done on C. kusceri (e.g., biology and anatomy, Morton et al., 1998; phylogeny, Stepien et al., 2001; life history, Morton & Puljas, 2013; growth and longevity, Puljas et al., 2014), as some of the caves it inhabits are easily accessible and available for research throughout the year.
Congeria is one of the three extant genera in the family Dreissenidae. Phylogenetic analyses place Congeria in close relationship with Mytilopsis, while the nominal genus Dreissena is positioned in a separate clade (Stepien et al., 2001; Bilandžija et al., 2013). All three species of Congeria are considered Tertiary relicts that began to diverge at the end of the Miocene. The related genera Mytilopsis and Dreissena share similarities in their life history traits, such as a life span of two years, a high reproductive rate and rapid growth (Morton, 1969; Morton, 1989; Pathy & Mackie, 1992). In addition, both genera contain highly invasive species on a global scale (i.e., Dreissena polymorpha (Pallas, 1771), Dreissena bugensis Andrusov, 1897, Mytilopsis leucophaeata (Conrad, 1831)). Their reproductive strategy has an advantage for invading new territories and probably explains their success in introduced areas (Borcherding, 1991; Ram et al., 2011). Further, they are most commonly found between 2 and 12 m of depth and can tolerate a wide range of water salinities and temperatures (Erben et al., 1995; Stanczykowska, 1977).
In contrast, representatives of the genus Congeria have developed some specific and unique features, presumably due to their adaptation to the subterranean environment. Similarly to other stygobiont species, Congeria kusceri has a long life span of more than 50 years (Puljas et al., 2014) and reaches sexual maturity at the age of 10 years. It has an annual reproductive cycle, produces just a few offspring, and broods early larvae in the ctenidia and juveniles in mantle pouches (Morton & Puljas, 2013). These features contribute to the low recovery potential of these species, making cave bivalves very susceptible to environmental changes.
Congeria kusceri is listed as Vulnerable (VU) in the European Red List of Non-marine Mollusks based on IUCN criteria (Cuttelod et al., 2011). Since C. jalzici and C. mulaomerovici have been described only recently (Morton & Bilandžija, 2013), their IUCN status has not yet been evaluated on the international level. However, both C. jalzici and C. kusceri in Croatia are listed as Critically Endangered (CR) (Bilandžija & Jalžić, 2009). They are distributed in areas heavily affected by man-made changes: building of hydroelectric power plants and redirection of water courses left many sinkholes (including Congeria sites) without any influx of water. Unfortunately, human interventions are still ongoing and future plans include another dam in the Lika River basin that will affect at least two but more likely all three C. jalzici localities in Croatia, and a series of dams in the upper Neretva River basin in Bosnia and Herzegovina that may have an effect on most or all of the C. kusceri sites.
Congeria species are also included in Annexes II and IV of the Habitats Directive (94/93/EEC). As a part of the preparation for the admittance of Croatia to the European Union (EU) and the obligation to comply with the EU's nature conservation legislation, especially related to the implementation of the Habitats Directive, we conducted research on Congeria, including searches for new localities, visits to all known sites, studies of population ecology and assessments of threats. Here we report some of the results of these multiannual studies.
In the first part we focus on the C. kusceri population from Jama u Predolcu cave, located in the town of Metković, Croatia, with the goal of determining the population size. In the second part we describe the results of a two-year survey of water temperature and water levels in 3 caves inhabited by C. kusceri: Jama u Predolcu, Pukotina u tunelu Polje Jezero-Peračko Blato, and Žira, as well as in a single C. jalzici locality: Markov Ponor (Fig. 1). We also report on the ability of these species to survive long periods living outside of the water. Our research gives us insights into the basic habitat characteristics of these unique species and is valuable from the species conservation perspective. Since C. jalzici and C. kusceri are under great threat from both past and future pressures, it is important to establish reference points for environmental parameters that can serve for planning future interventions as well as for subsequent monitoring of Congeria populations.
Density survey
We estimated the density of the Congeria kusceri population in Jama u Predolcu cave, in the town of Metković, Croatia. This 71 m long and 20 m deep cave (Fig. 2) is composed of three chambers: the Entrance Hall, the Speleothem Passage and the Water Passage. A lake at the bottom of the Entrance Hall is divided into the Shallow and Deep Lake by a ridge. The Entrance Hall is illuminated, but not much sunlight reaches the level of the lakes, which are 10 m below the entrance. The Speleothem Passage leads to the Water Passage, which ends in a Third Lake. This site was chosen because the cave and subterranean lakes are easily accessible throughout the year.
The population size was estimated based on a census of 10% of the surface in the Shallow and Deep Lake, and 20% of the surface in the Third Lake. Since the bottom of all three lakes is covered in mud and sediment and does not provide suitable habitat for C. kusceri, counting was performed only on lake walls. Counting lasted over two weeks (in June and September 2012) and was performed by four or five divers. Counting was carried out in squares. The population estimation was calculated following Hanson (1967), since both of the presumptions for this method were met; i.e., the animals were sampled at random, and all the animals existing on each plot were counted only once. The method is based on the mean number of animals recorded per plot, and the variance of the mean, which is used to calculate the 95% confidence limit. The results are then extrapolated to the whole surface of the lake walls.
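To make the plot-census arithmetic concrete, here is a minimal Python sketch of the Hanson (1967)-style estimate described above: the mean count per plot and its variance give a 95% confidence interval, which is then scaled to the total wall area. The counts, plot size and wall area below are made-up illustrative numbers, not the study's data.

import math

def hanson_estimate(counts_per_plot, plot_area_m2, total_wall_area_m2):
    """Extrapolate plot counts to a whole-wall population with a 95% CI.

    counts_per_plot: individuals counted in each randomly placed plot.
    plot_area_m2: area of one census plot.
    total_wall_area_m2: total wall surface to which the mean is extrapolated.
    """
    n = len(counts_per_plot)
    mean = sum(counts_per_plot) / n
    var_mean = sum((c - mean) ** 2 for c in counts_per_plot) / (n - 1) / n
    half_width = 1.96 * math.sqrt(var_mean)          # 95% confidence limit of the mean

    scale = total_wall_area_m2 / plot_area_m2        # number of plot-areas in the wall
    return (mean - half_width) * scale, (mean + half_width) * scale

# Hypothetical example: 1 m^2 plots censused on a 195 m^2 lake wall.
low, high = hanson_estimate([180, 260, 0, 410, 95, 220], plot_area_m2=1.0,
                            total_wall_area_m2=195.0)
print(f"estimated population: {low:.0f} - {high:.0f} individuals")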
In order to examine a possible correlation between the number of bivalves and depth, correlation coefficient was calculated for each lake separately.Data were grouped by meters of depth for each of the lakes (0 m to maximum 9 m in the Deep Lake) and examined with nonparametric Spearman's rank order correlation test, with significance level of P < 0.05.Analyses were performed using STATISTICA software (StatSoft Inc., 2014).
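The depth-density test described above can be reproduced with SciPy's Spearman rank correlation; the sketch below uses hypothetical per-depth densities for a single lake, so the numbers are illustrative only.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical densities (ind/m2) grouped by metre of depth (0-9 m, as in the Deep Lake)
depths = np.arange(10)
densities = np.array([5, 180, 260, 150, 90, 40, 20, 10, 5, 2])

rho, p_value = spearmanr(depths, densities)
print(f"Spearman r = {rho:.2f}, P = {p_value:.3f}")
if p_value < 0.05:
    print("Depth and density are significantly correlated")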
Water level and temperature data
We placed data loggers (HOBO 250-Foot Depth Water Level Data Logger, Onset, USA) in three C. kusceri and one C. jalzici cave.All three C. kusceri caves are in the Neretva River catchment.The Žira and Jama u Predolcu caves are part of the Trebišnjica river basin, a left tributary of the Neretva River, whereas the Pukotina u tunelu Polje Jezero-Peračko Blato Cave is located on the right bank of the Neretva River.The Žira Cave is a sinkhole of the Trebišnjica River and is located upstream of the Jama u Predolcu Cave.The Jama u Predolcu receives water mostly from the Trebišnjica River but its water level is also influenced by the Neretva River since it is situated directly in the Neretva river valley.
Data loggers recorded water level and temperature every 45 minutes. A data logger was placed in the Shallow Lake of Jama u Predolcu Cave and recorded data from 23 June 2010 until 19 September 2012, in total a period of 816 days. In Pukotina u tunelu Polje Jezero-Peračko Blato, data were collected between 18 September 2010 and 14 September 2012, for a total of 726 days. A third data logger was placed in Žira Cave in the period from 3 September 2011 until 8 September 2012 (370 days).
Water level and temperature data for C. jalzici were monitored in Markov Ponor, a sinkhole in the Lika River basin, located over 250 km northwest of the Neretva River and the C. kusceri sites. Data were recorded from 8 September 2010 until 21 September 2012, a period of 744 days. On 18 October 2011 the data logger was moved to another, more secluded place, since it had become detached from its original spot and was found lying in the rocks 2 m away.
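Summary statistics of the kind reported in Table 1 can be derived directly from the 45-minute records. The sketch below assumes a hypothetical CSV export of one logger with columns timestamp, temp_C, and level_m; the file name and column names are assumptions, not part of the original data set.

import pandas as pd

records = pd.read_csv("predolac_logger.csv", parse_dates=["timestamp"])

summary = {
    "N": len(records),
    "T max": records["temp_C"].max(),
    "T min": records["temp_C"].min(),
    "T mean": records["temp_C"].mean(),
    "T SD": records["temp_C"].std(),
    "H2O max": records["level_m"].max(),
    "H2O min": records["level_m"].min(),
    "Delta depth": records["level_m"].max() - records["level_m"].min(),
    "H2O mean": records["level_m"].mean(),
    "H2O SD": records["level_m"].std(),
}
print(summary)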
RESULTS
Fig. 3. Density of individuals recorded at different depths in Jama u Predolcu Cave.
Density survey
We counted a total of 7,412 individuals in all three lakes of Jama u Predolcu cave. The highest number was registered in the Shallow Lake, with 4,111 individuals, followed by 3,010 individuals in the Deep Lake, and 291 individuals in the Third Lake. Based on these results, we estimated the number of individuals for all three lakes; the population ranges between 40,995 and 41,240 individuals in the Shallow Lake, between 30,024 and 30,185 individuals in the Deep Lake, and from 1,435 to 1,481 individuals in the Third Lake. Altogether, the population size for the Jama u Predolcu Cave ranges between 72,454 and 72,906 individuals.
The density of individuals varied considerably, both among and within lakes, and depended on the microhabitat conditions and depth in each of the lakes. The highest total density was recorded in the Shallow Lake, where the mean density was 211 ± 305 individuals per square meter (ind/m2). The recorded mean density was 103 ± 201 ind/m2 in the Deep Lake, and 59 ± 111 ind/m2 in the Third Lake.
Our analyses showed that there is a correlation between depth and the number of individuals in all three lakes, although it was not significant in the Third Lake (Third Lake P > 0.05, r = -0.62; Shallow Lake P < 0.05, r = -0.75; Deep Lake P < 0.01, r = -0.84). The highest density of individuals was observed between 1 and 3 m of depth (Fig. 3); in the Shallow Lake the average density between 1 and 3 m was 267 ± 333 ind/m2 with a maximum value of 1,625 ind/m2, in the Deep Lake the average was 192 ± 286 ind/m2, and in the Third Lake 107 ± 153 ind/m2. Such high standard deviations are a result of the fact that bivalves are very patchily distributed on cave walls; in certain areas they form large congregations, whereas in others, such as on smooth surfaces, they are not present at all.
Water level and temperature data
In Jama u Predolcu Cave, the data logger collected 25,975 records of temperature and water level in the period of 816 days.The data are shown in Table 1 and Fig. 4.There are several abrupt peaks in water level, short in both the size (max 2 m) and duration.The fact that natural hydrological conditions are interrupted is obvious from the irregular pattern of peak occurrence (October and November 2010, and February and April 2012) including no high peaks in 2011.The temperature data is more in accordance with expected values; peak temperatures of above 18°C were recorded in summer months each year and lower values (around 13°C) in winter months.
In the Pukotina u tunelu Polje Jezero-Peračko Blato Cave the data logger recorded 23,253 measurements in the period of 726 days.The data are shown in Table 1 and Fig. 5.A period of low water levels started in June of both 2011 and 2012.Water levels began rising in late autumn, followed by oscillations during the whole winter and spring with several high peaks throughout that period.Temperature data shows that the lowest temperatures coincide with highest water levels and vice versa.For example, the highest water level peak recorded in November/ December 2010 coincided with the lowest recorded temperature (7.18°C).
In Žira Cave the data logger recorded 11,879 measurements in a period of 370 days (Table 1 and Fig. 6).A flat line in the water level data is present because the data logger was above the water for most of the time.Consequently, the details for water level changes during the low water levels are missing.High peaks (> 0.5 m change) occurred in December 2011, and February and April 2012.The temperature in the cave was very stable throughout the recording period, with a total variation of only 1.08°C.
In Markov Ponor sinkhole the data logger recorded a total of 23,695 measurements during 744 days (Table 1 and Fig. 7). However, due to the detachment of the data logger, on 18 October 2011 it had to be moved to a more protected spot where the chances that the water current would detach it again were minimized. Therefore, during the first period, from 8 September 2010 until 17 October 2011, 12,910 measurements were recorded, and from 18 October 2011 until 21 September 2012, 10,761 measurements. Other than just a few peaks, the water level was about the same and followed a predictable pattern. Starting in June 2011, the water level dropped, reaching the lowest point in early to mid-August; in late autumn it increased several meters and then showed only slight oscillations from December to June. The highest change in water level was recorded at the end of November 2010, when the water level rose by 40 m.
Aerial survival
We have observed Congeria individuals from all three species living on air-exposed cave walls for a part of the year.The proportion of the population that gets exposed to air is small but it happens in most localities: for C. kusceri it is Žira, Pukotina u tunelu Polje Jezero-Peračko Blato, Jasena, Gradnica, Doljašnica, and Plitica; for C. jalzici it is Markov Ponor and Dankov Ponor; and for C. mulaomerovici Suvaja and Dabarska pećina.Only in five localities bivalves do not get exposed to air and these localities are mostly springs with direct outlet towards the surface where, as far as we know, there is no dry rock surface in the cave itself.
During the period of low water levels in Markov Ponor, the period of air exposure that bivalves were able to survive lasted for more than two months.Often, patches of bivalves living outside of the water were found in passages that were far away (tens of meters) from remaining lakes and ponds which excludes the possibility that they migrate in and out of the water.Interestingly, we would invariably find individuals with their shells open and inhalant and exhalant syphons extruded (Fig. 8).
DISCUSSION
The densities of individuals in extreme environments, such as caves, are usually limited due to the low food availability. This applies to the majority of stygobiont species; for example, cave populations of fishes are generally smaller than those living in epigean habitats (Trajano, 1997; Bichuette & Trajano, 2015). There are exceptions, like the sulfidic, nutrient-rich caves where the animals live in high-density populations (e.g., Jourdan et al., 2014). We have observed Congeria bivalves forming dense populations during our field surveys. Results of the population size study show that C. kusceri can live in populations with up to 1,625 individuals per square meter. The reason for such high density could be related to the filter-feeding lifestyle of this species. There are two more sedentary filter feeders in the Dinaric Karst, the cave sponge Eunapius subterraneus Sket & Velikonja, 1984 and the cave tube worm Marifugia cavatica Absolon & Hrabe, 1930, which co-occurs with Congeria at all sites. Both the sponge and the tube worm can form large colonies and live in dense populations (H.B., B.J. pers. observ.). In addition, Stygobromus emarginatus (Hubricht, 1943), a non-sedentary filter-feeding amphipod, also has high population sizes (Knapp & Fong, 1999). Filter feeding therefore seems to be a good feeding strategy that can sustain large populations of subterranean animals.
High densities of cave bivalves suggest that subterranean waters they inhabit might be rich in organic matter.Changes in water levels, as indicated by our data, are very rapid and presumably can carry a rich supply of organic matter from the surface to the subterranean realm which would be the basis for sustaining large colonies of filter feeders.For example, C. jalzici lives in the sinkholes which are likely more nutrient rich due to the influx of organic matter with the water from the surface.Congeria kusceri lives in caves with various hydrological functions but it has been found only in the most downstream parts of the Neretva River basin.Water sinking and resurfacing in the upper parts of the catchment likely contributes to organic matter enrichment so that there are enough nutrients to sustain large colonies of bivalves (and in some sites cave tube worms too) in the downstream parts of the basin.Measurements of nutrient concentrations along river basins are needed to confirm this hypothesis but was outside the scope of this paper.
Other species in the family Dreissenidae also live in high-density populations. Moreover, invasive species of dreissenid mussels are known to overgrow any sort of hard substrate (including other bivalves), and their densities in infested areas are commonly higher than a few thousand ind/m2 (Burlakova et al., 2006; Strayer & Malcolm, 2006; Nalepa et al., 2010). There are big differences in bivalve densities between the different lakes in Jama u Predolcu. The low density in the Third Lake is potentially a consequence of lower nutrient availability. Namely, sunlight reaches the surface of the lakes in the Entrance Hall and can potentially enable the growth of some photosynthetic organisms, which can in turn form the basis of the food web. Further research is needed to test this hypothesis. The difference between bivalve densities in the Deep and Shallow Lakes in the Entrance Hall can be explained by the micro relief. The Deep Lake has mostly simple and smooth cave walls, whereas the Shallow Lake has very rugged cave walls and therefore a greater surface for attachment of the animals.
Depth distribution of individuals shows resemblance in all three lakes.Congeria kusceri prefers depths between 1 and 3 m, with the peak at around 2 m depth in all three lakes (Fig. 3), which corresponds to some of the dreissenid species which also prefer shallow waters (e.g., D. polymorpha which is commonly found between one and 30 m; Stanczykowska, 1977;Mackle et al., 1989;Snyder et al., 1997).
Interestingly, Congeria bivalves are able to survive extended periods outside of the water, which is not a commonly encountered phenomenon. According to our data from Markov Ponor, these periods can last for more than 2 months. Remarkably, during those times we have observed C. jalzici individuals with their valves slightly open and exhalant and inhalant siphons extended, suggesting they are not dormant (Fig. 8). Some estuarine bivalves can survive outside the water, but for much shorter time periods (hours to several days; e.g., Hiroki, 1977). In addition, several freshwater mussels can survive up to one year buried in mud, but their shells are kept tightly closed (e.g., Velesunio ambiguus; Jones, 2007), and they lose up to 40% of their body mass. Further, the effects of desiccation depend upon temperature, and a cooler environment (like the one Congeria inhabits) means a longer survival time (Firth et al., 2011; Urian et al., 2011). In the closely related dreissenid species D. polymorpha and D. bugensis, emersion tolerance is lower than in other freshwater species, and at temperatures below 25°C and higher relative humidity, individuals are able to survive slightly more than 10 days (McMahon et al., 1993; Ussery & McMahon, 1995). This trait in Congeria may therefore have evolved as a response to the specific karst subterranean conditions where water level changes can be very severe. A big advantage in caves is that the air humidity is very high, often around 100%, so the threat of desiccation is minimized. However, the fact that the bivalves keep their shells open implies they may be extracting food or oxygen from the thin layer of water or just drops of water dripping over them from the cave ceiling. If so, this would be a unique adaptation in bivalves, and could help them to survive long periods outside of the water.
According to our data, C. kusceri lives in a range of temperatures from 7.2 to 19.4°C. Previously reported temperatures are in the higher range; Morton et al. (1998) and Puljas et al. (2014) reported a temperature of 13.5°C, whereas Jalžić (1998) reported temperatures between 14.5 and 19°C, probably because most stygofauna research is done in the summer when water levels are low. The highest temperatures in C. kusceri localities were recorded in May 2012, except in Jama u Predolcu Cave where the peak was in August 2012. Low temperature extremes are always accompanied by a water level rise, meaning that an influx of cold water from the surface causes them. For instance, the lowest temperature in Žira and Jama u Predolcu caves was recorded at the same time, in February 2012, and was caused by the water level rise in the whole Trebišnjica river basin. In Pukotina u tunelu Polje Jezero-Peračko Blato and Markov Ponor the lowest temperature was recorded in December 2010, which coincides with the highest water level peak recorded in our study, suggesting that a heavy rainfall or other weather event filled up subterranean aquifers with cold water.
Temperature variations in Žira Cave were minimal (Δ = 1.08°C) because of hydrotechnical changes in Trebišnjica basin.The river was channeled into a concrete riverbed and the sinkholes were cut off from the water supply (except maybe during very high levels) leaving precipitation as the only source of water in the subterranean portion of the basin.This is also the cause of only 2 -2.6 m water level change in both localities in Trebišnjica basin, Jama u Predolcu and Žira cave.Temperature variations were higher for the other two C. kusceri sites (Jama u Predolcu Δ = 7.75°C, Pukotina u tunelu Polje Jezero-Peračko Blato Δ = 10.27°C) as well as in C. jalzici site (Markov Ponor Δ = 8.37°C).However, both temperature and water levels can change very rapidly indicating that Congeria species are not highly sensitive to variations in environmental conditions.
The highest water level peaks tended to coincide in all four localities (November/December 2010) and are probably a consequence of the autumn with above average rainfall, especially in south Croatia.Another event in the Neretva river basin that is evident in all three C. kusceri sites happened in February/April 2012.The lowest water levels occurred in autumn 2011 but do not exactly match in different localities and are probably a result of a dry summer/autumn in the whole region.However, it is also obvious from water level data that natural hydrological conditions are very disturbed.In general, one would expect a trend of low water levels and high temperatures during the summer and the opposite during the winter, but there are many exceptions (e.g., high water levels from June to beginning of October 2011 in Jama u Predolcu or occurrence of peak temperatures in Pukotina u tunelu Polje Jezero-Peračko Blato in May of both 2011 and 2012).In addition to hydrotechnical changes in Trebišnjica basin (already explained above) another reason for these events could be the use of uncontrolled quantities of water for agricultural purposes, which has been observed in some tributaries of Neretva River (Bonacci et al., 2012).Lika river basin, which is characterized by extreme and very quick changes in its discharge, has been heavily impacted by human activities as well, and for the same purpose: electric energy production (Bonacci & Andrić, 2008).Markov Ponor has been cut off from the river and no water sinks into the cave, although excess water is released into the sinkhole during high water conditions to prevent flooding of Lipovo Polje, as in the autumn of 2010.
It is unknown exactly how the hydrological modifications described above affected Congeria populations, because there was no research on Congeria at the time the changes were made, but we can deduce that they had devastating consequences. For example, Bole (1962) reported that Congeria was growing like clusters of grapes in the last chamber of Žira Cave. Today only individual bivalves can be seen on the walls in the deepest part of the cave. The water level in Jama u Predolcu dropped by 10 m after the Trebišnjica river dam was built. Since today the maximum depth of the lakes in the cave is 10 m, it is possible that more than half of the Congeria population was destroyed at that time. Further, thick layers of dead shells can be found throughout the passages of Markov Ponor, suggesting a catastrophic event, probably not too long ago. The large hydrological changes in the basin when the Lika hydroelectric power plant was built are the likely cause. In addition, bivalves vanished from two previously known localities: Crni Ponor and Izvor kod kapele Sv. Mihovila. In the latter we found a Congeria population in the mid-1990s, but 20 years later there were no bivalves in the spring. The entrance to the spring was widened and today daylight penetrates the entire cave, including the subterranean pool where the bivalves used to live, which may have caused their disappearance.
There are a number of other threats to Congeria populations: karst waters are in general very susceptible to contamination from domestic sources and agricultural runoff, and water extraction which can significantly reduce the water levels in the underground water bodies.In addition, tourism development in cave systems can also influence the subterranean habitats and fauna.Unfortunately, in the last few years Jama u Predolcu has also been used as a tourist site, and many changes were made without the proper assessment of the effects on Congeria and other fauna, and without establishing a baseline for subsequent monitoring of the habitat quality and Congeria population health.
Although the population from Jama u Predolcu seems to be large and vigorous, it is important to stress that all three species of the genus Congeria have been found in only 15 caves. Since Congeria is a long-lived and slowly reproducing animal, it is highly sensitive to habitat changes. Data on ecology and population sizes are urgently needed to establish management plans for all three species and for the whole distribution area, similar to the plan already developed for Croatia (Bilandžija et al., 2014). Conservation of subterranean fauna and habitats should be of the highest priority since the karst system of the Dinaric mountains is considered to be one of the most unique underground systems in the world (Culver & Sket, 2000; Deharveng et al., 2012).
CONCLUDING REMARKS
Bivalve species from the genus Congeria are sedentary filter feeders. Unlike the majority of cave species, they form populations with high densities. We estimated the population size of Congeria kusceri in Jama u Predolcu to be between 72,454 and 72,906 individuals. The highest density, reaching a maximum of 1,625 individuals per square meter, similar to surface species from the family Dreissenidae, was observed between 1 and 3 m of depth.
The Dinaric karst region, as one of the globally important subterranean hotspots, is under strong anthropogenic pressure. Our data show that hydrotechnical interventions have impacted all Congeria sites we monitored and, due to the interconnectivity of the whole karst system, they have likely influenced all C. kusceri and C. jalzici localities. This is clearly visible in the water level data that do not follow the expected natural cycle. Our results show that C. jalzici lives in colder caves and is exposed to much higher water level oscillations than C. kusceri. During periods of low water some of the bivalves become exposed to air, a condition that lasts for more than two months. Uniquely among bivalves, we found individuals exposed to the air to be active during that period.
Fig. 1. Congeria kusceri and C. jalzici localities included in our study. Lika river basin: 1) Markov Ponor; Neretva river basin: 2) Pukotina u tunelu Polje Jezero-Peračko Blato, 3) Jama u Predolcu, and 4) Žira.
Fig. 5. Water level (blue) and temperature (black) in Pukotina u tunelu Polje Jezero-Peračko Blato from 18 September 2010 until 14 September 2012.
Fig. 8. An individual with open shells found outside of the water in Markov Ponor (photo by H. Bilandžija).
Table 1. Data logger measurements at three sites: Jama u Predolcu, Pukotina u tunelu Polje Jezero-Peračko Blato, and Žira. N: number of measurements; T max: highest recorded temperature; T min: lowest recorded temperature; T mean ± SD: average temperature at the site over the measured period ± standard deviation; H2O max: highest water level recorded at the site; H2O min: lowest water level recorded at the site; Δ depth: difference between the highest and lowest water level; H2O mean ± SD: mean water level ± standard deviation.
|
2018-12-17T22:40:44.647Z
|
2017-01-01T00:00:00.000
|
{
"year": 2017,
"sha1": "9b519b8472e06dfebd693023c9a61152bf973ac7",
"oa_license": "CCBYNC",
"oa_url": "https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=2020&context=ijs",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9b519b8472e06dfebd693023c9a61152bf973ac7",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Geology"
]
}
|
13235061
|
pes2o/s2orc
|
v3-fos-license
|
The Role of Tricho-Rhino-Phalangeal Syndrome (TRPS) 1 in Apoptosis during Embryonic Development and Tumor Progression
TRPS1 is a GATA-type transcription factor that is closely related to human tricho-rhino-phalangeal syndrome (TRPS) types I and III, variants of an autosomal dominant skeletal disorder. During embryonic development, Trps1 represses Sox9 expression and regulates Wnt signaling pathways that determine the number of hair follicles and their normal morphogenesis. In the growth plate, Trps1 regulates chondrocyte condensation, proliferation, and maturation, as well as phalangeal joint formation, by functioning downstream of Gdf5 signaling and by targeting Pthrp, Stat3, and Runx2. Also, Trps1 protein directly interacts with an activated form of Gli3. In embryonic kidneys, Trps1 functions downstream of BMP7, promoting the mesenchymal-to-epithelial transition and facilitating tubule morphogenesis and ureteric bud branching. Moreover, Trps1 has been found to be closely related to tumorigenesis, invasion, and metastasis in prostate and breast cancers. It is interesting to note that during the development of hair follicles, bones, and kidneys, mutations in Trps1 cause, either directly or through crosstalk with other regulators, a notable change in cell proliferation and cell death. In this review, we will summarize the most recent studies on Trps1 and seek to elucidate the role of Trps1 in apoptotic regulation.
Introduction
Mutations of the TRPS1 gene lead to tricho-rhino-phalangeal syndrome (TRPS) types I and III (TRPS I, OMIM 190350; TRPS III, OMIM 190351), which are variants of an autosomal dominant malformation syndrome characterized by craniofacial and skeletal abnormalities [1]. Craniofacial malformation includes sparse scalp hair, a bulbous tip of the nose, a long flat philtrum, a thin upper lip, and protruding ears. Skeletal abnormalities include cone-shaped epiphyses at the phalanges, hip malformations, and short stature. Momeni P., et al. have assigned TRPS1 to human chromosome 8q24, which is mapped on a proximal locus of the gene EXT1 [2].
The TRPS1 gene is approximately 260.5 kb in length and consists of seven exons. The Kozak consensus ATG translation start site is located in the third exon. Both human and mouse TRPS1 genes encode a large zinc-finger GATA-type nuclear protein comprising 1,281 amino acid residues and a calculated molecular mass of 141 kDa. There is 93% similarity between the TRPS1 proteins of the two species [3,4].
The protein domain of TRPS1 consists of nine zinc-finger domains, including a single GATA-type DNA binding domain flanked by two potential nuclear localization signals (NLS) and two C-terminal zinc-finger domains sharing a similarity of 55% with those of the Ikaros transcription factors [2]. TRPS1 belongs to the GATA transcription factor family. Unlike the other family members from GATA-1 to GATA-6, which have two C4-type GATA zinc-fingers and function as transcriptional activators [5], TRPS1 has only one C4-type zinc-finger and functions as a transcription repressor. An intact GATA zinc-finger is indispensable for DNA-binding, whereas the repressive activity of TRPS1 depends on the two C-terminal Ikaros-like zinc-finger domains. Truncated TRPS1 without 119 of the residues in the C-terminal (C119) fails to repress GATA4-activated transcription while a fusion protein GATA4 + C119 is exerting repressive activity. The two C-terminal zinc-finger domains of the Ikaros family have been reported to be involved in protein-protein interactions [6] through which Ikaros binds to several repressor proteins, including CtBP [7], Sin3 [8], and Brg1 [9]. After translocating to the nucleus, TRPS1 binds to the consensus GATA sequences in the promoter regions of target genes by its C4-type zinc-finger and suppresses transcription through the protein-protein interactions of its Ikaros-like domain by forming heterodimers with other transcriptional repressive factors [4]. The use of yeast 2-hybrid assays has demonstrated that TRPS1 can interact with the light chain 8 protein (LC8a) [10] and the ring finger protein RNF4 [11]. DNA binding assays and reporter studies revealed that binding TRPS1 with either LC8a or RNF4 diminishes the interaction of TRPS1 with GATA consensus sequences, consequently releasing the repressor activity of TRPS1. Recently, Trps1 has been reported to bind to the promoters of several Wnt inhibitors including Wif1, Apcdd1, and Dkk4, activating their transcription [12]. This adds the role of transcriptional activator to the established role of Trps1 as a transcription repressor.
The function of Trps1 has been studied by tracing Trps1 expression during embryonic development in mice and by morphogenesis studies in Trps1-deficient mice. Trps1 is expressed in a dynamic pattern with strict spatio-temporal regulation during mouse embryonic development [13]. Trps1 mRNA is detected prior to E7.5 with peak levels at around E11.5. From E12.5 to E14.5, Trps1 expression is intense in the facial region and pharyngeal arches, including the joints of limbs, maxilla, mandible, snout, prospective phalanges, and hair follicles [4,13,14]. Trps1 expression is also detected in the developing lungs, gut, kidneys, and mesonephric ducts [13,15]. Both Trps1-null and Trps1 Δgt/Δgt mice die of respiratory failure soon after birth due to abnormalities of their thoracic spines and ribs [14,16]. Trps1 −/− newborns have a prominently decreased number of hair follicles; skeletal malformation, including shortened bones in the limbs, facial abnormalities, and thoracic defects; poorly inflated lungs; and ill-developed kidneys [14][15][16][17]. The phenotypes found in Trps1-deficient mice mirror those found in TRPS patients, suggesting the indispensable role of TRPS1 in embryonic development.
Before Trps1 was assigned to be the causative gene whose mutation leads to tricho-rhino-phalangeal syndrome, GC79 (the previous name for TRPS1) was found to be one of the differentially expressed genes between androgen-dependent and independent prostate cancers [3]. Subsequently, Trps1 was identified to be the most prevalently expressed gene in breast tumors [18]. Collective evidence shows that Trps1 may play a protective role in preventing tumor growth, invasion, and metastasis by promoting apoptosis and counteracting epithelial-mesenchymal transitions [3,19,20]. Recently, it has been reported that patients with mutations in the TRPS1 gene have increased susceptibility to tumorigenesis [21].
Apoptosis, also known as programmed cell death, is crucial in embryonic development, organ morphogenesis, normal tissue cycling, and tumorigenesis. Trps1 has been shown to play pivotal roles in the development of bone [14], distal phalangeal joints [22], the temporomandibular joint [17], and hair follicles [23], as well as in tumor progression [24], by regulating key factors that participate in signaling pathways controlling cell proliferation and apoptosis. Here, we will briefly review Trps1 and how it exerts its function in embryonic development and tumor progression by interfering with cell proliferation and death.
The Role of Trps1 in Developing Hair Follicles, Bone, and Kidney
TRPS1 is indispensable for normal hair follicle morphogenesis. Trps1 protein first appears in the dorsal epidermis in the undifferentiated epithelium at E14.0, where its expression is transient and diffuse. Later, at E15.5, Trps1 protein is observed in the dermal condensate; by E17.5, Trps1 protein is restricted to dermal papilla cells and the mesenchymal cells surrounding the hair pegs and underlying the epidermis [25]. Trps1 Δgt/Δgt neonates have an almost 50% reduction in the number of pelage follicles in the dorsal skin and completely lack vibrissa follicles [16]. Developing Trps1 Δgt/Δgt vibrissa follicles are irregularly spaced with reduced number and size compared to their wild-type littermates at E16.5. Just after the initiation of epithelial peg downgrowth, further development ceases and the follicles subsequently degenerate [12]. A marked increase in proliferation throughout Trps1 Δgt/Δgt mutant vibrissa follicles has been demonstrated by immunostaining for Ki67. Elevated Sox9 expression at both the mRNA and protein levels is suggested to be the underlying mechanism [23]. A number of studies have demonstrated the pro-survival role of Sox9 during chondrogenesis [26] and hair follicle development [27], and recently it has been reported to have oncogenic potential [28]. Trps1 represses Sox9 transcription by direct promoter binding [23]. It is therefore likely that enhanced Sox9 expression due to a loss of Trps1 leads to active cell proliferation.
Microarray hybridization analysis comparing wild-type and Trps1 Δgt/Δgt whisker pads at an early stage of vibrissa development (E12.5) identified a number of transcription factors and Wnt inhibitors that are downregulated, and several extracellular matrix proteins that are upregulated, in the Trps1 Δgt/Δgt whisker pad.
Wnt signaling is upregulated in epithelial cells of the placode in mutant follicles [12]. Trps1 is likely to regulate the normal early vibrissa follicle organogenesis by orchestrating a wide range of gene expression.
Trps1 −/− newborn mice have shortened long bones and incompletely formed phalangeal joints. Histological examination of Trps1 −/− newborn tibiae reveals an expanded proliferative zone with a reduced chondrocyte density and a normal area of the hypertrophic zone in the growth plate. The differentiation of Trps1 −/− chondrocytes is normal, with unaltered expression of Col II, Col X, and Ihh signaling [14]. In contrast to Trps1 −/− mice, Trps1 Δgt/Δgt mice, which carry an in-frame mutation in the GATA DNA-binding domain, show both an elongated proliferative and an elongated hypertrophic zone, with enhanced Ihh signaling and elevated Col X and Runx2 expression. Trps1 −/− mice represent the animal model of human TRPS type I, whereas Trps1 Δgt/Δgt mice reflect TRPS type III, which is similar to TRPS type I except for severe generalized shortening of all phalanges and metacarpals [29,30]. Although the underlying mechanism is not clear, it is speculated to be due to a gain of function of the Trps1Δgt allele, in addition to the loss of DNA-binding and repressive activity [31]. Apoptosis in hypertrophic chondrocytes is greatly inhibited in mutant growth plates compared to wild-type littermates. An in vitro study has demonstrated that cultured primary Trps1 −/− chondrocytes are more resistant to Jo2-induced apoptosis than wild-type chondrocytes. Reporter and ChIP assays have revealed that Trps1 inhibits Stat3 transcription by directly binding to GATA consensus sequences in the Stat3 promoter [14]. Stat3 has been reported to directly increase Bcl-2, an important anti-apoptotic regulator, and to support hepatocyte survival [32]. Thus, it is possible that Trps1 exerts its pro-apoptotic functions by indirectly repressing Bcl-2 expression. Indeed, Trps1 −/− chondrocytes showed higher Bcl-2 expression after death induction by Jo2. Another study has demonstrated that Trps1 represses PTHrP expression by direct promoter binding, contributing to an expanded proliferative zone in the Trps1 −/− growth plate [33]. PTHrP inhibits major death pathways by blocking signaling via p53, death receptors, and mitochondria in tumor cells, so Trps1 can be expected to play a pro-apoptotic role by counteracting PTHrP [34,35].
The development of the mammalian kidney is the result of a programmed reciprocal induction between ureteric buds (UBs) and the metanephric mesenchyme [36]. During kidney development, Trps1 is expressed in the ureteric buds, cap mesenchyme, and renal vesicles. Trps1 −/− developing kidneys exhibit fewer tubules and glomeruli and have an expanded interstitium compared to wild-type mice [15], and defects in UB branching are also observed, which are possibly due to the abnormal mesenchyme and ineffective reciprocal induction [37]. Several studies have been done to reveal the role of Trps1 in developing kidneys. Trps1 acts downstream of Bmp7 via the Bmp7/p38 MAPK/Trps1 pathway, mediating the Bmp7-induced mesenchymal-to-epithelial transition, which leads to the formation of tubules and glomeruli and is essential for normal renal development [15]. The loss of Trps1 leads to increased activation of TGF-β/Smad3 signaling [38], which is also observed in developing kidneys [39]. Expression patterns of several genes associated with the TGF-β/Smad3 signaling pathway, such as Rb1cc1, Arkadia, Smurf2, Smad7, and c-ski, are altered in Trps1 −/− mice during kidney morphogenesis. Previous studies on TGF-β1 during tubulomorphogenesis have indicated that TGF-β1 functions as a negative regulator in ureteric bud development. The addition of TGF-β1 to the culture medium of ureteric buds leads to a decrease in overall size, branching numbers, and BrdU-positive UB tip cells and an increase in the thickness of UB stalks and the number of cells undergoing apoptosis [39][40][41], whereas the exogenous addition of SIS3, a Smad3 inhibitor, was able to restore Trps1 −/− UB branching [39].
Multiple genes are required for branching morphogenesis; however, the molecular mechanism remains largely unclear. Many studies suggest that active cell proliferation and inhibited apoptosis are key features of normal kidney development [42,43]. A number of genes, such as Pax2 [43], Bcl-2 [44], Mdm2 [45], and Mmp9 [46], have been identified as fate-determining genes in kidney morphogenesis by regulating cell apoptosis. Mutations in these genes cause fulminant apoptosis in UB cells and a subsequent dramatic decrease in UB branching. In accordance with this hypothesis, Trps1 −/− developing kidneys manifest low levels of cell proliferation and boosted apoptosis [39]. Hence we speculate that Trps1 plays a role in maintaining cell proliferation and counteracting apoptosis in normal kidney morphogenesis.
TRPS1 Promotes Apoptosis and Counteracts Metastasis in Tumor Cells
TRPS1 was first discovered as one of the differentially expressed genes between androgen-dependent (LNCaP-FGC) and androgen-independent (LNCaP-LNO) prostate cancer cell lines [47]. TRPS1 protein was found to co-express with androgen receptors and PSA in androgen-dependent prostate cancers. TRPS1 is androgen-repressible in androgen-dependent, but not in androgen-independent, prostate cancers. Recently, TRPS1 mRNA was also detected in human breast cancer xenografts [48].
TRPS1 has begun to attract wide attention as an important regulator of apoptosis, as it has been found that TRPS1 protein expression is androgen-suppressive in androgen-dependent (LNCaP-FGC) prostate cancer cells but not in androgen-independent (LNCaP-LNO) prostate cancer cells [47]. Adding androgen to the culture medium promotes tumor cell growth and simultaneously represses TRPS1 mRNA expression [47]. In a regressing rat ventral prostate, castration-induced androgen withdrawal causes prostate cell apoptosis, a notable increase in Trps1 mRNA levels, and changes in a set of oxidative stress-related genes [3,49,50]. On the other hand, over expression of the TRPS1 gene by transiently transfecting TRPS1-encoding vectors leads to a dramatic increase in the apoptotic index of both COS-1 cells and LNCaP prostate cancer cells [3]. Proteomic analysis of androgen-independent DU145 prostate cancer cells that do not express TRPS1 and of genetically engineered DU145 cells that stably express recombinant TRPS1 have demonstrated that TRPS1 suppresses the protein expression of certain antioxidants, including superoxide dismutase, protein disulfide isomerase A3 precursor, endoplasmin precursor, and annexin A2 [51]. These studies suggest that the involvement of TRPS1 in apoptosis is to occur by perturbing the intracellular reduction-oxidation balance.
As previously discussed, in hair follicles of developing mice, Trps1 is able to activate several Wnt inhibitors, thus suppressing Wnt signaling [12]. Although there are no studies investigating the relation between Trps1 and Wnt signaling in tumor cells, the deregulation of β-catenin activation does lead to tumors, such as colon cancer, leukemia, hair follicle tumors, and a wide variety of solid tumors [52], and this oncogenesis is at least partly related to elevated expression of cyclin D1, c-myc, and the anti-apoptotic factor survivin [53][54][55]. Hence, it will be interesting to examine Trps1 expression in tumors that harbor up-regulated Wnt signaling. We speculate that Trps1 might play a role in tumorigenesis by regulating cell proliferation and cell death via interference with Wnt signaling.
Based on a comprehensive differential gene expression screen, Trps1 has been revealed as one of the most prevalent genes that is specifically overexpressed in breast cancer [18]. A quantitative immunohistochemistry (qIHC) approach found that TRPS1 protein expression in breast cancers above a certain threshold was correlated with markedly improved overall survival [20]. Trps1 is a target gene of miR-221/222 in luminally originated breast cancer cells counteracting EMT that restrains tumor cells from metastasis [19]. Collectively, TRPS1 is considered to be related to a better clinical prognosis of breast cancer.
Conclusion
An increasing number of studies are demonstrating that Trps1 is an indispensable factor in embryonic development, growth plate regulation, and tumor progression. Among these, a number of studies have made intriguing discoveries that hint at an apoptosis-regulating role for Trps1. As reviewed above, Trps1 regulates apoptosis either by directly repressing pro-survival factors, such as Sox9 and PTHrP, or by indirectly suppressing signaling pathways, such as Wnt and JAK-STAT signaling, that favor cell survival. On the other hand, TGF-β1/Smad signaling is tightly regulated by Trps1 [38]. It has been reported that the incubation of cultured UB with TGF-β1 leads to enhanced apoptosis in UB tips. Thus, Trps1 may be perceived as an anti-apoptotic factor, a result consistent with our observation that during UB branching, Trps1 −/− UBs present enhanced expression of TGF-β1 and apoptosis (Table 1).
Table 1. Target genes of tricho-rhino-phalangeal syndrome (Trps) 1, interaction and function of Trps1, the tissues where the interaction with Trps1 takes place, and the consequences of the interaction [38,39].
In conclusion, Trps1 may act as both a pro- and an anti-apoptotic regulator depending on the microenvironment. However, no research currently exists on how these changes in apoptosis take place, and there is a lack of direct evidence showing an interaction between Trps1 and classical apoptotic regulators. It will be interesting to investigate these apoptotic pathways with carefully designed experiments to hopefully elucidate the exact role of Trps1 in the regulation of apoptosis.
|
2014-10-01T00:00:00.000Z
|
2013-06-27T00:00:00.000
|
{
"year": 2013,
"sha1": "343cb54d2674d58deee3546d0b0a207dc1c93124",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/2073-4409/2/3/496/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "343cb54d2674d58deee3546d0b0a207dc1c93124",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
58902183
|
pes2o/s2orc
|
v3-fos-license
|
Environmental Proactivity and Environmental and Economic Performance : Evidence from the Winery Sector
Environmental sustainability in the winery sector is receiving increased attention from governments, environmental groups, and consumers. The aim of this study is to explore the relationship between the degree of proactivity of a firm’s environmental strategies and its business performance. The novelty of this research work lies in its definition of business performance, which includes business environmental performance in terms of reducing the firm’s environmental impacts and eco-efficiency in the use of resources such as water, energy, and raw materials, in addition to its economic performance. A model is proposed and tested using a sample of 312 Spanish wineries. Data were analysed using partial least squares path modelling (PLS-PM). The fitness and robustness of the structural model proved adequate. The results indicate positive correlation of environmental proactivity with economic and environmental performance. Although environmental proactivity improves business performance, it has a greater impact on reducing environmental impacts and improving eco-efficiency.
Introduction
The search for new sources of competitive advantages has resulted in companies reconsidering the role of environmental topics in corporate strategy against a backdrop of overwhelming concern for environmental issues.Proof of this widespread concern is the Directorate-General for Environment study [1], which reveals that 95% of Europeans believe that protecting the environment is important to them personally.Companies can no longer remain oblivious to these widespread environmental concerns.The inclusion of environmental topics in corporate strategy is therefore a growing market requirement due to pressure from various company stakeholders [2,3].Environmental corporate strategy could also represent a source of competitive advantage [4] not only in terms of eco-efficiency or cost reduction as a result of improved environmental performance and eco-innovation, for example, but also by differentiating a company from its competitors in terms of an improvement in the company's reputation for responsible management.Eco-efficiency and improved reputation focus on creating competitiveness through environmental or sustainable progress, thus enabling companies to act as driving forces for sustainable development [5,6].
The Triple Bottom Line (TBL) concept proposed by Elkington [7] considers there to be three dimensions for tackling sustainability (the social, the economic, and the environmental dimension) and that in order to achieve sustainable development these must be linked.According to Goodland [8], however, there is a stronger link between the economic and the environmental dimension since the aim of many of the environmental strategies and instruments implemented in business is to achieve an economic-environmental balance.This balance is geared towards environmental performance and leads to economic performance, giving rise to the concept of eco-efficiency.Environmental sustainability issues include resource efficiency, dematerialization, and the reduction of waste and emissions, thereby resulting in an improvement in environmental performance and/or a lower environmental impact [9].
The European Parliament [10] defines eco-innovation as the introduction of any new or significantly improved product (good or service), process, organizational change, or marketing solution that reduces the use of natural resources (including materials, energy, water, and land) and decreases the release of harmful substances during the product life-cycle.Eco-innovation is therefore a type of innovation that instigates eco-efficiency [11].
According to North [12], the aim of environmental management is to integrate environmental protection into each corporate management function in order to achieve optimal economic and environmental company performance.Unlike an environmental reactivity strategy, which only complies with current legislation requirements, corporate environmental proactivity is characterized by the voluntary adoption of measures which help reduce the environmental impact [13].The variety and diversity of these measures transforms environmental proactivity into a complex, multidimensional construct, as indicated by Aragón-Correa and Sharma [14].A core question in this context is to identify the dimensions of environmental proactivity in wineries, a question which has seldom been addressed in previous research and which serves as the starting point for this research work.
Many publications have analysed the impact of environmental proactivity on corporate results [15] and different conclusions have been reached.While some authors (e.g., [16,17]) believe the effects of environmental proactivity to be significant and positive, others find no clear relationship (e.g., [18]) or consider them to be negative (e.g., [19]).As Sen et al. suggest [20], one of the main reasons for the lack of any conclusive results might very well be due to the lack of homogeneity both in environmental proactivity dimensions and in the variety of corporate results employed (e.g., profit margin, sales growth, stock price, or perceived customer satisfaction).Additionally, Sharma and Vredenburg [21] point out that it is difficult to make generalizations when analysing companies from different sectors.A common sectoral context would therefore facilitate control of important external influences such as the level of environmental regulation, the amount of pressure from lobby groups, and/or environmental standards of common practices in the industry [22].From an environmental perspective, however, very few studies analyse a specific sector and even fewer have focused on the winery sector [23].
This work therefore aims to fill a gap in current research by empirically analysing the effects of the level of environmental proactivity in wineries on perceived economic and environmental results.With this as its main contribution, we also explore the multidimensional nature of environmental proactivity in the winery sector.This sector was chosen not only because of the shortage of research work in this context but also due to the special significance of this industry in Spain in terms of surface area cultivated, quantity of wine produced, and economic importance of the sector.This is traditionally considered to have a low environmental impact but the wine-making business and related support activities affect other high value-added agricultural sectors, in particular, and the ecosystem of a region of origin, in general, with particular importance for supply chain selection [24] and as a key factor for regional competitiveness [25].The Spanish wine sector also faces a series of threats relating to recent environmental changes that will determine company survival.Examples of such threats are the emergence of increasingly competitive countries in the worldwide wine market or the appropriate application of Common Market Organization (CMO) wine regulations [26].In recent years we have seen how one of the tools used by wineries on a corporate level to distinguish them from their competitors is the environmental issue, both in terms of organic products and ecological processes and also in terms of protecting the environment [26,27].In this respect, the mere fact of being more environmentally proactive could result in greater sales not only in the Spanish market but also through an increase in exports or access to countries with stricter environmental controls.
In order to achieve the objectives proposed, this article is structured as follows: firstly, the theoretical framework is explored and hypotheses formulated; secondly, the data gathering method is described; thirdly, the results obtained are presented and discussed; and finally, the main conclusions and limitations of the study are outlined and ensuing future lines of research are detailed.
Theoretical Approach
Generally speaking, corporate survival depends on the natural environment and it is therefore extremely important that companies find a balance to enable system supply and enrichment.The incorporation of the environmental variable into corporate strategies, however, has largely depended on the level of corporate proactiveness [28,29].According to Lumpkin and Dess, this is understood to be an "opportunity-seeking, forward-looking perspective involving introducing new products or services ahead of the competition and acting in anticipation of future demand to create change and shape the environment" [30] (p.431).The aim of any environmental proactivity strategy is therefore to reduce the environmental impact and manage the interface between business and nature beyond imposed compliance [14,31], and this entails implementing a variety of voluntary practices and initiatives in order to improve environmental performance.
While environmental literature identifies different types of environmental practices, there is no common consensus about the number or content of each group.The authors González-Benito and González-Benito [32] categorize the set of possible environmental practices into three groups: planning and organizational practices, which include those for developing environmental policies and environmental impact analysis; operational practices, which include those that focus on more environmentally-friendly product design and development and manufacturing and operational methods and processes; and communicational practices, which concerns how the company informs its social and institutional environment of the environmental actions which it has adopted.The author Hart [33] also divides them into three large groups, but these are to do with practices relating to pollution prevention, which could result in lower costs by improving corporate profitability; those relating to product protection, which influences the selection of raw materials and product design in order to minimize the environmental impact of the goods and services on offer; and those of sustainable development, which foster market creation in undeveloped economies while promoting rational consumption in developed economies.Finally, other authors (e.g., [28,34]) simplify their classification by differentiating between those practices that indicate a basic environmental or control commitment, which consist of eliminating, reducing, or treating pollutants once they have been generated (i.e., at the end of the production process), and those that involve an advanced environmental or prevention commitment, which attempt to reduce resource consumption and avoid excessive waste and pollutant generation.
The diversity of environmental practices representing environmental proactivity and the consideration of environmental proactivity as a dynamic capability [14,35] highlight the complex nature of this construct.This complexity may result in the lack of any consensus in literature about the best way to measure environmental proactivity.While some authors use a one-dimensional approach (e.g., [29]), others favour a multidimensional view of environmental proactivity (e.g., [32]).In this respect, Banerjee et al. [36] identify four dimensions that comprise the components of a reliable, multidimensional proactive environmental construct: internal and external environmental orientation, and environmental corporate and marketing strategy.Walls et al. [37] point out that most companies appear to develop one or several of six capabilities for building an environmental strategy: a historical orientation, network embeddedness, stakeholder networks, ISO certification schemes, top management skills, and human resources developed to address environmental issues.Although both emphasize the importance of strategy in identifying and measuring environmental proactivity, neither considers whether it is a formative or reflective construct for inferring the direction of causal flow between the construct and its indicators [38].Sarkis [39] therefore recommends that a generic strategic framework be used to identify the logical sequential process in the implementation of a proactive environmental process [40]: analysis and planning, organization and implementation, and control.To begin integrating environmental proactivity into the corporate strategy, each of the dimensions of strategy should be also linked closely to various environmental issues.Since the causal action flows from the latent variable to the indicators, Wright et al. [41] consider environmental proactivity to be a second-order reflective multidimensional construct.
Various research paradigms have attempted to explain the determining factors of corporate environmental proactivity, and these include the focus of natural resources [24], the perspective of stakeholders [25], or the cognitive focus [26].The natural resource-based view of the firm attempts to explain the development of competitive corporate advantages through the strategic handling of the firm's relationship with the environment.This theory suggests that given the increase in restrictions imposed by the natural (biophysical) environment, the organization's willingness to deal with such restrictions will determine the appearance of valuable, rare, and imperfectly imitable capabilities that will entail a superior economic and social result [42].This is why environmental proactivity can represent a source of sustainable competitive advantages [33,36] through cost reduction [43,44], product differentiation [45,46], or the creation of new business opportunities [47], which should impact positively on the economic results and reduce the environmental impact of companies by improving their environmental result.The authors Liu et al. [48] therefore performed a meta-analysis of sixty-eight studies which had been conducted in different countries and they reached the conclusion that environmental proactivity affects both the companies' economic and environmental results, although the strength of this impact varies according to the reference country as activity is developed with different regulations, stakeholder norms, and managerial mindsets.
The authors Judge and Douglas [49] confirm the relationship between environmental proactivity and environmental results and argue that it would be pointless otherwise since the ultimate objective and raison d'être of environmental management is to improve environmental results.The relationship between environmental proactivity and economic results, on the other hand, is not so obvious.Certain authors claim that the high costs associated with the implementation of environmental practices cancel out any possible competitive advantage that might be achieved, thereby discouraging firms from implementing them, and that economic results are unaffected [13,18] or negatively affected [19,50] by environmental proactivity.Others (e.g., [21,51]) indicate that it is the sector in which the firm operates that is responsible for this lack of consensus since environment-related practices and standards vary according to economic activity and so in many cases they are not comparable.
Very few studies have in fact been conducted in the wineries sector to identify the competitive advantage drivers in this industry in terms of environmental proactivity (e.g., [26,52]).These studies have focused on the factors leading to the adoption of an environmental management system (EMS) (e.g., [53,54]) or have examined subjects relating to consumer perceptions, brand image, or eco-labelling or eco-branding product differentiation strategies (e.g., [55,56]).Authors such as Atkin et al. [23] go even further and attempt to confirm links between the adoption of an EMS and entrepreneurship in wineries, but are unable to link this adoption with any cost reduction.One reason given by the authors for this is that although costs can easily and quickly be quantified, benefits are often long term and harder to measure.However, significant links were found with an increase in sales in new markets, an improvement in customer satisfaction, and an improvement in the corporate image.In this context, the following hypotheses are considered: H1: Environmental proactivity (EP) has a positive impact on the economic result of wineries (EcP).
H2: Environmental proactivity (EP) has a positive impact on the environmental results of wineries (EnP).
Finally, previous articles [49,57] established the positive relationship between environmental performance and economic results, although this could be affected by the existence of resources or capabilities in addition to the implemented environmental practices [46], by the way in which these practices are implemented [13], or by the phase of the economic cycle [58]. Generally speaking, any improvement in environmental performance in terms of minimizing resource consumption and pollutant emissions [12] results in cost savings. This supports the concept of eco-efficiency [46], which is taken to be the input ratio of resources used and waste generated in relation to the final product obtained [59]. The following hypothesis can therefore be formulated: H3: The environmental result (EnP) has a positive impact on the economic result of wineries (EcP).
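The eco-efficiency ratio invoked above lends itself to a very simple calculation. The sketch below is only an illustration of that input-to-output ratio with made-up winery figures; the function name, input categories, and numbers are ours and are not taken from the study or its questionnaire.

```python
# Minimal eco-efficiency sketch (illustrative figures only, not study data).
# Eco-efficiency here follows the ratio described in the text: resources used
# and waste generated relative to the final product obtained.

def eco_efficiency(resources_used, waste_generated, product_output):
    """Return (resources + waste) per unit of final product.

    Lower values mean the same output is obtained with fewer inputs
    and less waste, i.e., higher eco-efficiency.
    """
    return (resources_used + waste_generated) / product_output

# Hypothetical winery figures for two consecutive years (inputs normalised to
# comparable units; output in bottles produced).
year_1 = eco_efficiency(resources_used=120_000, waste_generated=15_000, product_output=90_000)
year_2 = eco_efficiency(resources_used=110_000, waste_generated=12_000, product_output=95_000)

print(f"Year 1: {year_1:.2f} input units per bottle")
print(f"Year 2: {year_2:.2f} input units per bottle")
print("Eco-efficiency improved" if year_2 < year_1 else "Eco-efficiency worsened")
```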
By way of summary, Figure 1 shows the research model proposed in line with the working hypotheses. By identifying the dimensions of the environmental proactivity construct [32,60], we analyse the effects of this on the economic and environmental results of wineries [49], and of the environmental results on the economic results [57].
Data Collection and Sample
The data used in this work are part of a wider research project and were gathered from computer-assisted telephone interviews (CATI).These enabled information to be collected while data was simultaneously recorded, coded, and cleaned.The questionnaire was designed by taking into account previously published work and the opinions of a panel of environmental experts comprising managers from the Wine and Vine Institute of Castilla-La Mancha (IVICAM) and academics.The questionnaire pre-test was conducted with one-on-one interviews with those responsible for environmental matters or, in their absence, the managers of ten local wineries.Some of the questions were subsequently modified, reordered, or rewritten to aid comprehension.The final questionnaire consisted of thirty-eight questions (see Appendix A) which were grouped into three sections.The first section was entitled winery descriptive information and its seven questions collected data on turnover, number of employees, number of partners and how many of these belong to the same family, the existence of any environmental management system, and how long the winery has been operating.The second group of questions analysed environmental proactivity with twelve questions relating to environmental planning and analysis practices, environmental responsibility and organization practices, and environmental management control practices.The third section included nineteen questions about corporate performance to examine the interviewee's perception of the winery's economic and environmental results.The fieldwork was conducted during November 2015 and each interview lasted an average of 12.45 min.Every participant received the questionnaire before the interview in order to increase participation in the study and to speed up the phone data gathering process.
The study population comprised every Spanish winery, and a stratified sample was selected by region from the information contained in the SABI database. The final sample comprised 312 Spanish wineries, which represents a sampling error of 3.5% for a confidence level of 95.5%.
The model hypotheses were tested using the structural equation modelling (SEM) methodology with the partial least squares (PLS) technique and SmartPLS 3 software [61]. This technique has been used in previous similar studies because of its ability to predict one or more dependent variables of a model with a limited theoretical base.
Measures
In order to measure the variables in the study, Likert-type 5-point scales were compiled to record the extent to which the interviewees agreed or disagreed with a series of statements, with 1 corresponding to "strongly disagree" or "not at all" and 5 to "strongly agree" or "to very great extent", depending on the question.
Environmental Proactivity
Previous research suggests that environmental proactivity is a second-order reflective multidimensional construct [41] due to its multifaceted nature, which is reflected in a multitude of different environmental practices [13,36]. In order to develop a measuring scale for environmental proactivity dimensions in wineries, we consulted both general and sector-specific literature. Following on, therefore, from Banerjee et al.'s [36] concept of corporate environmentalism and Sarkis's proactive environmental process [39], it was deemed appropriate to separate the environmental practices of wineries into three different dimensions: (1) Environmental planning and analysis (EPA): this comprises five items to evaluate the integration level of environmental concerns in the winery's strategic planning process. (2) Environmental responsibility and organization (ERO): this comprises three items and reflects the importance placed by the winery on the environment and the communication of environmental values to its members. (3) Environmental management control (EMC): adapted from Pondeville et al. [62], this dimension comprises four items relating to feature rules, standard operating procedures, and result controls.
Corporate Performance
In order to properly record the performance of wineries, economic and environmental results should be separated accordingly: (1) Economic performance: in this research work, a subjective measure has been chosen and those responsible for environmental issues in wineries were asked to evaluate the impact of implemented environmental practices on twelve items relating to economic performance in accordance with those proposed by Sellers-Rubio [63].(2) Environmental performance: in order to evaluate environmental performance, we chose a similar approach to the one adopted by Atienza-Sahuquillo and Barba-Sánchez [2].These authors used objective environmental performance measures such as levels of emissions, discharge, waste, or noise in addition to consumption of water, energy, or raw materials.
Results
This section details the results obtained for the proposed research model.A PLS model must be analysed and interpreted in two stages, although the structural measurement parameters are estimated in a single step [64].During the first stage, the measurement model is evaluated by analysing whether the theoretical concepts are correctly measured using the observable variables; in the second stage, the structural model is evaluated in terms of, for example, the magnitude and significance of relationships between the different variables.
Measurement Validation
The measurement model for reflective constructs is assessed in terms of individual item reliability, construct reliability, convergent validity, and discriminant validity. Individual indicator reliability is analysed using indicator loading values. In order for an indicator to be accepted as part of a construct, its loading should be ≥0.707, although Hair et al. [65] believe that indicator loadings of between 0.4 and 0.7 could remain if this helped improve content validity. In our case, the following items were removed from the model: epa5 of the environmental planning and analysis construct; ecp1, ecp3, ecp4, ecp11, and ecp12 of the economic performance construct; and enp6 and enp7 of the environmental performance construct.
Construct reliability is examined using Cronbach's alpha and the composite reliability (CR) index, and the results are shown in Table 1. In both cases, all construct values are greater than the critical value of 0.7, although Nunnally [66] suggests the stricter criterion of being equal to or greater than 0.8, a condition satisfied by this research except in the case of the environmental responsibility and organization (ERO) construct, where the value is 0.717. Convergent validity was assessed using the average variance extracted (AVE), which implies that a set of indicators represents a single construct when the value is greater than 0.5 [67], a condition satisfied by every construct used in this research work (see Table 1).
Finally, in terms of the discriminant validity, one of the most accepted methods in PLS consists of comparing the square root of the AVE of each construct with the correlations of each construct with the others [68]. Table 2 presents the results obtained, and these enable us to confirm the existence of discriminant validity between constructs, since the diagonal elements should be larger than the off-diagonal elements. In the case of correlation between the superordinate environmental proactivity (EP) construct and its dimensions (EPA, ERO, EMC), the values are high precisely because it is a reflective second-order construct with first-order constructs which are also reflective [69,70].
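To make the reliability and validity figures reported in Tables 1 and 2 easier to follow, the sketch below shows how Cronbach's alpha, composite reliability (CR), average variance extracted (AVE), and the Fornell-Larcker comparison are computed from standardized loadings and item scores. It is a generic numpy sketch with invented loadings and simulated responses, not the study's data or the SmartPLS implementation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (n_observations, k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / [(sum lambda)^2 + sum(1 - lambda^2)] for standardized loadings."""
    squared_sum = loadings.sum() ** 2
    error_variance = (1.0 - loadings ** 2).sum()
    return squared_sum / (squared_sum + error_variance)

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())

# Hypothetical standardized loadings for two constructs (placeholder values).
epa_loadings = np.array([0.82, 0.78, 0.75, 0.80])   # e.g., a 4-item EPA-like construct
ero_loadings = np.array([0.74, 0.71, 0.79])         # e.g., a 3-item ERO-like construct

print("CR(EPA-like)  =", round(composite_reliability(epa_loadings), 3))
print("AVE(EPA-like) =", round(ave(epa_loadings), 3))
print("CR(ERO-like)  =", round(composite_reliability(ero_loadings), 3))
print("AVE(ERO-like) =", round(ave(ero_loadings), 3))

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its correlation
# with every other construct (a single hypothetical inter-construct correlation here).
inter_construct_corr = 0.46
discriminant_ok = (np.sqrt(ave(epa_loadings)) > inter_construct_corr
                   and np.sqrt(ave(ero_loadings)) > inter_construct_corr)
print("Fornell-Larcker satisfied:", discriminant_ok)

# Cronbach's alpha demo on simulated 5-point Likert responses to a 4-item scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = np.clip(np.rint(3 + 0.8 * latent + 0.7 * rng.normal(size=(200, 4))), 1, 5)
print("Cronbach's alpha (simulated):", round(cronbach_alpha(items), 3))
```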
Structural Validation
The main causal relationship being contrasted is how a superordinate construct (EP) determines two latent reflective variables (EcP and EnP).The structural model is assessed using the significance of the path coefficients, by observing the explained variance values (R2) of the dependent variables and the standardized root mean square residual (SRMR).
The Bootstrap resampling technique was used to determine the statistical significance of the path coefficients, and this generated 5000 alternative samples from the original data matrix. The parameters for each of these subsamples were re-estimated with PLS, and the t-value was used to contrast the accuracy of these estimates. The results are shown in Table 3 and these confirm the significance of every path coefficient. Although the effect regarding the relationship between the environmental performance and the economic performance constructs is not as strong as the others (significant at the 0.05 level), all of the proposed hypotheses can be accepted a priori. Table 4 shows the effects of the explicative latent variables on the endogenous latent variables. Since the superordinate environmental proactivity (EP) construct can predict or explain more than 50% of all of its dimensions, in Chin's terminology the amount of construct variance that is explained by the model is substantial [5]. However, for the endogenous constructs relating to the result, both for the economic (EcP) and environmental (EnP) constructs, the R2 values are around 10%, which is the recommended minimum [6]. Finally, SRMR is a goodness of model fit measure for PLS [71], which answers the question of whether the correlation matrix implied by our model is sufficiently similar to the empirical correlation matrix. In this case, the SRMR Composite Model is 0.079 (≤0.08), which means that it is adequate.
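The bootstrap step just described can be illustrated as follows. This is a schematic percentile bootstrap of a single path coefficient, approximated here by a standardized OLS slope between simulated construct scores; it mimics the 5000 resamples and the sample size of 312 reported above, but it is not the SmartPLS routine and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated construct scores: a weak positive effect of EP on EcP (illustrative only).
n = 312                              # matches the sample size reported in the text
ep = rng.normal(size=n)
ecp = 0.3 * ep + rng.normal(scale=1.0, size=n)

def path_coefficient(x, y):
    """Standardized OLS slope, used here as a stand-in for a PLS path coefficient."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

original = path_coefficient(ep, ecp)

# Percentile bootstrap with 5000 resamples of the original observations.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    boot.append(path_coefficient(ep[idx], ecp[idx]))
boot = np.array(boot)

t_value = original / boot.std(ddof=1)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"path = {original:.3f}, bootstrap t = {t_value:.2f}, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
print("significant at 5%" if ci_low > 0 or ci_high < 0 else "not significant at 5%")
```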
Figure 2 summarizes the final estimated model. It is possible to observe how the EP is a multidimensional construct, as suggested by specialist literature [13,16], which affects corporate results both on an economic (EcP) and environmental level (EnP). Moreover, environmental results also have a certain influence on economic results.
Discussion
Although the results confirm the multidimensional nature of the EP construct in line with many previous studies [14,32], the dimensions identified reveal certain peculiarities.The authors Henriques and Sadorsky [51] or Lazaro et al. [72] believe that this diversity in environmental proactivity dimensions undoubtedly stems from the different circumstances that surround the environmental problem in different countries and sectors of activity, which suggests that the relationships and implications between environmental parameters are not universal but rather should be studied in different contexts.As such, in the Spanish context of wineries, EP is understood to be a strategy of integration of the environmental variable in all business areas [63] and three EP dimensions are considered [26]: environmental planning and analysis (EPA), which consists of such things as designing a defined and formal environmental policy, identifying and evaluating environmental impacts, or defining environmental objectives in the sphere of the firm's strategic plan; environmental responsibility and organization (ERO), which concerns the involvement and commitment of all organization members and their participation through suggestions or specific work teams; and environmental management control (EMC), which includes the so-called management control for sustainability [70] or environmental management control systems (EMCS) [61] (i.e., the existence of formal procedures that gather, review, and audit the environmental impact, waste, and consumption reduction programs and environmental risks).The relationships established between EP and its different dimensions are significant and positive and this explains the high variance percentages (78.6%, 57%, and 74%, respectively).There is therefore enough evidence to conclude that environmental proactivity is reflected in the environmental practices adopted by the company.
Having identified EP dimensions, the question was whether this had a positive impact on the firm's economic results, as indicated by most reference literature [13,46], either directly or indirectly through an improvement in environmental results and a reduction in consumption or environmental impacts.In the case of wineries, EP has a positive and significant effect on perceived corporate performance in line with the results obtained by Atkin et al. [23].The first hypothesis is therefore accepted despite the very low explained variance (11.6%).Unsurprisingly, other factors do exist that determine and explain the economic results of wineries, such as label and designation of origin [27].The importance of these results stems from the fact that wineries have traditionally ignored environmental concerns which they consider to be irrelevant since this is not one of most polluting sectors [73].There is, however, evidence to suggest that such an approach is wrong.According to literature, the reasons for why this relationship exists include eco-efficiency and eco-innovation [6].
In terms of eco-efficiency, this research has demonstrated that EP also has a positive and significant impact on environmental results by reducing resource consumption and waste generation, and so the second hypothesis is accepted.However, environmental proactivity's contribution to the improvement of these environmental results is scarce, with low explained variance (9.9%), and this supports the conclusion that other factors exist in addition to environmental proactivity that could affect their improvement such as, for example, pressure from stakeholders or stricter legislation.According to González-Benito and González-Benito [32], the practices to change environmental behaviour must emanate from within the firm itself.Such practices include those that focus on the eco-design of products and productive processes, which are in turn the least visible for the socio-economic environment.Additionally, the implementation of standard ISO14001 on an operational level merely requires compliance with current legislation and Pomarici et al. [74] believe that this could involve limited environmental commitment to protecting the natural environment.It is therefore necessary to explore in greater depth the additional causes which lead wineries to reduce their consumption and environmental impact, and future research should be conducted and quantitative studies be supplemented with qualitative ones to enable new impact factors to be identified.
In terms of the last hypothesis proposed, the results indicate that the environmental result has a positive and significant effect on the economic result of wineries, which leads us to accept this third hypothesis (H3) despite the low explained variance (3.19%). Authors such as Junquera and Del Brío [34] or Bansal and Roth [75] point out that economic motivation would not be the main reason why a firm would choose environmental proactivity, but rather that variables such as the environmental attitude, expectations, and motivations of managers would be key to explaining corporate environmental proactivity. Future research should investigate these associations and delve deeper into possible moderating and mediating effects.
In short, this research supports the conclusion that the environment can also be a differentiating factor by creating competitive advantages that improve the competitiveness of wineries through income generation as a result of cost reduction or better market perception, for example, although there are many unknown factors to be addressed in this research context.
Conclusions
In an increasingly competitive and more socially responsible setting, the generation of competitive advantages associated with sustainability is paramount for the survival of firms [6] in general and wineries in particular [73].Sustainability is the path to finding economic, ecological, and social balance, resulting in prosperity and the capitalization of new resources [7].The wineries under analysis maintained their sustainability commitments despite the economic downturn.
This research focuses on the environmental question and identifies three environmental proactivity dimensions in wineries (EPA, ERO, and EMC) and their possible relationship with corporate performance.Although the topic of economic profitability of environmental proactivity is still being discussed in environmental literature [12,19], the results obtained in this research reveal that a direct and significant relationship does exist and that this has clear management implications.This is because it could provide wineries with competitive advantages as a result of cost reduction or product differentiation thanks to eco-labelling, for example, and through an improvement in the firm's reputation [26].Consumers might consider products to be unique or innovative if they are produced in a sustainable and environmentally-friendly way [73].Previous studies, however, reveal that the personal environmental values of those responsible are also important for environmental proactivity in this sector [22].It is therefore also important to analyse the relationship between this environmental proactivity and environmental results.
According to the outcomes, not only does EP reduce resource consumption and waste generation, thereby minimizing the environmental impact of wineries, but these environmental results also have a positive impact on perceived corporate performance.Future research should therefore analyse the mediating effect that environmental results have on the relationship between EP and economic results.In addition to this indirect effect, it would also be interesting to analyse the effect that other variables such as the pressure exercised by stakeholders [38] or the perception of directives relating to the environment [30] have on environmental proactivity and their relationship with economic results in order to increase the explained variance percentage, one of the main limitations of this work.
Another limitation relates to result generalization, since this research work is based on a cross-sectional sample limited to a single country. Future research might also focus on other national contexts in order to enable a comparison to be made between Spanish wineries and those of benchmark European countries [72], such as France and Italy, although it would be necessary to consider the environmental regulatory differences between nations.
Although this work mainly focuses on the environmental proactivity of wineries, it would also be interesting to discover consumer opinion to analyse the impact that good corporate environmental practices have on the market.In this respect, the perception of wineries is not particularly positive [52] and the firm needs to adopt a more aggressive environmental education and communication strategy in order to reap the benefits in terms of higher sales in the medium-and long-term.Future research is needed to determine the longitudinal impacts of this environmental education and consumer awareness.
In terms of the implications of this study for practitioners working in the sector, it is possible to draw an interesting conclusion and that is environmental proactivity could be a source of competitive advantage since it directly affects a firm's performance and can improve efficiency and competitiveness.It does this either internally by reducing costs and promoting innovation, or externally by building and reinforcing the brand.It is evident that being a green winery helps to cement customer brand loyalty and generate new customers.Thanks to environmental proactivity, wineries can therefore develop new strategies to survive economic downturns and achieve better market positioning.
By way of conclusion, it is possible to extract useful data from the final analysis model regarding the global and specific environmental benefits of environmentally proactive wineries in terms of costs, differentiation, reputation, and sustainability for the stakeholders and policy-makers interested in improving the overall sustainability of the wine industry.

Appendix A. Questionnaire (excerpt)

c. We constantly identify and evaluate new environmental aspects in terms of their impact.
d. We provide our suppliers with a detailed, written list of environmental requirements.
e. We have conducted a life cycle analysis of the main products manufactured in this company.
f. Each and every one in the winery is responsible for environmental performance.
g. Employee suggestions are an excellent source of ideas for improving the environmental result.
h. Formal work teams are used to identify environmental problems and opportunities and to develop solutions.
i. The environmental impact of operations is formally reviewed at least once a year.
j. Formal procedures exist to examine the environmental implications of new investments.
k. An annual audit of waste reduction programmes and their results is conducted in all production areas.
l. An annual audit of the environmental risks of existing production processes is conducted in all production areas.

3. Corporate performance: Assess the extent that the implemented environmental practices have had on the following questions in terms of your winery (1 corresponding to "not at all" and 5 to "to a very great extent").
Figure 1. Research model and hypotheses. Notes: EPA: Environmental planning and analysis; ERO: Environmental responsibility and organization; EMC: Environmental management control; EP: Environmental proactivity; EcP: Economic performance; EnP: Environmental performance.
Table 4. Effects on endogenous variables.
Cosmological models with the energy density of random fluctuations and the Hubble-constant problem
First the fluctuation energy is derived from the adiabatic random fluctuations due to the second-order perturbation theory, and the evolutionary relation for it is expressed in the form of rho_f = rho_f (rho), where rho and rho_f are the densities of ordinary dust and the fluctuation energy, respectively. The pressureless matter as a constituent of the universe at the later stage is assumed to consist of ordinary dust and the fluctuation energy. Next, cosmological models including the fluctuation energy as a kind of dark matter are derived using the above relation, and it is found that the Hubble parameter and the other model parameters in the derived models can be consistent with the recent observational values. Moreover, the perturbations of rho and rho_f are studied.
Introduction
At the later stage of the universe, the main constituent is considered to be a pressureless matter consisting of ordinary dust. It is well known that the universe has random fluctuations in its density which were caused by quantum fluctuations at the early stage [1][2][3][4][5], and their amplitude and spectrum have been studied through precise measurements of fluctuations in the cosmic microwave background radiation (CMB) by the WMAP [6] and Planck [7,8] collaborations. However, the mean energy density corresponding to the fluctuations has not been derived, and so their dynamical influence on the universe has also not been clarified yet.
In a previous paper [9], we tried to derive the energy density of random fluctuations using the general-relativistic second-order nonlinear perturbation theory [10,11], in which the random density fluctuations are given as the first-order density perturbations with the specified spectrum, and the homogeneous energy density ρ f was derived as the (spatially) averaged value of the second-order density perturbations. Moreover, the corresponding second-order metric perturbations and its spatial average were also derived. By adding the contribution of second-order homogeneous perturbations to the background model parameters, we renormalized the model parameters from the background ones to modified ones. As a result of this procedure, we found the possibility of solving the Hubble-constant problem, in which the contradiction between the measured Hubble constant and the background Hubble constant was shown [7,8,[12][13][14][15][16][17]. In the previous paper, it was found that the renormalized Hubble constant can become nearly equal to the measured Hubble constants.
In this paper, we treat the fluctuation energy as a kind of dark matter and construct cosmological models involving it as part of the constituent. In Sect.2 we express the fluctuation energy density ρ f as a function of the ordinary dust density ρ, using the result of calculations in the second-order perturbation theory in the basic background models. In Sect.3, we derive cosmological flat models including pressureless matter whose density is the sum of the densities of ordinary dust (ρ) and the fluctuation energy (ρ f ). The revised model parameters in these models are compared with those in the basic models without the fluctuation energy. In Sect. 4, we discuss the perturbations in the models with the fluctuation energy. In Sect.5, we give some concluding remarks. In Appendix A, the formula of the fluctuation energy is shown.
Evolutionary relation for the fluctuation energy
First, to derive the fluctuation energy, we assume two basic background models (Model 1 and Model 2) with and where ρ b is the density of ordinary dust in the basic background models, H b 0 is the Hubble parameter H b at the present epoch t b 0 , and 8πG = c = 1. In the previous paper [9], only Model 1 was taken as the background model. Here we also consider Model 2 for reference. The Hubble parameter H b satisfies Using the transfer function (BBKS) for cold dark matter adiabatic fluctuations [5], we derived the second-order density perturbations δ 2 ρ, and the spatial average δ 2 ρ as a function of the cosmic time t b in the previous paper. The formula for δ 2 ρ is shown in Appendix A. The latter is represented here as the fluctuation energy ρ f (≡ δ 2 ρ ). In this paper, we eliminate t b from ρ f and ρ b , and represent ρ f as the evolutionary function (ρ f (ρ b )) of ρ b . Moreover, the ratio of their values is expressed as The value β at ρ b → ∞ vanishes and the present values are for (0.24) −1 ≥ 1/u ≥ 0.9 (1 ≥ a b ≥ 0.6), and for 0.900 ≥ 1/u ≥ 0 (0.6 ≥ a b ≥ 0).
Cosmological models with the fluctuation energy and the model parameters
To derive a spatially flat model with the fluctuation energy, we consider the line element where the Greek and Roman letters denote 0, 1, 2, 3 and 1, 2, 3, respectively. The conformal time η(= x 0 ) is related to the cosmic time t by dt = a(η)dη.
In this paper, the fluctuation energy is regarded as a kind of dark matter, and is assumed to move together with ordinary dust. Then the velocity vector and energy-momentum tensor of pressureless matter are expressed in comoving coordinates as and with ρ T ≡ ρ + ρ f , where ρ T , ρ, and ρ f are the total density of pressureless matter, the ordinary dust density, and the fluctuation energy density, respectively, and we assume as the approximate equation of state for the fluctuation energy, where the function β(ρ) is specified by Eq.(5) with Figs. 1 and 2, and Eqs.(7) ∼ (10). From the Einstein equations, we obtain and the energy-momentum conservation (T µν ;ν = 0) gives the relation where a = 1 at the present epoch (t = t 0 ) and a prime denotes ∂/∂η. In the previous paper [9], the renormalization of the Hubble constant was done using the spatial average of the second-order metric perturbation. In this paper, the Hubble parameter is derived only through considering the fluctuation energy ρ f as part of the total energy. Then the Hubble parameter H(≡ȧ/a = a ′ /a 2 ) satisfies and we have the relations for the model parameters and where H 0 is H at the present epoch (t 0 ). This model reduces to the basic background models in Sect. 2 in the limit a → 0, because ρ f /ρ → 0.
From Eqs. (15) and (18), the equation for a is and a(t) is determined by specifying Ω M , Ω Λ , and H 0 , and solving this equation. Now let us derive the model parameters (Ω M , Ω Λ , H 0 ) in the present model as the function of (Ω b M , Ω b Λ , H b 0 ) in the basic models. Here the Hubble parameters are represented by H and H b at epochs with scale factors a and a b , respectively, and their ratio α is expressed as using Eqs. (4) and (17). This equation is rewritten as At the present epoch with a = a b = 1, we have where (α 0 , β 0 ) is the present counterpart of (α, β) and Here we express Ω M d , Ω M f , and Ω Λ in terms of Ω b M and Ω b Λ . Using Eq.(23), we obtain Using Eq.(24), moreover, we obtain For the density parameter of the pressureless matter Ω M (≡ Ω M d + Ω M f ), we have and for ordinary dust, we have Here we consider the correspondence between the ordinary dust density in the model with ρ f = 0 and that in the basic model (ρ f = 0), so that we may clarify the additional effect of the fluctuation energy. First we take the correspondence in which the present densities of ordinary dust are equal, i.e., Then we obtain X = 1 + β 0 and from Eq.(23). For inserting β 0 ≡ β(ρ 0 ) and the model parameters of the two basic models, therefore, we obtain for Model 2.
For the present ordinary dust density ratio (ρ/ρ b ) 0 which is not equal to 1, we have X = (ρ/ρ b ) 0 (1 + β 0 ) from Eq.(24), and, using Eq.(23) for α 0 , we obtain the following parameters Next let us study the behaviors of models in the past in comparison with the basic models.
where β(ρ) is given by Eqs. (7) -(10), and To evaluate α in the past for (ρ/ρ b ) 0 = 1, we take the correspondence between a and a b , in such a way that ρ/ρ b = 1 also in the past. Then we have so that β → 0 and α → 1 for a → 0. To evaluate α in the past for (ρ/ρ b ) 0 ≠ 1, we take the correspondence in such a way that Then we find that ρ/ρ b → 1 and β → 0 for a → 0, and from Eq.(37) that α → 1 for a → 0. The a dependences of 1/u, β and α in the case of (ρ/ρ b ) 0 = 1 for Model 2 (with the model parameter (33)) are shown in Figs. 3, 4 and 5, respectively, where u ≡ ρ/[3(H b 0 ) 2 ]. At the early stage with a < 0.6, the role of ρ f is effective and α increases with a, but at the later stage with 1.0 > a > 0.6, Λ is dominant and α decreases slowly after a peak.
The a dependences of 1/u, β and α in the case of (ρ/ρ b ) 0 ≠ 1 are also found to be similar to those in the case of (ρ/ρ b ) 0 = 1, owing to the above correspondence. Here the a dependence of α in the case of (ρ/ρ b ) 0 = 1 for Model 1 (with the model parameter (34)) is shown in Fig. 6.
Moreover, let us define the time-dependent model parameters Ω M (t) and Ω b M (t) (representing those in the past) by Then Ω M = Ω M (t 0 ) and Ω b M = Ω b M (t 0 ), and we have the ratio This ratio tends to 1 for a → 0. The a dependence of Ω M (t)/Ω b M (t) is shown in Fig. 7 for the model parameter (33).
It is concluded, therefore, that at the later stage the models with the fluctuation energy can have a Hubble constant (H 0 = 73 ∼ 74 km s −1 Mpc −1 ) larger than that (H b 0 = 67.3 km s −1 Mpc −1 ) in the basic models, while, at the early stage with large densities, both models have the same Hubble constants (in such a way that H/H b → 1 for a → 0). This shows that the Hubble-constant problem [7,8,12,13,[15][16][17] can be solved by taking the fluctuation energy into account.
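Before turning to perturbations, the qualitative content of this section can be illustrated numerically. Because the explicit expressions of the numbered equations are not reproduced in this excerpt, the following sketch only mimics the procedure described in the text: comparing the Hubble parameter of a flat model whose pressureless component is ρ(1 + β) with that of a basic model with β = 0, under the illustrative assumptions that the ordinary dust density scales as a^-3, that Λ is common to both models, and that the basic-model density parameters are Planck-like (Ω_M ≈ 0.315, Ω_Λ ≈ 0.685). The functional form and normalization chosen for β are placeholders, not the β obtained from the second-order perturbation calculation.

```python
import numpy as np

# Placeholder for beta(rho) = rho_f / rho: negligible at high density (early epochs)
# and of order 0.6 at the present density, mimicking the qualitative behaviour
# described in the text. The actual beta comes from the second-order perturbation
# integrals (Appendix A) and is not reproduced here.
def beta(rho_over_rho0, beta0=0.6, p=0.4):
    return beta0 * rho_over_rho0 ** (-p)

def hubble_ratio(a, omega_m=0.315, omega_lambda=0.685):
    """alpha = H / H_b at scale factor a, assuming (for illustration) that the
    ordinary dust density follows rho = rho_0 a^-3 in both models and that Lambda
    is common, so that H^2 ∝ Omega_M a^-3 (1 + beta) + Omega_Lambda in the model
    with fluctuation energy and the same expression with beta = 0 in the basic model."""
    rho_ratio = a ** -3                      # rho / rho_0
    b = beta(rho_ratio)
    h2_fluct = omega_m * rho_ratio * (1.0 + b) + omega_lambda
    h2_basic = omega_m * rho_ratio + omega_lambda
    return np.sqrt(h2_fluct / h2_basic)

h_b0 = 67.3  # km/s/Mpc, the basic-model Hubble constant quoted in the text
for a in (0.05, 0.3, 0.6, 1.0):
    print(f"a = {a:4.2f}:  alpha = H/H_b = {hubble_ratio(a):.3f}")
print(f"Implied present H0 ~ {hubble_ratio(1.0) * h_b0:.1f} km/s/Mpc (illustrative only)")
```

With these placeholder choices, α stays close to 1 at small a and reaches roughly 1.09 at a = 1, which is only intended to reproduce the qualitative trend described above, not the quantitative result of the paper.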
Perturbations in cosmological models with the fluctuation energy
The behaviors of linear perturbations in the cosmological models with pressureless matter are well-known and expressed using the gauge-invariant treatment [18,19].
Here we assume that the accurate background model has been obtained and consider the perturbations to it. The gauge-invariant density perturbation ǫ T for the total density ρ T (≡ ρ + ρ f ) satisfies the equation The evolutionary relation for the fluctuation energy is assumed to hold in the weak inhomogeneities. Then the gauge-invariant density perturbation ǫ for ordinary dust satisfies and the perturbation ǫ f of the fluctuation energy is expressed as and At the early stage of β ≪ 1, ǫ ≃ ǫ T and ǫ f ≪ ǫ, but at the later stage of β ∼ 1, ǫ and ǫ f are comparable.
Concluding remarks
The existence of random fluctuations is beyond doubt and their amplitudes are also well-known [1][2][3][4][5]. We must take their energy (the fluctuation energy) into account, to clarify the dynamical evolution of the universe. This paper is the first step to considering it as a kind of dark matter. At the stage of a ≪ 1, the fluctuation energy ρ f is negligibly small compared with the density ρ of ordinary dust, but at the present epoch it occupies about 36 ∼ 41% of the total density of the pressureless matter, depending on the basic models. The fluctuation energy was considered in this paper as part of the dark matter, which cannot be touched but contributes to the formation and evolution of astronomical objects at the later stage. The essential difference between the model with the fluctuation energy and the basic models is the large quantitative change in the dark matter.
In this paper we tentatively adopted Model 1 and Model 2 as the basic model, to derive the fluctuation energy using the second-order perturbation theory. The derived model parameters depend sensitively on their basic model parameters, the present ordinary dust density ratio (ρ/ρ b ) 0 , and the upper limit x max for the integrations A and B (in Appendix A). Therefore, they should be selected, so that the derived model parameters may be fitted as well as possible with the observational ones.
In the previous paper [9], we took the effect of fluctuation energy into account, by renormalizing the model parameters of a basic background model due to adding the second-order density and metric perturbations to the background quantities. That method is different from the present one in which the cosmological models are constructed by taking the fluctuation energy into account as part of pressureless matter. However, we could obtain similar model parameters that are consistent with their observational values. The accuracy for the second-order perturbations ρ f (≡ δ 2 ρ) is good at the early stage of the universe, because β ≡ ρ f /ρ ≪ 1, but it becomes worse with the expansion of the universe. At the present epoch, β is still smaller than 1, but not so small, i.e. 0.552 and 0.685 for the two basic models as Eq. (6) shows. So, to derive a more accurate model at the stage of a ≃ 1, we should correct β(x) in Eq.(7) -Eq.(10), by constructing the higher-order general-relativistic perturbation theories.
The contributions of the super-sample modes (i.e. the large-scale modes longer than the survey scales) to the mean density fluctuations and the power spectrum in the finite-volume survey have recently been studied by several authors. [20,21] They are not equal to the backreaction of long-wavelength random fluctuations, but they may be closely connected with it, and so with the present analyses. If so, the general-relativistic second-order perturbations, or the nonlinear perturbations in the post-Newtonian approximation, may play important roles also in their treatments (in a similar way to our treatment in the previous paper [9]). This is because the large-scale modes cross the Hubble-scale length during their evolution from the very early stage to the present epoch. [22]

Acknowledgement
The author thanks the referee for helpful comments.

A. Second-order density perturbations corresponding to the first-order random fluctuations
In Sect. 3 of the previous paper [9], we obtained the formula for the spatial average of the second-order density perturbations in the basic models. It is expressed as where P R0 = 2.2 × 10 −9 , ρ̄ = ρ + Λ, H b 0 = 100h, and K eq ≡ k eq /H b 0 = 219(Ω b M h). The definitions of Y (a) and Z(a) are found in the previous paper [9].
Distribution of hemoglobinopathies in patients presenting for electrophoresis and comparison of result with High performance liquid chromatography
Correspondence: Dr. Runa Jha, MBBS, MD, Department of Pathology, Tribhuvan University Teaching Hospital, Maharajgunj, Kathmandu, Nepal. Email: runa75jha@yahoo.com

Background: Nearly 226 million carriers of thalassemias and abnormal hemoglobin are present worldwide according to the World Health Organization (WHO). The laboratory plays an important role in the investigation of the thalassemias and hemoglobinopathies. Cellulose acetate electrophoresis at alkaline pH, with diagnosis based mainly on the visual impression of band thickness, may miss thalassemia trait patients. The aim of this study was to find out the different hemoglobinopathies and thalassemias presenting in our hospital and to compare electrophoresis results with HPLC.
INTRODUCTION
The World Health Organisation (WHO) reports that the frequency of thalassemia and hemoglobinopathy carriers is 5.1%, with nearly 226 million carriers worldwide. 1 For the most severe cases, the only curative treatment is bone marrow transplantation with a human leukocyte antigen (HLA) compatible sibling. For the majority of patients, therefore, treatment remains supportive and consists of lifelong transfusion/chelation and management of acute and progressive organ damage. 2 Thus the management of these diseases poses a significant burden on the healthcare system and family.
In several countries, there are screening programs with the aim of identifying carriers of hemoglobin disorders in order to assess the risk of a couple having a severely affected child and to provide information on the options available to avoid such an eventuality. Nepal does not have any such programme and there has been no population-based study to find out the prevalence of thalassemia and hemoglobinopathy in Nepal. However, there are some published data about such disorders in the Nepalese population. 3,4 The aim of this study was to find out the different hemoglobinopathies and thalassemias presenting in our hospital and to compare electrophoresis results with HPLC.
MATERIAL AND METHODS
This study was performed in the Hematopathology section of the Department of Pathology of Tribhuvan University Teaching Hospital (TUTH) on cases sent for electrophoresis during an 18-month period from October 2013 to March 2015, with the aim of identifying the different types of hemoglobinopathies and thalassemias presenting to TUTH, the ethnicity and hemogram findings of such patients, and of comparing electrophoresis and high performance liquid chromatography results. All electrophoresis performed in the department during the study period was evaluated. Before electrophoresis was performed, a complete blood count (CBC) by automated hematology analyser and a peripheral smear examination were performed on all cases. Our laboratory only has the facility of cellulose acetate electrophoresis at alkaline pH, and our diagnosis is based mainly on the visual impression of the thickness of the band seen. Some cases in which the hemogram findings were suspicious of thalassemia trait but electrophoresis did not show a prominent band at the A2 position were also sent for high performance liquid chromatography (HPLC). Some other cases were also randomly selected and sent for HPLC. The comparison of electrophoresis and HPLC results is shown in Table 4. Electrophoresis was not able to detect 7 cases of beta thalassemia trait. The figure may be higher because not all such cases were sent for HPLC.
There was an eight-month-old child with anemia and a microcytic hypochromic blood picture who had strong bands at the HbA2 and HbF positions; HbF was 40% and the band at the HbA position was faint. This was considered compound heterozygous HbE-beta thalassemia by electrophoresis, as the hemogram findings and family study were suggestive. One of the parents had HbE trait and the other had beta thalassemia trait. However, no opinion was made on HPLC, possibly in view of the age of the patient. HbA here was 5.8% and the hemoglobin eluting at the HbA2 position was 46.8% on HPLC in this case. The Hb J variant showed a peak on HPLC; however, an impression of HbH was made by the presence of a peak before HbF. HbH can be seen better as a fast-moving band in electrophoresis and by HbH inclusions on supravital staining. In electrophoresis of the Hb M case, a strong band was seen at the HbF position, but the HbF percentage calculated by the alkali denaturation method was only 1%. So a conclusion was drawn that it was a band comigrating with HbF.
The percentages of different hemoglobins detected by HPLC in various conditions are given in Table 6. This includes only cases in whom HPLC findings were available. The mean HbF level for the different abnormal hemoglobins is shown in Table 5. The highest mean HbF was seen in thalassemia major (81.9%), followed by compound heterozygous HbE-beta thalassemia.
DISCUSSION
The prevalence of thalassemias and hemoglobinopathies varies with geographic location. In Southeast Asia, α-thalassaemia, β-thalassaemia, hemoglobin (Hb) E and Hb Constant Spring (CS) are prevalent. Hb E is the hallmark of Southeast Asia, attaining a frequency of 50-60 per cent at the junction of Thailand, Laos and Cambodia. Hb CS gene frequencies vary between 1 and 8 per cent. 5 The disorders of Hb frequently encountered in India include beta thalassemia, HbE-beta thalassemia, HbE, HbD and sickle cell anemia. In the study by Mondal et al, beta thalassemia trait was the most common abnormality found, followed by HbE trait, then E-beta thalassemia, followed by beta thalassemia major/intermedia. Other variants detected included sickle cell trait, HbE disease, sickle cell disease, sickle β thalassemia, HbD-Punjab trait, double heterozygous state of HbS and HbE, double heterozygous state of HbS and HbD, Hb Lepore, HbJ-Meerut and HbH. 6 In the study of Goswami et al, it was found that HbE trait was the most common hemoglobinopathy (34.4%), followed by homozygous E (25.3%), beta-thalassemia trait (17.8%), E-β-thalassemia (15.1%), β-thalassemia major (1.5%), sickle cell-β-thalassemia (3.4%), and sickle cell trait. 7 A study done by Mehandi et al in the Saudi population found beta thalassemia trait to be the most common hemoglobinopathy detected, followed by sickle cell trait and sickle cell alpha thalassemia trait. The Hb variants E and D, which are more prevalent in Southeast Asia, were rarely found among Saudis. 8 In the study done by Patel U et al in the population of Gujarat, beta thalassemia trait was the most common hemoglobinopathy, followed by thalassemia major, sickle cell anemia and sickle cell trait. 9 In our study, beta thalassemia trait was most common, followed by sickle cell anemia and then different variants of alpha thalassemia.
Other variants detected included compound heterozygous for HbE beta Thalassemia, thalassemia major, sickle cell beta Thalassemia, Hb E trait, and one case each of delta beta thalassemia, HbD and HbM.
HbE occurs at an extremely high frequency in many countries in Asia. Because there is also a high frequency of different beta-thalassemia alleles in these populations, the coinheritance of HbE and beta thalassemia, HbE-beta thalassemia, occurs very frequently. 10 In our study also, 9.3% of the abnormal hemoglobins were HbE-beta thalassemia. Although molecular analysis was not done in these cases, the diagnosis was made by a combination of electrophoresis findings and screening of the parents. These patients had an absent HbA band and increased HbF and HbA2. One parent of these patients had beta thalassemia trait and the other had HbE trait. These patients had lower mean hemoglobin and red cell indices than HbE homozygous and HbE trait patients. Since this study selected cases with abnormal electrophoresis findings, this may be the reason for the low number of HbE homozygous and HbE trait cases. Since these patients are asymptomatic, they may not have presented to hospital or may not have been referred for electrophoresis. The HbE trait cases included in our study were also asymptomatic patients, their electrophoresis being run as part of family screening of patients having abnormal electrophoresis.
Fifty-three percent were male and 47% were female in the study of Manan et al. 11 Similar data were also found earlier by Yagnik and Balgir, who reported 65.5, 56 and 62.1% male patients, respectively. 12,13 In our study also, 66% of patients were males. As suggested by Manan et al, this might be due to the gender bias among the parents of these ill children, who seek medical care and are ready to spend more for their male children only. 11 Certain communities in India like Sindhis, Gujratis, Punjabis, and Bengalis are more commonly affected with beta thalassemia, the incidence varying from 1 to 17%. Some population groups from the north eastern regions have a high prevalence of HbE. 14 In the study of Goswami et al, the occurrence of hemoglobinopathies was highest (72.1%) among Rajbanshis, followed by Muslims (54.9%). In tribes like "Santal" and "Oraowo" and in Bengali Hindu and Marwari/Behari groups an approximately equal percentage (34%) was observed, while the lowest belonged to mongoloid groups like "Nepalis" and other 'Hill men' populations (17.5%). 15 In our study also, the maximum number of patients belonged to the Tharu community (37.1%). Although abnormal hemoglobins were also found in a variable number of other castes, most of them belonged to the Terai region. A study done in Nepal on sickle cell anemia by Shrestha A and Karki S also showed that sickle cell anemia was most common in the Tharu community. 5

In the study of Mehdi SR et al, MCV and MCH were significantly low (P < 0.001) in cases of thalassemias presenting a microcytic hypochromic picture on peripheral blood smear; however, these values were within the normal limits in sickle cell disorders. The red cell count was increased in cases of thalassemias while it was not much affected in sickle cell disorders. The indices were lower in sickle cell α thalassemia trait. 9 In another study, Mehdi et al also concluded that a moderate degree of microcytosis (MCV ≤ 78 fl) and hypochromia (MCH ≤ 27 pg) was a feature of β thalassemia trait and homozygous α-thalassemias. However, microcytosis was more marked in β thalassemia trait compared to heterozygous α-thalassemias. 16 In our study, the mean hemoglobin as well as the RBC count was lowest for beta thalassemia major (4.4 gm/dl), followed by HbE-beta thalassemia (6 gm/dl). Sickle cell anemia patients had a lower mean hemoglobin level than beta thalassemia traits (7.9 gm/dl vs 10.6 gm/dl). While the RBC count was normal in sickle cell anemia (mean 3.4 million/cumm), it was elevated in cases of beta thalassemia trait (mean 5.2 million/cumm). Like their study, in our study also MCV and MCH were low in thalassemia.

Many investigators have used different mathematical indices to distinguish beta thalassemia trait from iron deficiency anemia, using only a complete blood count. This process helps to select appropriate individuals for a more detailed examination; however, no study has found 100% specificity or sensitivity for any of these RBC indices. Vehapoglu A et al compared different mathematical indices and found that MCV and RBC counts and their related indices (Mentzer index and Ehsani index) have good discrimination ability in diagnosing beta thalassemia trait. 17
In the Mentzer index, if the quotient of the mean corpuscular volume (MCV, in fL) divided by the red blood cell count (RBC, in millions per microlitre) is less than 13, thalassemia is said to be more likely; if the result is greater than 13, iron deficiency anemia is more likely. In many cases the index falls between 11 and 13, and in such cases a peripheral blood smear and iron studies help to differentiate iron deficiency from thalassemia.18 However, another study suggests the Srivastava formula to be more reliable.19 HbA2 analysis is considered the gold standard for diagnosing thalassemia. Several studies have shown that iron deficiency directly affects the rate of HbA2 synthesis in bone marrow; therefore, 16-20 weeks of iron therapy should be instituted, after which a repeat serum iron with electrophoresis should be done to confirm improvement in the HbA2 level.20 The most common problem is the presence of microcytosis with HbA2 and HbF concentrations within the reference range. This may be due to iron deficiency or α thalassemia trait. Iron deficiency anemia produces a wide range of red cell abnormalities (reduction of MCV, MCH and hemoglobin levels with a normal or lowered RBC count) depending on its severity at the time of hematological analysis. For this reason, iron deficiency anemia can easily be mistaken for some forms of heterozygous thalassemia. A raised RBC count with low MCV and MCH is more consistent with α thalassemia trait.21 The HbA2 level may be modified by many factors.
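Returning to the mathematical indices mentioned above, the Mentzer rule is simple enough to compute directly from a complete blood count. The minimal Python sketch below applies the thresholds quoted in the preceding paragraph (with values between 11 and 13 treated as indeterminate, warranting a peripheral smear and iron studies); the function names and example values are illustrative only, and this is a screening heuristic, not a diagnostic tool.

```python
def mentzer_index(mcv_fl: float, rbc_million_per_ul: float) -> float:
    """Mentzer index = MCV (fL) / RBC count (millions per microlitre)."""
    return mcv_fl / rbc_million_per_ul

def interpret_mentzer(index: float) -> str:
    # Thresholds as quoted above: <13 favours thalassemia trait, >13 favours
    # iron deficiency anemia; 11-13 is a gray zone needing further work-up.
    if index < 11:
        return "beta thalassemia trait more likely"
    if index > 13:
        return "iron deficiency anemia more likely"
    return "indeterminate: peripheral smear and iron studies advised"

# Example: MCV 60 fL with an RBC count of 6.0 million/uL gives an index of 10.0
print(interpret_mentzer(mentzer_index(60.0, 6.0)))
```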
Returning to the HbA2 level, the most frequent problem is the coexistence of iron deficiency, which may even normalize the HbA2 level and require a repeat HbA2 assay after the iron deficiency has been treated.
In beta thalassemia carriers presenting with a normal HbA2 level, the most frequent cause is a co-inherited delta globin abnormality. Increased levels of HbA2 may result from the coexistence of a variant with electrophoretic or chromatographic properties close to those of HbA2; as a rule, this possibility has to be verified when an HbA2 level higher than 8 per cent is observed. It is always better to perform hemoglobin electrophoresis before any blood transfusion, because although a single transfusion does not affect the hemoglobin pattern on electrophoresis, multiple transfusions produce a significant difference. Quantitation of HbF is more important than HbA2 in beta thalassemia major, in which the HbA2 percentage is normal. In the study by Paunipagar et al, intermittently transfused β-thalassemia major patients showed both 'A' and 'F' bands thick and prominent, the A band being thicker than the F band in most cases, whereas regularly transfused patients showed mainly the A band, with an HbF band seen in only a few cases.22 Six patients in our study showed strong A and F bands and no other band; two of these were delta beta thalassemia who were asymptomatic and had never received a transfusion, whereas the other four were transfused thalassemia major.
The highest adult levels of HbF are seen in beta and delta beta thalassemia, or in hereditary persistence of fetal hemoglobin (HPFH), in which HbF can constitute up to 100% of the hemoglobin. In sickle cell disease, HbF usually constitutes only between 5% and 20% of the total hemoglobin. In the presence of some hemoglobin variants (HbS, HbC, Hb Lepore and some unstable hemoglobins) and in association with the beta thalassemia trait, a slight increase (1 to 5%) of HbF is found in heterozygotes, while higher levels can be found in homozygotes. HbF levels are variable (10-80%) in the presence of HbE and beta thalassemia, the important determinants being age, the presence of alpha thalassemia and genetic determinants of gamma chain synthesis. An increase in HbF keeps HbS more soluble in the deoxygenated state, and the illness is thus less severe.23 In our study, the highest level of HbF was found in beta thalassemia major (mean 84.9%, range 64% to 95%), followed by compound heterozygotes for HbE/beta thalassemia (10-55%). Differentiation of HbE beta thalassemia from homozygous HbE in samples containing HbA2/E > 75% and HbF < 15% is difficult. In places where molecular analysis is not available, HbF > 5% in combination with MCV < 55 fL and hemoglobin < 100 g/l could be used for screening of beta thalassemia/HbE disease.26 The mean HbF in HbE beta thalassemia cases in our study was 26% (range 5-55%). These patients had hemoglobin ranging from 4.1 to 7.9 g/dl (mean 5.9 g/dl) and MCV ranging from 55.2 to 64 fl (mean 60.8 fl). Although molecular analysis was not used for diagnosis in our study, these patients were symptomatic, had moderate to severe anemia, and one parent of each had beta thalassemia trait.
A history of recent blood transfusion must be sought, along with the correct age, to aid in an accurate diagnosis. Conditions with borderline HbA2 need careful interpretation: iron deficiency may lead to a low HbA2 and hence may mask a thalassemia trait, whereas B12/folate deficiency may lead to a slightly raised HbA2, leading to a false diagnosis of a trait.27 Hemoglobin electrophoresis is a labor-intensive and time-consuming method and is not efficient for quantifying low concentrations of HbA2 and HbF. HPLC is a sensitive and precise method and has become the preferred method for thalassemia screening because of its simplicity, superior resolution, rapid assay time and accurate quantification of Hb fractions.
CONCLUSION
Beta thalassemia trait and sickle cell anemia are both common in Nepal, along with some other hemoglobinopathies. A sharp peak of hemoglobinopathies and thalassemias is seen in the Tharu community, although other communities are also affected. These abnormal hemoglobins and thalassemias are mainly seen in the Terai region. The lowest hemoglobin was seen in thalassemia major, followed by compound heterozygotes for HbE beta thalassemia. Electrophoresis is a time-consuming and labour-intensive method that fails to quantify hemoglobin percentages and is therefore not an appropriate test for beta thalassemia trait screening, which depends on demonstrating an elevated HbA2 level. As identification of traits is necessary to reduce the birth of thalassemia major cases, electrophoresis should be used only in conjunction with more advanced techniques such as HPLC. This study is hospital-based, and diagnosis was based mainly on a combination of different findings rather than genetic analysis; it therefore provides only a glimpse of the different abnormal hemoglobins and their ethnic distribution. To know the exact burden of thalassemias and hemoglobinopathies and their ethnic and geographic distribution, community-based studies are required and molecular methods should be used for mutation identification.
Figure 1: Frequency of different hemoglobinopathies and thalassemias
Figure 2: Caste distribution of cases
Figure 3: a) HbE beta thalassemia showing strong bands at the HbF and HbA2 positions; b) normal control showing a strong band at the HbA position; c) beta thalassemia trait showing a strong band at HbA and a visible band at the HbA2 position that is stronger than that of the normal control; d) HbE trait showing strong bands at the HbA and HbA2 positions
Table 1: Age distribution in different hemoglobinopathies and thalassemias (age group in years)
Table 2: Sex distribution in different hemoglobinopathies and thalassemias (Diagnosis, Female, Male, Total, Percentage)
Screening of parents was advised in all cases, though it could not be done in every case owing to the unavailability of one or both parents. The study therefore included cases of hemoglobinopathy diagnosed by electrophoresis and/or HPLC; cases with a suspected hemoglobinopathy but normal electrophoresis or HPLC findings were excluded. Cellulose acetate electrophoresis at alkaline pH was performed and interpreted according to standard protocol. Beta thalassemia disease and homozygous HbE presented with microcytic hypochromic anemia, whereas sickle cell disease and HbE trait had a normocytic normochromic or mildly microcytic hypochromic blood picture. RDW was highest for thalassemia major and compound heterozygotes for HbE beta thalassemia. Correlation between results of electrophoresis
Table 5: Mean HbF % in different hemoglobin variants
Sickle cell anemia in our study had HbF up to 25% (mean 9.6%); the average HbF was 19.62% in sickle cell disease in the study by Shrikhande AV et al.24 An increased need for erythropoiesis because of chronic hemolysis or hematuria, as well as pregnancy, can precipitate vitamin B12 and folic acid deficiency in sickle cell disease, leading to macrocytosis.24 However, macrocytosis was not seen in any sickle cell anemia patient in our study; the MCV in sickle cell anemia ranged from 63 to 89 fl (mean 74.7 fl). Hemoglobin levels in HbE beta thalassemia range widely between the different phenotypes, from 3 g/dl or less to as high as 11 g/dl, and the mean level of HbF can range from 10-50%. The heterozygous state for HbE is characterized by minimal morphological abnormalities of the red cells and normal red cell indices; HbE constitutes 25%-30% of the hemoglobin. Homozygotes for HbE have hypochromic microcytic red cells with significant morphological abnormalities, including increased numbers of target cells; they are mildly anemic and the overall hematological findings are very similar to those of heterozygous β thalassemia.25 In our study, hemoglobin in HbE beta thalassemia ranged from 4.3 to 7.9 g/dl, and the mean HbF in these cases was 30.9% (range 16% to 42.8%). The red cell indices were normal to slightly microcytic hypochromic, the red cell distribution width (RDW) ranged from 12.4 to 16.5 (mean 14.1), and HbF was not elevated. There was only one case of homozygous HbE in our study; this patient had a hemoglobin of 7.1 g/dl but showed marked anisocytosis with low MCV and MCH, and an RDW of 27.
Arthroscopic Evaluation of Knee Cartilage Using Optical Reflection Spectroscopy
Articular cartilage is critical for painless and low-friction range of motion; however, disruption of articular cartilage, particularly in the knee joint, is common. Treatment options are based on the size and depth of the chondral defect, as well as involvement of subchondral bone. The gold standard for evaluation of articular cartilage is arthroscopy, but it is limited in its ability to objectively judge the depth and severity of chondral damage. Optical reflection spectroscopy has been introduced to objectively assess the thickness of cartilage. We present a technique to systematically evaluate the articular cartilage of the knee using BioOptico optical reflection spectroscopy (Arthrex) to better evaluate those with visible chondral and subchondral defects.
The role of articular cartilage is to permit joint motion that is near frictionless, with damage to this surface leading to the development of arthritis. Articular lesions are common, with an estimated 1 million patients affected annually in the United States alone. 1 Furthermore, a retrospective review of knee arthroscopy procedures found a 63% prevalence of chondral lesions with an average of 2.7 lesions per knee. 2 Patients with these lesions can present with complaints of knee pain, swelling, and disability. These lesions are difficult to evaluate with plain radiography, and magnetic resonance imaging (MRI) is commonly used for diagnosis, but the ability of MRI to diagnose these lesions has been called into question. 1,3 Thus, the gold standard for diagnosis remains visual inspection and probing of the articular surface during arthroscopy. 4 These chondral lesions have a limited potential for healing because of the lack of a blood supply, lymphatic system, and neural connection to the rest of the body in adults. 5 When cartilage injury occurs, the involvement of the subchondral bone dictates the type of healing that takes place. Without penetration of the subchondral bone, there is a brief induction of cell replication and matrix production by adjacent chondrocytes, whereas penetration of the subchondral bone leads to fibrocartilage healing. 5 Because of the limited ability to heal, treatment of these injuries remains a challenge. Current treatment options remain limited to chondroplasty versus chondral resurfacing techniques such as microfracture, osteochondral autograft transfer, osteochondral allograft transplant, or matrix autologous chondrocyte implantation. 6,7 The choice of treatment is often dictated by subchondral bone involvement and characteristics of the chondral defect such as the size, depth, and location of the lesion. The size and depth can be quantified by measuring cartilage thickness, and several methods have been proposed for this. The most frequently used techniques to achieve this include needle probe methods (in which a sharp needle is placed into cartilage with force and displacement is measured), high-resolution ultrasound, MRI, and optical coherence tomography based on interferometry. 1 More recently, specially designed arthroscopes have been used to accomplish this goal. 1 As new treatments are introduced, the development of improved methods for quantitatively grading lesions and cartilage quality is important.
One such method is the introduction of optical reflection spectroscopy to determine the thickness of cartilage. 1,8,9 The optical absorption of cartilage is different than that of subchondral bone, and optical reflection spectroscopy can use this to estimate cartilage thickness through the reflectance spectrum taken from the joint surface. 1,8,9 By relating the reflected intensities at specific wavelengths to the reflected intensity of a reference wavelength, cartilage thickness can be estimated. 1,8,9 Consequently, this technology has been shown to be able to evaluate cartilage thickness and visualize the thickness distribution over a knee joint using wavelengths. 9 By incorporating this technology into an arthroscope, measurement of cartilage depth can be made in vivo during an arthroscopy. A previous study determined that when using an arthroscope implemented with this technology, the thickness of cartilage can be estimated to within 0.28 to 0.30 mm. 1 By using an arthroscope implemented with this technology, cartilage depth will be quantified in vivo, providing the surgeon with better information to guide treatment selection. This technique article describes our technique for assessing cartilage depth during arthroscopy of the knee using spectral and texture enhancement provided by BioOptico (Arthrex, Naples, FL).
Technique Using BioOptico
To systematically evaluate the cartilage of the knee using BioOptico technology, the articular surface of the knee is divided into 22 sections to allow for systematic evaluation (Figs 1-4, Video 1). We present a technique to visualize each section. Once the section is adequately visualized, the BioOptico imaging system will assess the depth of the cartilage based on the color-coded schematic shown on the respective screen. No previous grading scale exists to compare reflectance spectroscopy readings to chondral thickness; therefore, a classification system mirroring the Outerbridge system is proposed with corresponding color readouts (types 0-3, Video 1). Type 0 has no color on reflectance spectroscopy and represents near-normal cartilage. Type 1 is colored pink and represents minor cartilage loss. Type 2 is colored red and represents moderate cartilage loss. Type 3 is colored dark red and represents severe cartilage loss (Table 1). Once the operating surgeon is satisfied with the visualization, he or she may manually capture the arthroscopic image along with the BioOptico evaluation for future reference before moving onto the next section. In addition to capturing the arthroscopic BioOptico image, we recommend manually recording the cartilage grade for each section on a 2-dimensional diagram (Fig 5). This can be recorded during the actual procedure by an assistant and used for further documentation.
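To show how the proposed type 0-3 scheme and the per-section record sheet could be captured digitally, the sketch below encodes the grade/color/description mapping described above and stores a grade for each evaluated section. The section labels and recorded values are hypothetical placeholders; only the grade-to-color mapping follows the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReflectanceGrade:
    grade: int
    color: str
    description: str

# Proposed grading scheme mirroring the Outerbridge system (types 0-3)
GRADES = {
    0: ReflectanceGrade(0, "no color", "near-normal cartilage"),
    1: ReflectanceGrade(1, "pink", "minor cartilage loss"),
    2: ReflectanceGrade(2, "red", "moderate cartilage loss"),
    3: ReflectanceGrade(3, "dark red", "severe cartilage loss"),
}

# Hypothetical record for a few of the 22 sections of the articular surface
record = {
    "patella_superior_medial_facet": 0,
    "medial_trochlea_superior": 1,
    "medial_femoral_condyle_anterolateral": 2,
}

for section, grade in record.items():
    g = GRADES[grade]
    print(f"{section}: type {g.grade} ({g.color}) - {g.description}")
```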
Position and Preparation
After induction of anesthesia, the patient is positioned in the supine position on the operating table. A post of the surgeon's preference should be placed on the operative side to position the leg and assist in producing a valgus force for diagnostic arthroscopy (Fig 6, Video 1). A nonsterile pneumatic tourniquet is placed on the patient's operative thigh and the lower extremity is prepped and draped in a normal sterile fashion. A surgical marker is used to outline standard anterolateral and anteromedial portals (Video 1). The portals are then established using an 11-blade scalpel and a hemostat is used to widen them.
Diagnostic Technique
After establishing the anterolateral and anteromedial arthroscopic portals, a 30° arthroscope is placed in the anterolateral portal. Evaluation of the articular cartilage using BioOptico technology can now be carried out by following the sequence of a standard diagnostic scope. To better aid in systematically evaluating and documenting the cartilage, the articular surface of the knee is divided into 22 sections.
Beginning with the knee in full extension, the patellofemoral joint is visualized first. With the scope camera pointed superiorly, advance the scope through the anterolateral portal until the superior and inferior medial facet, as well as the superior and inferior central facet, of the patella is visualized (Fig 7, Video 1). Next, a probe is inserted through the anteromedial portal and used to subjectively grade the cartilage on this surface using a grading system such as Outerbridge. Once this surface has been subjectively graded, the mode on the arthroscope can be switched to begin using BioOptico.
Once activated, the articular surface of interest is visualized and the BioOptico imaging system will assess the depth of remaining cartilage and display these depths as corresponding articular cartilage color on the video screen. When the operating surgeon is satisfied with the visualization seen with BioOptico in the section and cartilage depth is adequately assessed, he or she may manually capture the arthroscopic image with BioOptico evaluation for future reference. The BioOptico mode is then deactivated and the arthroscope is rotated 180° downward to evaluate the medial trochlea, divided into a superior and inferior section (Video 1). Again, a probe through the anteromedial portal is used to first evaluate these sections of articular cartilage subjectively in a similar manner to that of the patellar surface. Once subjectively assessed, BioOptico mode is activated and the cartilage surface depth is objectively evaluated and the findings can be recorded as demonstrated above. After evaluation of the central and medial patellofemoral sections is complete from the anterolateral portal, the arthroscope is removed and placed into the anteromedial portal to best visualize the remaining superior and inferior lateral facets of the patella and the corresponding superior lateral and inferior lateral trochlear zones (Fig 8).
The arthroscope is advanced through the anteromedial portal until the superior and inferior lateral patellar facets come into view. Using a probe through the anterolateral portal, these surfaces can be subjectively graded. Next, the BioOptico mode is activated and the lateral patellar facets are objectively graded. The BioOptico mode is then deactivated and the arthroscope is rotated 180° inferiorly to bring the superior lateral and inferior lateral trochlear zones into view. With the BioOptico mode deactivated, the probe, through the anteromedial portal, is used to first subjectively evaluate the surface and then BioOptico mode is reactivated to objectively grade the surface. To use the BioOptico mode with the best accuracy, ensure that the arthroscope is as perpendicular as possible to the cartilage surface being assessed, because increasing obliqueness can lead to decreased accuracy of articular cartilage thickness measurement.
With arthroscopic examination of the patellar surface complete, the arthroscope is removed and placed back into the anterolateral portal; the knee is then flexed to approximately 30° and a valgus force is applied to allow visualization of the medial compartment (Video 1). With this degree of flexion, the anterior lateral and anterior medial sections of the medial femoral condyle can be evaluated (Fig 9). A probe is placed through the anteromedial portal and is used to subjectively grade the anterior lateral and anterior medial sections of the medial femoral condyle. Once subjectively graded, the BioOptico mode is then activated; these surfaces are objectively graded and results are recorded. From this position, the anterior and posterior sections of the medial tibial plateau can be visualized by rotating the arthroscope 180° downward (Fig 10). After rotating the arthroscope, a probe is used to subjectively evaluate the cartilage with the BioOptico mode deactivated and then the mode is reactivated to objectively evaluate these surfaces. Next, the arthroscope is retracted slightly and rotated back upward 180°. The valgus force is removed and the knee is then hyperflexed to approximately 110°. This hyperflexion brings the posterolateral and posteromedial sections of the medial femoral condyle into visualization. From this position, the arthroscopic probe is used to subjectively evaluate the posterior sections with the BioOptico mode switched off. Once subjectively evaluated, the mode is reactivated and the posterior sections can be objectively evaluated.
The arthroscope is then retracted from the anterolateral portal and placed back into the anteromedial portal with the knee in slight flexion because attention is now turned to the lateral surfaces of the knee. The knee is brought into a "figure 4" position, with the knee flexed to approximately 90° and the hip in external rotation. This allows for better visualization of the lateral compartment. In this position, the arthroscope is advanced until the lateral femoral condyle comes into view, in particular the anterior medial and lateral sections (Fig 11). Next, the arthroscopic probe is inserted through the anterolateral portal and the surfaces are probed to subjectively evaluate them. Once subjectively evaluated, the BioOptico mode on the arthroscope is activated and the anterior medial and lateral sections are objectively evaluated (Video 1). Next, the arthroscope is rotated 180° downward to facilitate evaluation of the anterior and posterior sections of the lateral tibial plateau (Fig 12). With the BioOptico mode first deactivated, the arthroscopic probe is used to evaluate these surfaces, followed by objective evaluation with the BioOptico mode activated. To evaluate the posterolateral and posteromedial sections of the lateral femoral condyle, the arthroscope is retracted slightly and rotated 180° upward as the knee is brought into hyperflexion >110°. Once hyperflexed, the BioOptico mode is deactivated and the probe is used to provide subjective evaluation of the cartilage. The BioOptico mode is then reactivated, providing for the posterolateral and posteromedial sections of the condyle to be objectively evaluated.
Through our experience, adaptations have been made to better allow for evaluation of the cartilage surface. First, if the surgeon does not feel he or she achieved adequate visualization of the patellofemoral compartment through the standard portals, an accessory superolateral or superomedial portal may be created (Video 1). These portals allow for visualization of the contralateral side of the patellofemoral joint, which can be evaluated as previously described. Also, to examine the cartilage surface of interest at an optimal perpendicular angle, it may be useful to use a 70° arthroscopic camera during the procedure. In addition, adjusting the overall brightness of the light source can allow for a clearer examination. The combination of additional portals, alternative arthroscopic cameras, and brightness adjustment allows for optimal visualization of all surfaces in the majority of cases (Table 2).
After evaluation is complete, other planned procedures can be carried out, such as meniscus, chondral, or ligamentous surgery. Once completed, the knee is copiously flushed with irrigation fluid and arthroscopic equipment is removed. Arthroscopic portals may be closed with No. 2-0 nylon interrupted sutures and a sterile dressing placed over the incisions per the operating surgeon's preference.
Discussion
Traumatic and degenerative injuries to the cartilage are 1 of the leading causes of disability worldwide, with chondral lesions commonly found in patients older than age 40. 1 These injuries are commonly the result of acute trauma or the repetitive overload of the knee joint found in chronic cases. Impaction of the joint surface leads to softening or fissuring of the cartilage, tearing, or delamination. Because of various biologic factors, chondral injuries have a limited ability for healing, leading to the potential to worsen over time. With the limited ability to heal, treatment is limited to conservative measures, debridement and chondroplasty, chondral resurfacing techniques, microfracture, osteochondral autograft transfer, osteochondral allograft transplant, or matrix autologous chondrocyte implantation. 6,7 The diagnosis of chondral injuries can be a difficult task. Using patient signs and symptoms, such as pain, effusion, crepitus, or decreased motion, has been shown to have a low specificity and predictive value. 4 Also, the joint space narrowing, subchondral sclerosis, and loose bodies seen on plain radiography with chondral injuries do not appear until late in the disease process. 10 This leaves MRI or arthroscopic evaluation as the main methods by which cartilage injuries are diagnosed, with MRI being the only noninvasive technique. The ability of MRI to diagnose these lesions, however, depends on both the technique of the MRI and radiologist experience, with its validity being questioned. 3 When MRI is used to evaluate cartilage lesions, 1 study found only a moderate interobserver validity of 0.80. 11 In addition, artifacts (such as the "magic angle" effect) can complicate the evaluation of these lesions and oftentimes small lesions are overlooked. 4,12 Only newer techniques, such as delayed gadolinium-enhanced MRI of cartilage, make it possible to evaluate initial lesions, but they are not routinely used secondary to prohibitive cost and availability. 4,13 Arthroscopic evaluation is the most valid method to evaluate the cartilage and is considered the gold standard because it allows for the direct viewing of the lesion. 4 Arthroscopy is invasive, but the direct viewing allows for the subjective grading of the cartilage by the surgeon, which dictates treatment. Under direct visualization, a hook is used to palpate the cartilage and assign a grade to the lesion. This palpation has been found to be highly subjective, depending on the manual pressure applied by the surgeon and on the geometry of the distal end of the hook. 14 In addition, there is still no consensus regarding the true validity of arthroscopy to diagnose chondral lesions. 15 A previous study determined that using arthroscopy to diagnose cartilage lesions only had an interobserver agreement of 0.67 between arthroscopists. 16 Studies have found that there is more agreement when intact cartilage is present or with lesions at the ends of the spectrum, such as Outerbridge I or IV. Brismar et al. 17 determined a mean interobserver agreement of >80% was found with grade I or IV lesions, but agreement dropped to 65% for grade II or III lesions. In a survey of highly trained arthroscopists, the majority of surgeons thought differentiation of high- or low-grade lesions was valid, but almost 50% believed there was a "need for improvement" in differentiation between grade I and II lesions and grade II and III lesions. 4
In addition, 13.3% and 61.9% responded that the incorporation of objective measurements for these intermediate lesions would be "very useful" and "somewhat useful," respectively. 4 By replacing the subjective grade assigned by surgeons with technology that can objectively evaluate cartilage, the validity of arthroscopy for diagnosing chondral lesions will be improved, especially for lesions of intermediate grade.
The advent of optical reflection spectroscopy, as found in BioOptico, provides for an objective way to measure cartilage thickness. Cartilage is connective tissue comprising proteoglycans, collagen, and water. Because it lacks blood perfusion, the absorption and scattering properties of light are mainly determined by these components. 9 The subchondral bone, in contrast, has blood perfusion and thus its absorption and scattering properties are mainly determined by hemoglobin and other pigments in the blood. 9 By measuring the difference between the wavelengths that are absorbed and scattered between the 2 tissue types, the cartilage thickness can ultimately be measured. 9 By applying this technology to the video stream provided during arthroscopy, an objective measurement of cartilage thickness in vivo can be accomplished. This method has been shown to objectively measure cartilage thickness with an error of 0.28 to 0.30 mm and the error was lowest when cartilage thickness was <1.5 mm. 1 This roughly correlates to grade II and III cartilage lesions, providing a reliable quantitative assessment of cartilage grades found to have the greatest interobserver variability. Cartilage lesions are typically graded based on the subjective descriptions of surgeons made during arthroscopy and according to either the Outerbridge or the more recent International Cartilage Repair Society grading systems. 9 In both of these systems, grading of the lesion relies on the intact surrounding cartilage thickness. 9 With the quantitative assessment provided by the optical reflection spectroscopy technology, a new grading scheme could be developed that was only dependent on cartilage that remains in the lesion, with the potential to better correlate with symptoms or the ability to heal. 9 An additional possible avenue for this technology is to incorporate it into in-office arthroscopy. This would improve the ability to diagnose early articular degeneration found in osteoarthritis and allow for some treatment modalities that might not be possible with later disease. 9 The limitations of the outlined technique are many. Although reflectance spectroscopy better allows for objective assessment of cartilage lesions, minimal data exist correlating the proposed grading system and progression/symptomatology of said lesions. Additionally, the ability to accurately assess the 22 sections previously outlined requires a learning curve as well as increased operative time. Outside of operative time, minimal additional risk is placed upon the patient with this technique.
The use of BioOptico during arthroscopy allows for the objective measurement of cartilage thickness, allowing for better assessment of lesions. Its use can help guide surgeons in treatment choice, especially in the indeterminate grade lesions that have proven difficult in the past.
MICROBIAL POTENTIOMETRIC SENSOR TECHNOLOGY FOR REAL-TIME DETECTING AND MONITORING OF TOXIC METALS IN AQUATIC MATRICES
Considering that toxic metals can affect metabolic processes in microorganisms adversely, it can be hypothesized that these metals in water matrices would induce a decrease in metabolic activity of the biofilm microorganisms populating the surface of a sensing electrode, which could be registered as a change in the open-circuit potential (OCP) generated by the biofilm microorganisms. The goal of this study was to test this hypothesis and demonstrate the underlying principle that microbial potentiometric sensor (MPS) technology could be used for long-term and real-time monitoring and detection of rapid changes in metal concentrations in realistic aquatic environments. To address the goal, four objectives were addressed: (1) a batch reactor with three graphite-based MPS electrodes was fabricated; (2) a set of single-ion solutions and one multiple ion solution were prepared reflecting realistic concentrations of metals found in electroplating wastewaters; (3) the responses of the MPS to the simultaneous presence of multiple toxic metal ions in a single solution were measured; and (4) the changes of the MPS signals in the presence of individual metal ion solutions were examined. While the hypothesis was validated, the study also revealed that the MPS was sufficiently sensitive to not only detect, but also quantify, toxic metal ion concentrations in aqueous solutions. The coefficients of determination, which were R² > 0.995, and a responsiveness of < 1 μmol/l for some toxic metal cations strongly support the performance of MPS technology, ranking it in the echelons of expensive analytical tools capable of detecting and measuring trace elements. The magnitude of the MPS response was toxic metal specific. When the inhibition portion of the signal area is normalized by the molar concentration, the assessed sensitivity order was: Se > Cd > Pb > Ag > Ni > Zn. The study provides valuable information for enforcement agents, environmental professionals, and wastewater treatment operators, so toxic metal pollution and its detrimental impacts can be prevented and mitigated.
INTRODUCTION
Since industrialization began, toxic metal pollution from electroplating operations has been a problem threatening water resources and the ecosystems they support [1]. While the toxicity of some electroplating metals to living organisms is well established, the demand for electroplated parts and products has led to a marked increase in electroplating operations across the world [2][3][4]. Most of these operations do not employ wastewater treatment technologies to prevent and mitigate environmental damages from discharges of toxic metals [5,6]. Frequently, the toxic metal-laden wastewaters are directly or indirectly released into the hydrosphere or discharged into the sewer systems where they are mixed with municipal sewage or collected stormwaters [7][8][9][10]. Because many of these electroplating operations are batch-driven processes, most of the toxic metal emissions are characterized by pulse-discharge profiles. This poses significant challenges to operators tasked with monitoring and controlling their wastewater outflows. Most operators rely on monitoring protocols based on intermittent sampling of effluents (e.g., grab samples) followed by laboratory analysis. The approach of periodic sample collection and laboratory analysis does not accurately describe the toxic metal content of water effluents, considering the low sampling frequency and scenarios where irregular intermittent discharges significantly bias the analytical results. Technologies capable of real-time monitoring are needed to identify and document parties illegally discharging regulated toxic metals into the environment. Unfortunately, technologies that enable continuous, real-time monitoring or detection of toxic metal pollution in natural aquatic or wastewater matrices are either unavailable or not cost-effective for most electroplating operations. Without viable monitoring technologies, users are unable to accurately monitor the discharges of pollutants. Regulatory authorities would also greatly benefit from cost-effective technologies to enforce regulations and prevent illegal releases of toxic metals into natural or human-made aquatic systems.
The ecotoxicological implications of harmful metals released into natural waters or sewage collection systems are well documented [11][12][13][14][15]. The vast majority of toxic metals interfere with biochemical processes impacting microbial to multicellular organisms [16]. Toxic metal pollutants adversely affect the proper operation of biological wastewater treatment reactors and consequently prevent effective and efficient removal of organic pollutants from the waste-streams and disrupt water reclamation [8,17]. Many wastewater treatment operations that receive toxic metal-laden wastewaters are unable to meet their discharge permits because these metals inhibit the metabolism of the microorganisms in the mixed liquor, are present in the effluent discharges, and accumulate in the biosolids, rendering them hazardous waste.
A recent study by Burge et al. [18] demonstrated a novel type of microbial potentiometric sensor (MPS) capable of detecting changes in the local aquatic environment surrounding the biofilm populated sensing electrodes. Considering that toxic metals can affect metabolic processes in microorganisms adversely, it is reasonable to hypothesize that these metals in water matrices would induce a decrease in metabolic activity of the biofilm microorganisms populating the surface of the sensing electrode. This effect could be registered as a change in the open-circuit potential (OCP) generated by the biofilm, as described by Burge et al. (2020). The typical MPS response, the OCP of the biofilm versus Ag/AgCl electrode, ranges from approximately -700 to +800 mV, depending on the system and the biochemical conditions. The goal of this study is to test this hypothesis and demonstrate the underlying principle that MPS technology could be used for long-term and real-time monitoring and detection of rapid changes in metal concentrations in realistic aquatic environments.
In this study, four objectives were completed to investigate whether the MPS could be utilized to detect the presence and quantify the concentrations of metal ions in solutions. First, a batch reactor with three graphite-based MPS electrodes was fabricated, and the biofilm on the electrodes was cultivated until a steady OCP signal versus the reference electrode was established. Secondly, a set of single-ion solutions and one multiple ion solution were prepared reflecting realistic concentrations of metals found in electroplating wastewaters. Next, the responses of the MPS to the simultaneous presence of multiple toxic metal ions in a single solution were measured. Lastly, the changes of the MPS signals to the presence of individual metal ion solutions were examined.
Batch reactor fabrication and cultivation of the biofilm on the MPS electrodes
The cylindrical batch reactor was fabricated from a clear polycarbonate tube with a diameter of 15 cm and a height of 12.5 cm (Fig. 1). The total reactor volume was 2.3 l. The top of the reactor was equipped with several ports for introduction and removal of solutions. The reactor chamber was equipped with a magnetic stirrer to ensure the complete mixing of solutions within the chamber.
Three graphite MPS indicator electrodes and a commercially available combination ORP probe (Model SE300, Milwaukee Instruments) were fabricated into the top and walls of the reactor (Fig. 1). The three MPSs and the single ORP were referenced using a single silver/silver chloride electrode fabricated within the combination ORP probe. The MPS electrodes were fabricated from 0.625 mm graphite rods mounted within a PVC-threaded fitting. The three MPSs, the ORP electrode and the reference (Ag/AgCl) electrode were connected to a high impedance (> 700 MΩ) B10 signal acquisition board (Burge Environmental, Inc., Tempe, AZ). The B10 board was connected to a Raspberry Pi computer to transmit the data via a Wi-Fi network to a cloud-based data storage system. An open-source dashboard (Redash) was used to download the data for analysis and visualization. The reactor was filled with an anaerobic solution generated by mixing dechlorinated municipal water with leaf litter (dissolved organic carbon, DOC > 80 mg/l; tap water from Tempe, AZ). This anaerobic solution was introduced into the reactor and allowed to equilibrate for a period of more than two months to ensure the population of an endemic biofilm onto the surface of the three MPS graphite electrodes. Two weeks of equilibration typically yield biofilm growth suitable for MPS measurements [18]. During the biofilm cultivation period, the solution was continuously stirred and the experimental chamber was located in a dark room, to prevent the growth of photosynthetic microorganisms, and held at a temperature of 23 ± 1°C.
Preparation of toxic metal solutions representative of industrial pollutants
Stock solutions were prepared to reflect the toxic metal compositions commonly present in wastewaters entering the international wastewater treatment facility at the United States/Mexico border [19,20]. The concentrations of the final multiple ion solution within the reactor were 4, 8, and 16 times greater than the reported average wastewater metal concentrations, and corresponded to reported peak values. Although somewhat higher than the reported values, these concentrations are one to two orders of magnitude lower than the toxic metal concentrations released by metal-plating operations. It is assumed that the actual heavy metal-laden wastewaters at the emission point source are at least 50 to 100 times higher than the concentrations tested in this investigation. For the single ion solution experiments, only concentrations that were four times greater than the average values, as reported at the entrance of the international wastewater treatment plant, were tested. The final toxic metal concentrations introduced into the reactor via a pulse injection are presented in Table 1. The toxic metal cation solutions were prepared using nitrates, acetates, or standard solutions, except for cadmium, where chloride salts were used. Acetate anions were used to provide nutrients for the microorganisms in the biofilm and facilitate the signal recovery. The toxic metal anions used in the study were sodium salts. All salts used in the preparation of the solutions were ACS reagent grade (> 96%) or higher purity (Sigma Aldrich, Acros Organics, ICN Biomedicals, or Baker). Sodium salt solutions of the counterions (nitrate, acetate, and chloride) were prepared at the same concentrations as the metal solutions and tested to ensure that the changes in the MPS signal resulted from the presence of the toxic metal and not from a matrix effect.
Table 1
Examining the microbial potentiometric sensor cumulative response to the simultaneous presence of multiple metals in an aqueous solution
After the signal of all three active MPS indicator electrodes stabilized, which occurred when the measured microbial response (OCP) was approximately 776 ± 2 mV, multiple ion stock solutions were introduced into the reactor using a syringe (pulse injection volumes of 10 ml, 20 ml, and 40 ml). Considering that the hardness of the water was at least 150 mg/l as CaCO3 and the reactor volume was at least 57 times greater than the injected metal stock solution (i.e., the dilution at 40 ml injection produced a dilution factor of 57), pH drifts associated with the injections were considered negligible and did not meaningfully influence the experiment.
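As a quick check of the dilution arithmetic noted above (assuming the 2.3 l reactor volume and following the text's convention of dividing the reactor volume by the injected volume, which gives ~57 for the 40 ml injection), the dilution factors for the three pulse injections work out as follows; this is an illustrative calculation, not part of the original protocol.

```python
# Dilution factors for pulse injections into the 2.3 l (2300 ml) reactor,
# computed as reactor volume / injected volume.
reactor_volume_ml = 2300.0
for injection_ml in (10.0, 20.0, 40.0):
    dilution = reactor_volume_ml / injection_ml
    print(f"{injection_ml:>4.0f} ml injection -> dilution factor ~{dilution:.0f}")
```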
The response data was evaluated by integrating the area of the curve between the highest and lowest point of the generated signal, which corresponds to the inhibition portion of the signal. This data analysis method ensured that only the inhibiting effects of the toxic metals on the metabolic activities of the microorganisms were reported as results. The reasoning behind this type of signal area integration was based on the behavior of the microorganisms in the biofilm upon exposure to toxic chemicals. When toxins are introduced, the MPS signal rapidly decreases and correlates to the inhibition of the metabolic (e.g., electron generation/cytochrome storage) processes [18]. Once toxic metals are immobilized, or otherwise removed from the environment, the inhibitory effects are eliminated, and normal metabolic activities resume, which can be observed by a characteristic increase in the microbial OCP signal (recovery portion of the signal).
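A minimal sketch of this integration step is shown below, assuming the OCP trace is available as time/voltage arrays. The inhibition portion is taken as the segment between the pre-injection maximum and the signal minimum, and the area is computed relative to that maximum; the function name and synthetic trace are illustrative only and do not reproduce the authors' analysis code.

```python
import numpy as np

def inhibition_area(t_s, ocp_mv):
    """Integrate the inhibition portion of an MPS response curve (mV*s).

    The area is taken between the highest point preceding the valley and the
    lowest point of the valley, measured relative to that pre-injection maximum.
    """
    t = np.asarray(t_s, dtype=float)
    v = np.asarray(ocp_mv, dtype=float)
    i_min = int(np.argmin(v))              # lowest point of the valley
    i_max = int(np.argmax(v[:i_min + 1]))  # highest point before the valley
    baseline = v[i_max]
    return np.trapz(baseline - v[i_max:i_min + 1], t[i_max:i_min + 1])

# Synthetic example: a 60 mV valley superimposed on a ~776 mV baseline
t = np.arange(0.0, 1200.0, 10.0)
v = 776.0 - 60.0 * np.exp(-((t - 600.0) / 150.0) ** 2)
print(round(inhibition_area(t, v), 1), "mV*s")
```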
Examining the microbial potentiometric sensor response to aqueous solutions with a single toxic metal ion
Six of the seven toxic metals investigated in the cumulative response were used to examine the MPS response to individual metal ions. Specifically, MPS responses to Se, Cd, Pb, Ag, Ni, and Zn were investigated. For each toxic metal investigated, separate aliquots of 10 ml of each single-ion solution were pulse injected into the reactor. For silver, aliquots of 10 ml and 20 ml were injected to confirm the observation from the multiple ion solution tests and to observe whether the same pattern exists for single ions, where doubling the concentration of a single ion results in an increase in signal magnitude. Because the objective was to see whether there would be a response, and not to quantify and calibrate the response, the injections were conducted once the signal reached ± 5 mV of the stable baseline. This approach demonstrated that the sensors could respond even when the biofilm has not fully recovered from the preceding toxic metal inhibition. This test indicates that the MPS is suitable for industrial applications that require real-time data, because the sensor does not need to fully recover the baseline microbial response to generate meaningful results before being exposed to additional toxins. The integrated area of the inhibition portion of the signal was divided by the concentration to compare the relative MPS response to each toxic metal. Although not as accurate as creating a calibration curve for each element, this type of signal normalization allows for approximate evaluation of the relative impact of each toxic metal on the MPS signal. Figure 2a illustrates the responses of the three MPSs to the simultaneous introduction of different concentrations of mixed toxic metals. The area of the valleys (inverse signal peaks) increased with doubling and quadrupling the concentrations of the metals within the reactor. When the net molar concentrations, comprising the concentration of all ions, were plotted against the integrated areas of the inhibition portion of the signal, three-point calibration curves were obtained for each of the MPS electrodes. As illustrated in Figure 2b, the R² values for all three curves, representing each MPS electrode, exceeded 0.995. These highly correlative relationships between the concentrations of metal ions and the integrated areas of the microbial responses went beyond the postulated hypothesis that the sensors would respond to the presence of toxic metals in water, and strongly suggested the utility of the MPS for quantifying metal concentrations in industrial settings. Such high coefficients of determination exceed the performance of many analytical methods which are currently accepted as standards in water quality analysis [21,22]. The MPS 3 electrode appeared to be the most sensitive to the induced changes, as illustrated by the highest slope of the calibration curve. Sensitivity differences among MPS electrodes are expected because the composition and nature of the biofilm on the individual graphite surfaces may vary. Considering that the biofilm sensing surface is a living organism populating the graphite surface of the MPS, no two biofilms on the individual MPSs are expected to be exact duplicates. The biofilms may vary based on microbial composition, density, thickness of the biofilm and several other factors.
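To illustrate the three-point calibration described above, the short sketch below fits a straight line of integrated inhibition area against net molar concentration and reports the coefficient of determination. The concentration values follow the 10/20/40 ml injection series mentioned in the text (~27 µmol/l and its doubling and quadrupling); the area values are invented placeholders, not measured data.

```python
import numpy as np

conc = np.array([27.0, 54.0, 108.0])    # net molar concentration, umol/l (approx.)
area = np.array([1.9e4, 3.7e4, 7.5e4])  # integrated inhibition area, mV*s (placeholder)

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"slope = {slope:.1f} mV*s per umol/l, R^2 = {r2:.4f}")
```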
As noted, while the MPSs may not exhibit identical signal responses in terms of the magnitude of the response (and subsequent recovery) to a toxic metal, each MPS is capable of responding to stimuli in the same manner (e.g., toxins will inhibit metabolic processes, followed by a gradual recovery of those processes). As such, even when placed in the same environment, two biofilms may not recover identically from an adverse effect of the same toxin. Table 1 summarizes the actual total net concentrations for the different pulse injections. The MPSs were able to detect toxic metal concentrations down to ~27 μmol/l (10 ml injections). The shape and magnitude of the signals suggest that this sensor technology can detect even lower concentrations of metals in water matrices. Furthermore, the form of the signal indicated that the biofilm is capable of recovering to its baseline conditions relatively quickly, which is likely due to the ability of microorganisms to immobilize the toxic metals and neutralize their toxic effects. When the control solution was added (no toxic metals), the signal decreased about 3 to 4 mV for a few minutes, which was a couple of mV over the baseline signal variation. These results suggest that the sensitivity of the MPS technology may be even higher; however, it would be challenging to differentiate a 4 mV signal change from a ±2 mV baseline signal (noise). Because this is the first study of its kind, there is no literature available; consequently, it is impossible to compare the heavy metal sensitivity of this technology with other data. Figure 3 illustrates the responses of the MPS electrodes to individual toxic metal ions. Although the magnitude of the response varied slightly, the shape of the response curves for each electrode was similar for an individual toxic metal. The magnitude of the two Ag response curves for different concentrations was consistent with the trends observed for the injections of aqueous solutions containing mixtures of toxic metal ions. All MPS electrodes produced higher signals as the silver concentration increased.
Microbial potentiometric sensor response to individual toxic metal concentrations in an aqueous solution
The nickel injection indicated that the MPS 3 signal did not fully recover to its original baseline value; this is probably due to the significant toxic effect created by the high nickel concentration. Nonetheless, this baseline shift did not affect the performance or sensitivity of the MPS 3 electrode for the inhibition portion of the signal, which was evident in the response curves of the metals following the nickel injection. It could be postulated that MPS 3 would have returned to a normal baseline if sufficient recovery time had been allowed; however, this aspect of the experiment was beyond the scope of this study.
From Figure 3, it can be observed that the MPS exhibited the highest sensitivity to selenium and the lowest sensitivity to zinc. When the inhibition portion of the signal area is normalized by the molar concentration, the following sensitivity order can be proposed: Se > Cd > Pb > Ag > Ni > Zn. Although this sensitivity order must be confirmed by comparing the responses for the same molar concentrations, the sole fact that the sensors could differentiate toxic metals is beyond the expectations of this study. Furthermore, the ability of this technology to detect concentrations of < 1 μmol/l (i.e., < 10 μg/l) for some toxic metals surpasses any assumptions inferred in the hypothesis of this study.
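To make the normalization behind this ranking concrete, the fragment below divides each integrated inhibition area by the injected molar concentration and sorts the metals by the result. All numeric values are invented placeholders chosen only to reproduce the ordering discussed above; they are not the study's measurements.

```python
# Placeholder inhibition areas (mV*s) and injected concentrations (umol/l)
areas = {"Se": 9.0e4, "Cd": 7.5e4, "Pb": 6.0e4, "Ag": 4.0e4, "Ni": 3.0e4, "Zn": 1.5e4}
conc = {"Se": 4.0, "Cd": 4.0, "Pb": 3.5, "Ag": 2.5, "Ni": 40.0, "Zn": 30.0}

normalized = {metal: areas[metal] / conc[metal] for metal in areas}
ranking = sorted(normalized, key=normalized.get, reverse=True)
print(" > ".join(ranking))  # Se > Cd > Pb > Ag > Ni > Zn with these placeholders
```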
CONCLUSIONS
The goal of this study was to demonstrate the underlying principle that MPS technology could be used for real-time monitoring and detection of rapid changes in toxic metal concentrations in aquatic environments. This study is the first of its kind and opens new avenues for developing heavy metal monitoring strategies.
During the investigation, the original hypothesis was validated; however, the study also revealed other unique capabilities of this sensor technology. The experiments indicated that the MPS was sufficiently sensitive to not only detect, but also quantify, toxic metal ion concentrations in aqueous solutions. The coefficients of determination, which were R² > 0.995, and a responsiveness of < 1 μmol/l for some toxic metal cations strongly support the performance of MPS technology, ranking it in the echelons of expensive analytical tools capable of detecting and measuring trace elements. The drawback of traditional analytical tools compared to the MPS, however, is that they are significantly more expensive, cost more to manage, and cannot be operated in a continuous, real-time, and long-term monitoring mode. Following the maxim, "if it cannot be monitored, it cannot be enforced", MPS technology enables real-time monitoring of toxic metal pollution in aquatic environments. Results from the MPS provide valuable information for enforcement agents, environmental professionals, and wastewater treatment operators, so toxic metal pollution and its detrimental impacts can be prevented and mitigated. The low cost of MPS technology could significantly reduce the expenses associated with environmental remediation and treatment if an environment is contaminated. When coupled with artificial intelligence technology and other data, MPS technology may offer a unique opportunity to become the tool for complete monitoring of toxic pollution in aquatic environments.
To understand the mechanistic details driving the performance of this technology, further research will be needed. Elucidating how specific toxic metals affect the changes in the open-circuit potential could advance the understanding of this technology and help optimize its sensitivity. However, these research questions are beyond the scope of this study and represent a foundation for future work.
Perspectives on the Impact of Sampling Design and Intensity on Soil Microbial Diversity Estimates
Soil bacterial communities have long been recognized as important ecosystem components, and have been the focus of many local and regional studies. However, there is a lack of large-spatial-scale data on the biodiversity of soil microorganisms; national or more extensive studies to date have typically consisted of low replication of haphazardly collected samples. This has led to large spatial gaps in soil microbial biodiversity data. Using a pre-existing dataset of bacterial community composition across a 16-km regular sampling grid in France, we show that the number of detected OTUs changes little under different sampling designs (grid, random, or representative), but increases with the number of samples collected. All common OTUs present in the full dataset were detected when analyzing just 4% of the samples, yet the number of rare OTUs increased exponentially with sampling effort. We show that far more intensive sampling, across all global biomes, is required to detect the biodiversity of soil microorganisms. We propose avenues such as citizen science to ensure these large sample datasets can be more realistically achieved. Furthermore, we argue that taking advantage of pre-existing resources and programs, utilizing current technologies efficiently and considering the potential of future technologies will ensure better outcomes from large and extensive sample surveys. Overall, decreasing the spatial gaps in global soil microbial diversity data will increase our understanding of what governs the distribution of soil taxa, and how these distributions, and therefore their ecosystem contributions, will continue to change into the future.
The geographic ranges of biological species, and therefore the biodiversity of ecosystems, are continually changing over ecological and evolutionary timescales. The collation of national and international databases has proven vital to better understand patterns in current species distributions, supporting evidence-based conservation efforts (Jetz et al., 2012), and to predict species range-shifts under, for example, climatic change and future land use scenarios (Thomas et al., 2004;Pompe et al., 2008). However, although climate and land use projections are increasingly highly resolved, often at resolutions of a few kilometers or finer (Chen et al., 2015;Abatzoglou et al., 2018), the spatial grain of resolution for most known species distributions remains far coarser (Jetz et al., 2012). Microorganisms, the most abundant group of organisms on Earth, are key players in global biogeochemical cycles, yet only limited attempts have been made to characterize their distributions across wide geographic ranges using analyses of large datasets. This is especially true for soil microbial communities, where environmental heterogeneity leads to many distinct microbial habitats (Fierer, 2017), and global dissimilarities in soil physico-chemical characteristics present unique considerations to ensure accurate cataloging of their diversity across landscapes, regions and continents. Substantial efforts are required to reduce the gaps in soil microbial diversity data, which will require studies with adequate sampling depth across all global biomes.
Systematic surveys of microbial life are essential for providing new perspectives on bacterial distributions and the causal processes driving these patterns. Understandably, the significant effort and costs associated with consistently sampled national or global studies mean it is common to see research that covers large spatial extents, but with spatially irregular sampling and relatively low replication. Even the most extensive national-scale datasets of soil bacterial biogeography, such as surveys of the British Isles (Griffiths et al., 2011) and Australia (Bissett et al., 2016), use non-uniform sampling designs, and may comprise sample replication that is biased toward more populated and/or accessible areas. To avoid or account for these biases, random or regular (e.g., grid-based) sampling is considered desirable, but is rarely attempted (Powell et al., 2015;Terrat et al., 2017). Therefore, to inform approaches for expanding global soil microbial datasets, it is useful to understand the effects of these alternative sampling approaches on our estimations of bacterial soil community structure.
Comprehensive global soil microbial biodiversity datasets must be assembled from regional studies; however, the relative comparability and compatibility of regional datasets will determine how useful a given global dataset would be. Thus, here, we explore a dataset that does not suffer from the usual sampling limitations of many regional datasets in the published literature to determine the possible effects of variation in sampling design and replication on detection of soil microbial biodiversity. Using bacterial community data collected across a 16-km regular sampling grid within France as part of the French Soil Quality Monitoring Network (Ranjard et al., 2010;Terrat et al., 2017), we quantified the effects of sampling strategy and intensity for soil bacterial biodiversity estimates (see Supplementary Appendix S1). This analysis shows that the most common OTUs were, in fact, detectable from the analysis of only ∼4% of the samples collected (Figure 1A, as indicated by the plateau of the curve). This is largely irrespective of whether samples were collected from random locations, in a regular grid format, or proportionally to represent the natural diversity of soil environments (Figures 1B, 2). This pattern held true, even if a geographic subset of the dataset was analyzed (Supplementary Appendix S3 and Supplementary Figure S2). The dominance of a relatively small number of bacterial taxa is similarly reported at the global scale (Delgado-Baquerizo et al., 2018b). Variation in sampling design and intensity that is commonly observed among regional datasets may therefore not be an important consideration for capturing common and dominant bacterial taxa at a global scale.
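For readers who want to reproduce this kind of subsampling analysis on their own OTU tables, a minimal sketch is given below. It assumes a hypothetical samples-by-OTUs count matrix and uses the same 0.001%-of-total-reads threshold to separate common from rare OTUs; it illustrates the permutation approach only and is not the exact pipeline used for the French dataset.

```python
import numpy as np

def accumulation_curve(counts, n_permutations=100, rare_threshold=1e-5, rng=None):
    """Accumulate detected common and rare OTUs as samples are added in random order.

    counts: 2-D array (samples x OTUs) of read counts.
    rare_threshold: OTUs below this fraction of total reads are 'rare' (0.001% = 1e-5).
    Returns the mean number of common and rare OTUs detected after 1..n samples.
    """
    rng = np.random.default_rng(rng)
    n_samples, n_otus = counts.shape

    # Classify OTUs once, from the full dataset, by relative abundance.
    rel_abundance = counts.sum(axis=0) / counts.sum()
    is_common = rel_abundance >= rare_threshold

    common_curve = np.zeros((n_permutations, n_samples))
    rare_curve = np.zeros((n_permutations, n_samples))

    for p in range(n_permutations):
        order = rng.permutation(n_samples)
        seen = np.zeros(n_otus, dtype=bool)  # OTUs detected so far
        for i, sample in enumerate(order):
            seen |= counts[sample] > 0
            common_curve[p, i] = np.sum(seen & is_common)
            rare_curve[p, i] = np.sum(seen & ~is_common)

    return common_curve.mean(axis=0), rare_curve.mean(axis=0)

# Toy example: 50 samples, 2000 OTUs with a skewed abundance distribution.
rng = np.random.default_rng(0)
toy_counts = rng.poisson(rng.lognormal(0, 2, size=2000), size=(50, 2000))
common_mean, rare_mean = accumulation_curve(toy_counts, n_permutations=100, rng=1)
print("Common OTUs after 2 vs 50 samples:", int(common_mean[1]), "->", int(common_mean[-1]))
print("Rare OTUs after 2 vs 50 samples:", int(rare_mean[1]), "->", int(rare_mean[-1]))
```

On real data, the common-OTU curve typically plateaus quickly while the rare-OTU curve keeps climbing, mirroring the pattern described above.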
While intensive sampling of local environments may not be required for the detection of many common taxa, sampling intensity significantly impacts community diversity measures, largely caused by the increased detection of rare OTUs with greater sampling effort (Figure 1A). Taxa may be rare due to their low local abundance, habitat specificity or restricted geographic spread, but can have a disproportionately large influence on ecosystem processes (Jousset et al., 2017;Karimi et al., 2018). Conditionally rare bacterial taxa (Shade et al., 2014) may be more metabolically active, even when present in low abundance (Dimitriu et al., 2010), and their vast genetic resource has been shown to enhance the functionality of more abundant microbes, via the horizontal transfer of beneficial genes (Low-Décarie et al., 2015). The number of rare OTUs detected in the French dataset did not appear to be influenced by sampling design (i.e., grid vs. random sampling) but did increase with increasing sample numbers (Figure 1A). Even with the inclusion of all available samples, a complete plateau in the increase of rare OTUs detected was not reached, although the number of new OTUs detected did decrease with increasing sample numbers (Figure 1A). This suggests that decisions about sampling intensity within national biodiversity monitoring are crucial for generating datasets that will be globally comparable where the distributions of rare taxa are of interest.
FIGURE 1 | (A) Taxa accumulation curve showing the OTUs detected by the random (lines) and grid (points) sampling approaches. The lines indicate the number of rare (<0.001% of total reads; solid line) and common (>0.001% of total reads; dashed line) OTUs detected with increased random sampling; 100 permutations were used, with sites added in a random order, to calculate average values. Standard deviations are indicated in gray. Red points indicate the number of rare (hollow points) and common (filled points) OTUs detected with decreasing grid size (and therefore increased sampling intensity). (B) The number of unique and shared OTUs detected by the different sub-sampling approaches.
FIGURE 2 | The number of common (>0.001% of total reads) and rare (<0.001% of total reads) OTUs captured by different sampling approaches; All samples: locations of samples comprising the complete dataset which we subsampled, containing 1798 samples collected on a 16 km grid (Terrat et al., 2017); Representative: sampling described by Orgiazzi et al. (2018) to capture a range of different land uses, soil properties and climatic conditions (n = 144); Random: 144 samples randomly selected from the complete dataset (100 permutations were used and the average ± standard deviation is given); Grid: 151 samples collected in an approximate grid format.
Perhaps the most comprehensive and coordinated effort to catalog microbial diversity across a range of environments around the globe is the Earth Microbiome Project (EMP; Gilbert et al., 2014), highlighting the substantial progress that can be made through cooperative research. However, adding to this the knowledge that the spatial scaling of variation in microbial community structure differs widely across spatial scales (Constancias et al., 2015), substantial efforts must still be made to further reduce global gaps in soil bacterial diversity data. Sampling to proportionally represent the relative diversity of different soil environments, or even to over-represent rare environments or conditions, may be required for valid statistical analysis at global scales, since different environmental gradients dominate community assembly across different biomes and spatial scales. For example, soil pH is often strongly correlated with bacterial diversity, to the extent that it can be used to generate global predictions of bacterial diversity. However, there are certain biomes where this is not true, such as grasslands, where aridity instead drives bacterial diversity (Maestre et al., 2015). Such findings highlight the importance of conducting surveys of microbial life appropriate for data analysis at multiple scales, because understanding of what affects bacterial community composition at small scales cannot necessarily be extrapolated to make reliable conclusions at larger scales. Grid-based sampling designs are the most statistically powerful way to achieve this, provided that the resolution of the grid is finer than the scale of the processes of interest (Hirzel and Guisan, 2002;Mallarino and Wittry, 2004;Nanni et al., 2011).
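To make the contrast between sampling designs concrete, the sketch below selects a random subset and an approximate regular-grid subset from a set of hypothetical sample coordinates; the grid selection simply keeps the sample closest to each node of a coarser grid. This is an illustrative simplification under assumed coordinates, not the selection procedure used for the French dataset.

```python
import numpy as np

def random_subsample(coords, n, rng=None):
    """Pick n sample indices uniformly at random."""
    rng = np.random.default_rng(rng)
    return rng.choice(len(coords), size=n, replace=False)

def grid_subsample(coords, spacing):
    """Keep the sample closest to each node of a regular grid with the given spacing (km)."""
    x_nodes = np.arange(coords[:, 0].min(), coords[:, 0].max() + spacing, spacing)
    y_nodes = np.arange(coords[:, 1].min(), coords[:, 1].max() + spacing, spacing)
    chosen = set()
    for x in x_nodes:
        for y in y_nodes:
            dists = np.hypot(coords[:, 0] - x, coords[:, 1] - y)
            nearest = int(np.argmin(dists))
            if dists[nearest] <= spacing / 2:  # only keep nodes that fall near a real sample
                chosen.add(nearest)
    return np.array(sorted(chosen))

# Hypothetical coordinates (km) for 1798 samples scattered over a 1000 x 1000 km region.
rng = np.random.default_rng(42)
coords = rng.uniform(0, 1000, size=(1798, 2))

random_idx = random_subsample(coords, n=144, rng=0)
grid_idx = grid_subsample(coords, spacing=80)  # coarser grid -> fewer, evenly spread samples
print(f"Random subsample: {len(random_idx)} samples; grid subsample: {len(grid_idx)} samples")
```

Varying the grid spacing in such a sketch gives an intuition for why grid resolution must be finer than the spatial scale of the processes being studied.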
Increasing the size of national and international soil microbial datasets can be achieved by increased cooperation among research facilities, and perhaps even between researchers and the general public. Taking a leaf out of the macro-organism ecologist's handbook, pursuing a "citizen science" approach is considered particularly useful for the collection of samples from more remote areas (Bahls, 2014), although consistent and well documented sample treatments must be ensured to allow accurate comparability and reproducibility (Dickie et al., 2018). There are already many example studies where the public has been engaged to help collect data for ecological surveys of birds, trees and tropical reef species (Mckenzie et al., 2007;Ockendon et al., 2009;Roelfsema et al., 2016). Arguably, collecting and transporting soil samples requires much less time and expertise than identifying and monitoring animals and plants. Since public engagement in macro-organism surveys has been shown to be a successful biodiversity monitoring tool (Devictor et al., 2010), and is increasingly being utilized for soil microbial surveys (e.g., microblitz, www.microblitz.com.au), this is an avenue worth exploring to increase global coverage of bacterial community data.
Ensuring better sampling designs and global coverage alone will not be sufficient; ecologists are increasingly interested in understanding the factors affecting the present-day distributions of organisms. This requires microbial DNA to be collected in tandem with a suite of relevant physicochemical variables; however, a shortcoming of many of the large-scale studies published to date is the limited range of metadata collected, as the high costs associated with exhaustive soil analyses remain a major obstacle. A notable workaround for this problem is where microbial surveys are partnered with soil physicochemical monitoring programs (Dequiedt et al., 2011;Griffiths et al., 2011;Ranjard et al., 2013;Hermans et al., 2017) which include a comprehensive list of soil nutrients, physical characteristics and heavy metal concentrations. The benefits of collecting biodiversity data alongside traditional large-scale soil monitoring programs are increasingly being recognized (Orgiazzi et al., 2018). As environmental monitoring agencies become more aware of the utility of microbial data for reporting on the health and production potential of diverse environments, existing monitoring programs are increasingly likely to be adapted to provide valuable support for microbiological investigations, helping to identify key correlates associated with changes in community composition and taxon presence across diverse spatial and temporal scales. Microbial ecologists have tended to describe changes in composition and diversity from DNA sequence data, often without naming individual taxa, or even groups of bacteria. Arguably, this approach has inhibited our understanding of the natural history of bacteria (Martiny and Walters, 2018). However, unlike DNA fingerprinting methods, which previously dominated large-scale molecular assessments of microbial community diversity (Gobet et al., 2014), next-generation sequencing (NGS) allows taxa to now be identified from their unique DNA barcodes and grouped at various taxonomic levels. It is essential we go beyond describing general changes in microbial community composition, to looking at individual taxa, or phylogenetic or functional groups of taxa, in more detail, in the same way that traditional ecologists studying plants and animals characterize biodiversity by describing and naming the species present (Fierer, 2017;Martiny and Walters, 2018). Encouragingly, with more paired microbial and metadata being collected, NGS technologies are beginning to be used to assess not only taxonomic data, but also to make predictions of microbial functional community attributes. The expense associated with adequately sequencing complex soil metagenomes using shotgun DNA approaches means that although microbial functional diversity has been assessed under different biomes and land uses (Fierer et al., 2012;Mendes et al., 2015), coordinated efforts to collect metagenome data from large-scale soil datasets remain extremely limited. Nevertheless, scientists can capitalize on the increased availability of soil taxonomic and associated metadata to make informed predictions of the biogeography of microbial taxa and traits. As the spatial extent and grain of soil microbial community surveys increase, the relationships between soil variables, such as pH and concentrations of nutrients or potential pollutants, and the distribution and relative abundance of microbial taxa are becoming better understood (Hermans et al., 2017;Karimi et al., 2018).
This allows ever stronger predictions to be made regarding the environments where specific organisms or groups of organisms might be found (Delgado-Baquerizo et al., 2018a), even for organisms that are yet to be cultured or are only known from their 16S rRNA sequences.
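As a simple illustration of how paired metadata can be used to identify correlates of diversity, the sketch below relates soil pH to OTU richness; the data are simulated and the variable names are placeholders, not values from any of the cited surveys.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated metadata: soil pH for 200 hypothetical samples.
soil_ph = rng.uniform(3.5, 8.5, size=200)

# Simulated response: richness peaking near neutral pH, plus noise
# (a hump-shaped pattern often reported for soil bacteria).
otu_richness = 2000 - 150 * (soil_ph - 7.0) ** 2 + rng.normal(0, 100, size=200)

# Spearman's rank correlation tests only for a monotonic trend; for a
# hump-shaped relationship it may be weak, so a quadratic fit is also shown.
rho, p_value = stats.spearmanr(soil_ph, otu_richness)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")

# Quadratic regression captures the peaked pH-richness relationship directly.
coeffs = np.polyfit(soil_ph, otu_richness, deg=2)
print("Quadratic fit coefficients (a, b, c):", np.round(coeffs, 1))
```

The same pattern-matching logic extends to any paired variable (nutrients, heavy metals, aridity) once the metadata are collected alongside the sequence data.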
Rapidly improving molecular methods mean we also need to consider how samples collected today can be used with technology that may not yet be available, or financially achievable, for use in large-scale biodiversity monitoring methods. Technological changes are very likely to occur in how extracted DNA (or RNA) is analyzed, but improvements and changes may also occur in how raw sample material is processed, for example to extract genetic material. DNA extraction methods have repeatedly been shown to exhibit biases and limitations for different sample and organism types (Luna et al., 2006;Wagner Mackenzie et al., 2015;Hermans et al., 2018). Future improvements to current DNA extraction techniques, or the development of new methods, could lead to desires to re-analyze previous samples to obtain more accurate representations of the microbial communities that were present. It has previously been shown that bacterial DNA can be extracted from dried soil samples over a century after the soil was stored (Clark and Hirsch, 2008), and that DNA can be maintained for months at −80 °C (Gorokhova, 2005). However, more research needs to be conducted to determine the effect of time and storage conditions on microbial community composition in raw sample material, and the degradation of DNA over years, rather than months. Following current best-practice storage methods for the large sample numbers that will be generated by national and global surveys of microbial diversity is essential. This will provide not only a 'snapshot in time' of the current biodiversity of soil bacteria globally, but also allow the application of future biodiversity monitoring methods without repeating the labor-intensive and expensive sampling process.
Significant progress has been made in the last decade to catalog microbial diversity across the globe, yet the lack of systematic approaches for sampling across national and global scales is leading to unbalanced datasets which fail to cover all of the planet's biomes. Greater coordination among researchers, and collaboration with soil monitoring agencies and the general public, could facilitate the collection of more spatially extensive and intensive datasets. Extensive sampling of soils across the globe, to identify the microbial taxa residing within them and their functions, is essential to increase our understanding of natural variation in these communities, the effect that human land use has on microorganisms, and the impact that climatic change may have on future ecosystem function.
AUTHOR CONTRIBUTIONS
SH analyzed the data. All authors contributed to the writing of the manuscript.
|
2019-08-07T13:05:30.035Z
|
2019-08-07T00:00:00.000
|
{
"year": 2019,
"sha1": "85d899fed9f749422aa17f3677f5fcf15fa8e381",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.01820/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85d899fed9f749422aa17f3677f5fcf15fa8e381",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
}
|
173985709
|
pes2o/s2orc
|
v3-fos-license
|
Learning from Japan for Possible Improvement in Existing Disaster Risk Management System of Nepal
Nepal and Japan are both multi-hazard-prone countries with experience of devastating disasters. It is difficult, if not impossible, to stop natural hazard events at the source. However, the impact can be reduced significantly by preventing them from turning into disasters. The impact of disasters can vary depending on the capacity to handle the situation, and the capacity depends on the level of preparedness and mitigation measures taken in advance. Japan has set an example for the rest of the world when it comes to Disaster Risk Management (DRM). Recovery and reconstruction after disasters are not just about redeveloping the area as it was earlier; they have to be taken as an opportunity for developing it better than before, which is called "Build Back Better". This concept was raised by the Japanese Government at the UN World Conference in Sendai in 2015 [1]. Dynamic, evolutionary and proactive DRM policy and plans with innovation, the use of science and technology to find solutions, effective implementation of the policy and plans, coupled with a culture of safety among the citizens and the never-give-up spirit of "Nana KarobiYa Oki" (Seven times fall down, Eight times get up), are the unique features that every country should learn from Japan's DRM mechanism. This paper is an effort to draw on good practices from Japan to improve the DRM system in Nepal. It is the product of three months of intensive research at the University of Tokyo under a PhD project that consisted of reviewing existing DRM documents and several interactive meetings with stakeholders in Japan.
Background
Hundreds of thousands of lives have been lost globally due to natural hazard events, and losses amounting to US$1.5 trillion have been incurred during the last decade alone; this trend is continuing as exposure in hazard-prone countries grows more rapidly than vulnerability is reduced [2]. Asia is considered the most vulnerable continent, accounting for 83 percent of the population affected by disasters globally during 1991-2000. Further, in 2015, almost 32,550 people were killed, more than 108 million people were affected and assets worth US$70.3 billion were damaged by 574 disasters. Of the disaster-related deaths globally, 67 percent were in Asia [3].
Japan, though a small country covering just 0.25 percent of the earth's area, is highly prone to different types of natural hazards, mainly torrential rains, typhoons, heavy snowfalls, earthquakes and tsunamis, owing to its geographical, topographical and meteorological features. It is considered particularly prone to earthquakes because of its location at the meeting point of four tectonic plates; about 20% of the world's earthquakes of magnitude 6 or greater have occurred in or around Japan [4]. It is also said that the country has now entered a seismically active period, and experts have estimated that within the next 30-50 years there is a possibility of 4-5 earthquakes of M8 and 40-50 earthquakes of M7 occurring [5]. The country has already gone through many earthquakes with significantly large human and property losses, such as the Great East Japan Earthquake in 2011 and the Kobe Earthquake in 1995. Besides earthquakes, the country is also known as the home of around 110 active volcanoes, as it sits in the Circum-Pacific Volcanic Belt or "Ring of Fire" [6], and also suffers from many other kinds of natural disasters.
The Prime Minister of Japan, H.E. Shinzo Abe, said while addressing the 3rd UN World Conference on Disaster Risk Reduction in 2015 in Sendai, "Japan is a disaster-prone country and has been working hard on disaster risk reduction for a long time" [2]. Japan has experienced many natural disasters, and recognizing this fact, the Government of Japan has paid much attention to disaster risk reduction and emergency management by encouraging the mechanism of self-help, mutual assistance and public support. The Government has continually been undertaking initiatives that constitute "public support", which include measures undertaken before disaster strikes: for example, building embankments and other hard infrastructure measures, as well as soft infrastructure measures, such as conducting drills [7]. It has been considered one of the best examples in the world of disaster risk management.
Nepal, located astride the boundary of two active tectonic plates and having diverse physiography and climatic variation, is prone to multi-hazard events [8].
Nepal is recurrently exposed to landslides, fires, earthquakes, floods, glacier lake outburst floods (GLOF), thunderstorms, avalanches, etc., causing significant loss of property and lives every year, on average two deaths every day [9] [10]. Lack of awareness among people at different levels, lack of timely revision and effective implementation of policy and guidelines, and haphazard urbanization and development activities are the major causes turning these hazards into devastating disasters [11] [12]. Nepal has recently enacted the Disaster Risk Reduction and Management Act (DRR&MA, 2017), which has adopted the recent global trend of focusing on a proactive approach to DRM, and several policies in support of the DRR&MA have been developed. Hence, under the guidance of such acts and policies, Nepal has to develop long-term strategic plans for effective disaster management. In this context, the practices adopted in countries like Japan, which has already set an example in effective DRM, can be taken as guidance for effective DRM in Nepal.
Disaster Risk Management (DRM) Policy Environment in Japan
Globally in recent decades, there have been several initiatives in disaster risk management. National authorities and regional and international stakeholders have made investments in terms of time, expertise and budget to better understand disaster genesis and dynamics. As a result of several global conferences and discussions, and analyses of lessons learned from different disasters, the focus has now largely been shifting from relief and rehabilitation to disaster risk reduction and management [3]. In the context of Japan, the Government has made it a national priority to protect the country's land and to save the lives, livelihoods and property of its people from disasters [4]. Further, it has made significant investments in reducing risk rather than spending more on emergency response activities after disasters; a good example is the allocation of 51 percent of total disaster-related project budgets to "mitigation and preparedness" during 1990-2010 [1]. Japan has three levels of government, namely the national government, prefectures, and municipalities. At each level the respective heads have full responsibility in their jurisdiction. Accordingly, comprehensive disaster prevention plans have been developed defining the roles and responsibilities to be performed at different stages. The national council on disaster management, led by the Prime Minister, has been established under the Basic Act on Disaster Management, which also assigns responsibilities to the ministers, heads of public institutions and experts. The main role of the council is to formulate and promote major disaster management policies, including the Basic Plan of Disaster Management.
Though Japan has quite a long history of disaster management policy development, the Disaster Countermeasures Basic Act was a turning point for strengthening the disaster management system when it came into effect in 1961, after the 1959 Ise-wan Typhoon killed 5098 people. This act clearly defines the roles and responsibilities of the federal government and develops a cumulative and organized disaster prevention structure [4]. It is important to note that Japan has a good practice of evaluating response situations during each disaster; the lessons learned from those disasters are analyzed and reflected in the organizational mechanism at different levels to develop required tools and materials, and accordingly amendments/revisions are made in existing policies and guidelines [13]. Hence, the act and policies are revised as per the context, based on the current need and situation. For instance, since its first enactment in 1961, the Basic Act on Disaster Management has been amended six times as of 2016 [7]. Realizing that some 80 percent of total deaths in the 1995 Kobe Earthquake were due to the collapse of buildings with low earthquake safety, constructed before the 1981 building code came into effect [6], the Act on Promotion of the Earthquake-proof Retrofit of Buildings was enacted [7]. The Tohoku Earthquake and Tsunami (also known as the Great East Japan Earthquake and Tsunami) taught the lesson that, in big disasters, an Extreme Disaster Management Headquarters is needed to grasp the whole picture of the damage and to take action immediately without waiting for requests for assistance from the affected areas [14]. Several policies and acts were developed and amended after the Tohoku Earthquake, such as the Act on Promotion of Tsunami Countermeasures 2011; the Act on Development of Areas Resilient to Tsunami Disasters 2011; the Amendment of the Disaster Countermeasures Basic Act; the Act for Establishment of the Nuclear Regulation Authority, 2012; the Act on Reconstruction from Large-Scale Disasters; and the Amendment of the Act on Promotion of the Earthquake-proof Retrofit of Buildings [4]. Similarly, after the 2016 Kumamoto Earthquake, revisions were made mainly to the Basic Plan for Disaster Risk Reduction, the Guide to Preparing Detailed and Practical Evacuation Plans in Case of Volcanic Eruption and the Guidelines for Evacuation Recommendations [7].
Disaster Management Planning
Acts and legal provisions in Japan are implemented and continuously revised based on the experiences and lessons from each and every disaster. The result is a gradual and sustained enhancement of resilience over the long term. As stipulated in the Basic Act on Disaster Management, there are four levels of basic plans for disaster management in Japan, namely the National Basic Plan for Disaster Risk Reduction, the Prefecture Basic Plan for Disaster Risk Reduction, the Municipality Basic Plan for Disaster Risk Reduction and the Community Disaster Risk Reduction Plan (Figure 1). The National Basic Plan was developed in 1963 and revised several times, including the latest revision in 2017, mainly based on the learnings from disasters and changes in policies and government structure. This plan was entirely revised in 1995 after the experiences from the 1995 Earthquake Disaster. This plan is considered the foundation for the country's disaster management measures [15]. The National Basic Disaster Risk Reduction Plan has to be approved by the National Council on Disaster Management. Established in the Cabinet Office based on the Disaster Countermeasures Basic Act, it is chaired by the Prime Minister and comprises all members of the Cabinet, heads of major public corporations and experts. It deals with policy-related issues involving all the ministries of the Cabinet. All 47 prefectures have their own basic plans for disaster risk reduction to fulfill their responsibilities. The act also promotes the participation of stakeholders in disaster risk reduction efforts and activities, including encouraging them to take their own preparedness initiatives to cope with disasters and mitigate the adverse effects [15].
DRM Capacity Development and Awareness Activities
The Government has strategically started different programs to promote understanding and enhance capacity among the DRM stakeholders at different levels.
In this connection, the Cabinet Office has initiated a "Program for Developing Disaster Management Specialists" for developing people who can promptly and appropriately support disaster management, including disaster response, and can form a network between national and local authorities. It has been providing trainings on different themes at different locations based on the respective context. The Central Disaster Management Council of the Government sets out basic guidelines for drill exercises at national and local levels and outlines the "Disaster Reduction Drill Plan", stipulating an overview of drills and exercises implemented by the government. To enhance the disaster resilience of communities and to reduce disaster risk involving all stakeholders, the Government has declared 1 September as "Disaster Prevention Day". Every year, nation-wide, the whole week of September 1 is observed as "Disaster Prevention Week", with different disaster risk reduction and awareness activities conducted.
Similarly, November 5 has been designated as "Tsunami Disaster Prevention Day" and is observed nation-wide with tsunami awareness and risk reduction activities. Based on the learnings from past disasters, schools and communities are provided with awareness, education and skill development trainings through different means, such as publications, publication materials, and online and hands-on trainings. Considering the spirit of self-help and mutual assistance, and to strengthen the capacity of communities, communities are encouraged to develop Community Disaster Management Plans including disaster preparedness, risk reduction and capacity enhancement activities. Realizing the important role of volunteers during disasters, as a large number of volunteers assisted in the earthquake disaster in 1995, the Government has declared 17 January as the "Disaster Reduction and Volunteer Day", and the whole week of 15 to 21 January as "Disaster Reduction Volunteer Week". The events are organized throughout the country in coordination and cooperation with national and local authorities, local communities and other stakeholders [15]. However, as shared by one of the active volunteers during an interaction for this research, there are now problems of dwindling and ageing membership because younger generations are engaged in other activities, such as full-time jobs and education. The Government assists by circulating "Business Continuity Guidelines" to private enterprises and companies for developing Business Continuity Plans (BCPs) to ensure continued operation of business, safety and security of employees, and reduced disaster risk [4].
Major Disasters in Japan and Countermeasures
Japan has been experiencing fatalities and loss of property repeatedly due to a variety of hazards through the ages. Besides earthquakes, Japan faces a major threat from volcanoes, as it sits in the "Ring of Fire"; Japan is home to almost 10% of the active volcanoes on earth [4]. Snow-related hazards are also common; for example, during 2010-2012, more than 100 deaths were counted each winter. Similarly, the country is prone to different types of water- and wind-related hazards. Among them, floods, landslides, sedimentation, tidal waves and storms, typhoons and torrential rains are common.
Disasters in Japan, for the purpose of disaster management, can be broadly categorized into two types: 1) natural hazard disasters and 2) accident disasters [4]. Natural hazard disasters include earthquake, storm and flood, volcano and snow-related hazards, whereas accident disasters include those related to maritime, aviation, railroad, road, nuclear, hazardous materials, large-scale fire and forest fire. Japan has developed a well-structured disaster management system. A Minister of State for Disaster Management is appointed to the Cabinet, and the Disaster Management Bureau formulates basic policy on disaster management and is also responsible for overall coordination of the response to large-scale disasters.
Generally, in the case of any disaster, the respective authorities will manage the situation, depending on its scale and impact. If the scale of the disaster is small, it will be addressed by the respective municipality. If it is beyond the capacity of the municipality, the prefecture will be requested to assist. The central government gets involved as well if the capacity of the prefecture falls short of the needs.
However, if the scale of the disaster is large enough, the central government does not wait for a call from lower authorities. Instead, emergency measures are called into action immediately. For example, during the 2014 Hiroshima landslides, an Onsite Disaster Management Headquarters was set up, headed by a State Minister of the Cabinet Office. In the case of a large-scale disaster, a meeting is organized within 30 minutes and an Extreme Disaster Management Headquarters is established, led by the Prime Minister [14].
Having experienced many disasters with great loss in terms of human lives and property, Japan's legislation for disaster management addresses all phases of disasters, from damage mitigation, preparedness, early warning, damage assessment, response and recovery to reconstruction, with defined roles and responsibilities for various agencies and departments. All relevant stakeholders, including the private sector, get involved in implementing various countermeasures [4]. The initiatives taken and the commitments made by the Government of Japan in mobilizing different local, national and international stakeholders to reduce disaster loss and damage are highly commendable [17]. Countermeasures initiated for some disasters are discussed in the following sections.
Countermeasures for Earthquake
Studies have pointed out that Japan can be struck by big earthquakes in the near future, mainly in areas such as the Nankai Trough, the Japan and Chishima Trenches, and directly below the Tokyo and Kinki regions [4]. The major trenches and likely earthquake zones are presented in Figure 2.
Considering this fact, the government has designated the potential areas and urged government authorities and other concerned stakeholders, including the private sector, to implement disaster reduction measures in accordance with the laws and regulations. The Government is developing a plan to expedite the countermeasures taken by administrative entities and the private sector. The threatened areas are being prepared for the expected scenario earthquakes suggested by experts. For instance, countermeasures for a Tokyo Metropolitan Inland Earthquake have included enactment of the Act on Special Measures for Tokyo Metropolitan Inland Earthquake (November 2013), designation of areas in need of urgent measures (2014), and formulation of a Business Continuity Plan by the central government.
There are 4377 seismic observation points set up throughout the country. The Japan Meteorological Agency (JMA) has the capability to issue earthquake early warning information based on data from these observation points.
Countermeasures for Volcano
The Japanese islands are home to 110 active volcanoes (Figure 3). Volcanoes, once they erupt, give very little time for evacuation, and Japan has experienced heavy damage from volcanoes in the past. The Government has paid attention to accurate monitoring/observation and the timely dissemination of appropriate information for evacuation before and during eruptions. JMA has a network monitoring 47 volcanoes 24 hours a day for issuing eruption warnings [4].
In accordance with the "Guideline for Disaster Management System Concerning Evacuation in the Event of Volcano Eruption (2008)" and the "Recommendations for Countermeasures against Large-scale Volcano Disaster (2013)", several actions have been taken, such as the establishment of Volcano Disaster Management Councils; a wide-area coordinating framework consisting of various volcano-related government agencies; preparation of volcano hazard maps for different scenarios; development of evacuation plans, routes and methods; and establishment of a working group for the promotion of volcano disaster prevention.
Countermeasures for Tsunami
Having long and complex coastlines, Japan remains under imminent threat of earthquake-induced tsunamis. It has experienced great loss of lives and property in the past (1896, 1933, 1944, 1946, 1960, 1968, 1970, 1983, 1993 and 2011).
The Great East Japan Earthquake and subsequent Tsunami in March 2011 killed more than 18,000 people and caused heavy loss of property in Japan.There have been several efforts made as the countermeasures for tsunami risk reduction.
The tsunami warning service, which consists of 300 sensors, was set up in 1952 [18]. JMA issues tsunami warnings within 2 to 3 minutes after an earthquake and subsequently gives information about the possible wave height and arrival time at the respective locations. A network has been developed to transmit such information to the concerned authorities, stakeholders and residents in a timely manner.
Countermeasures for Storm and Flood
It is said that, in Japan, one-half of the population is concentrated in possible inundation areas, which account for about 10% of the national land [4].
Countermeasures for Snow Disasters
Mainly in winter, cold winds blowing from Siberia meet the warm current flowing up the coast from the south, bringing heavy snowfalls to Japan.
In the winter of 2006, 152 deaths were reported, and more than 100 snow related incidents were reported during 2010-2012 [4].
Incorporating the lessons from the heavy snowfall of 2013, the policies and guidelines are being reviewed and revised. Based on the Act of Special Measures for Heavy Snowfall Areas, measures for securing traffic and communication and for protecting agricultural land and forestry have been taken. Considering the risk of avalanches, projects for protecting communities and strengthening warning and evacuation systems are implemented. The municipalities have taken the initiative on preventive measures and conduct public awareness programs in their respective areas.
Unique Features of DRM in Japan
Having gone through many devastating disasters, including the 1923 Tokyo Earthquake, the 1995 Kobe (Great Hanshin-Awaji) Earthquake, the 2011 Great East Japan Earthquake and many others, Japan has done more than most, setting an example of an effective disaster risk management system for the world. Some of the unique features of disaster risk management in Japan, identified in the course of this research, are stated here.
Culture of Safety, Part of Life
Though people do not memorize the plans and policies, most Japanese have a good understanding of disaster preparedness measures and have internalized them in their daily life, having learned and applied them as they have grown up. DRM actions have been taken as a part of everybody's life. The Government has facilitated each sector in such a way that everybody is moving in the direction of being, and/or developing, disaster-resilient communities. For instance, special days have been designated by the Government for getting people's attention and encouraging disaster risk reduction initiatives time and again. Senior Government representatives, including the Prime Minister, get seriously involved in such activities, which has made joint work worthwhile for citizens and fostered ownership by all. The Government, local authorities, and even the communities have given high priority to preparedness against disasters. Owing to the culture of individual responsibility, hard work and a sense of humanity in helping each other, no matter the scale of the crisis, Japan has demonstrated recovery in a better way than before the disaster. The mechanism of mutual support is exemplary. As mentioned in the Tokyo Disaster Prevention Plan, about 98 percent of rescued people were rescued by family and neighbors in the 1995 Kobe Earthquake [19].
"Nana KarobiYa Oki" (Seven Times down, Eight Times Up) As the famous Japanese proverb, "Nana KarobiYa Oki" (literally it means "Seven times down, Eight times up").This proverb encourages for never giving up.It has been proven in the context of disaster risk management in Japan.Japan was toppled many times but has never laid down.Instead, it has always bounced back and stood in a better way than it was earlier.The country has taken each disaster as an opportunity for improvement from different aspects.A good example can be taken an efficient response of Japan in the immediate aftermath of so-called triple disaster (earthquake, tsunami and nuclear crises) in 2011 [20].Regarding the post disaster reconstruction approaches, Japan was the first country bringing the concept of "Build-Back Better" in the UN World Conference, Sendai in 2015 [1].
Science and Technology Based Solutions
Saving citizens' lives and property is the national priority in Japan. The country has made significant investments in disaster prevention and mitigation rather than focusing only on response activities. Experts and scientists working in different institutions are involved in disaster risk reduction activities and given responsibility as members of the Central Disaster Management Council or of specialist groups. The research outcomes have informed policy formulation, planning and the development of practical solutions/countermeasures for DRM at various levels. Japan has developed the world's most sophisticated earthquake early warning system, as well as early warning systems for many other hazards, including heavy rainfall, volcanic eruption, tsunami and snowfall, covering the whole territory of the country. For instance, the Urgent Earthquake Detection and Alarm System (UrEDAS), used by the Japan Railway (JR) Group, shuts off the power supply system of running Shinkansen and conventional rail services automatically when preliminary earthquake tremors are detected and deemed likely to interfere with rail services [3].
Dynamic, Evolutionary and Proactive DRM Policy Environment
The Disaster Countermeasures Basic Act, 1961, is the cornerstone of the disaster management policy environment in Japan. It sets the foundation for DRM in Japan. Based on this act, several plans and policies have been formulated concerning disaster prevention and mitigation, emergency response, and recovery and reconstruction. The Basic Plan for Disaster Risk Reduction, developed under the Disaster Countermeasures Basic Act, sets out comprehensive and long-term plans for DRM in Japan [3]. Japan has a good practice of evaluating each disaster and analyzing the gaps; based on the lessons learned, the existing policies and plans are revised and, if required, new plans and policies are developed. For instance, the Disaster Countermeasures Basic Act has been revised several times since its enactment in 1961, and several other policies and plans, such as the Basic Disaster Management Plan, have been revised and developed as needed after different disasters. There are policies and guidelines for every sector, guiding every government authority and NPO in disaster risk reduction activities without confusion, which has helped all stakeholders, including the private sector, to take DRM initiatives on their own [7].
Take-Home from Japan for Improved Disaster Risk Management in Nepal
Having conducted several interactions and observation visits and reviewed the literature on DRM practices in Japan, the authors are highly encouraged to think about how Nepal can benefit from the Japanese experience and improve the current status of disaster risk management in Nepal. In particular, at a time when Nepal has recently transformed from a centralized unitary system to a federal set-up of government, there is a fresh opportunity to define the responsibilities of the national and local governments and other stakeholders and to improve DRM policy and guidelines for developing a disaster-resilient country.
Nepal, like Japan, is one of the most multi-hazard-prone countries in the world, largely owing to its diverse physiography, active seismicity, inadequate interventions for disaster risk reduction and its socio-economic vulnerabilities. A landlocked country, Nepal does not experience oceanic disasters or volcanoes. However, apart from earthquakes, fires (both domestic and forest), floods, landslides, Glacial Lake Outburst Floods (GLOF), droughts and avalanches are quite common, causing significant loss of lives and property every year. Earthquakes, mainly, have devastated the country many times, affecting its economy and development negatively and disrupting the daily life of citizens. In this context, as the DRM mechanism adopted in Japan has been considered one of the successful examples in the world, the learnings from Japan might be helpful for overcoming these challenges and improving the DRM mechanism as a whole in Nepal.
The mechanism adopted in every aspect of DRM in Japan may not be exactly replicable in the context of Nepal, mainly for reasons such as a weaker economy, geographical hardship, lack of connectivity by roads and infrastructure, and differences in political system and technological awareness, to name a few. However, the following learnings from Japan can be instrumental in moving forward and building a resilient Nepal.
Reliable Early Warning System
Developing reliable early warning systems can alert people in time to take safety measures against the potential risk of hazard events. As an example, during a three-month stay in Tokyo, the principal author experienced tremors three times and was a little panicked every time. However, the people sitting next to him were all okay; they did not even seem to notice what had happened. Later he came to know that his Japanese colleagues were confident because, if a big shaking occurs, they know what to do, and in the case of a big or distant earthquake they expect to receive warnings for evacuation and to follow safety measures. They said that what they experienced was just normal.
Build Back Better
Each disaster should be considered an opportunity for building back better, not just for reconstructing what existed before. This has been achieved after major disasters in Japan, such as the 1995 Kobe (Great Hanshin-Awaji) Earthquake and the 2011 Great East Japan Earthquake and Tsunami. This approach has to be adopted in the context of the ongoing reconstruction activities following the 2015 earthquake in Nepal.
Proactive Approach
Priority should be given to investing in pre-disaster activities (mitigation and preparedness) rather than response-related activities. Giving priority to pre-disaster activities will ultimately reduce the risk significantly, decreasing the investment required for response during and after disasters.
Periodic Review and Update of DRM Policy and Plans
After each disaster, critically analyze the countermeasures taken by all sectors and the practicality of the provisions made in existing policies and plans; then, based on the identified needs, revise and develop the policies, plans and programs.
Sector Specific Plans and Standard Operating Procedures
DRM is a collective effort. Every sector has roles and responsibilities for disaster risk management. Hence, under the guidance of the national act, sector-specific plans, policies, guidelines and Standard Operating Procedures (SOPs) should be developed for effective DRM. The respective sectors should be given the responsibility for reviewing and updating such policies, plans and guidelines.
Culture of Safety
Promote people's understanding of potential disasters and help them to imagine the consequences and their own actions to cope in case of real disasters. An effective awareness activity should achieve a positive change in the perceptions and beliefs of every individual; when every citizen understands the value of safety and integrates it into daily activities as a culture of safety, disaster risk is ultimately reduced.
Promote the mechanism of self-effort and mutual-help for effective disaster risk management.
Motivate and encourage citizens to get involved in DRM activities in different ways (designating special days for different themes is one good example).
Draw on the experiences and real stories from disasters to motivate people to prepare against disasters.
Conclusion
Nepal and Japan are both multi-hazard-prone countries; both have experienced several devastating disasters. Natural hazard events will occur; nobody can stop them at the source, but the level of impact can vary significantly depending on the level of preparedness, mitigation measures and response capacity, and on the overall effectiveness of the DRM mechanism. Due to several factors, the level of risk is many times higher for people living in Nepal than for people living in Japan. For instance, a study carried out by GeoHazard International and the United Nations Centre for Regional Development (UNCRD) shows that a person living in Kathmandu is about 60 times more likely to be killed by an earthquake than a person living in Tokyo [6]. It was in 1982 that the first act addressing disaster-related issues in Nepal, the Natural Calamity Relief Act, was enacted. Since then there have been several efforts in policy development and institutional arrangement for risk reduction and management. However, despite these efforts in formulating disaster management policy and plans, compared to many other countries Nepal remains far behind in reviewing and updating them, which has pushed the country's risk management approach backward and kept it primarily reactive [21].
In this context, Nepal can learn from Japan, which has been considered a leader in readiness [18] and one of the successful examples of disaster risk management in the world. The specific disaster risk reduction measures can differ according to the local context; however, in general the principles are the same and can be adopted anywhere. Japan has reached a proficient level in its DRM system, which includes hazard-specific countermeasures, public awareness, contextual revision of policies and plans and, above all, perseverance in pursuing innovation in coping with disasters. Although Nepal differs from Japan in socio-economic, cultural and geopolitical aspects, these good practices of DRM in Japan can certainly be taken as a guide to develop an effective DRM system in Nepal.
Figure 1 .
Figure 1.Basic plans for disaster management in Japan.Source: Cabinet office Japan (modified).
Figure 2 .
Figure 2. Major trenches and likely earthquake zones.Source: Ministry of Education, Culture, Sports, Science and Technology.
Based on the experiences from the 2011 earthquake and tsunami, the Act on Promotion of Tsunami Countermeasures has been developed, which includes strengthening the tsunami observation system, education and training, and developing required facilities, among others.
Figure 3 .
Figure 3. Distribution of active volcanoes in Japan.Source: Cabinet office Japan, White Paper on Disaster Management in Japan, 2015 (Created by the Cabinet Office from the Japan Meteorological Agency Website).
JMA has an Automated Meteorological Data Acquisition System for observing meteorological phenomena that cause storm and flood disasters. Based on the automated measurements of rainfall, air temperature, wind direction/speed and weather, JMA announces forecasts and warnings for preparing against possible disasters. Based on the Flood Control Act and the Sediment Disaster Prevention Act, 417 rivers are covered by the flood warning system and 1555 rivers are subject to water-level notifications. The municipalities are urged to prepare flood hazard maps and disseminate them among the communities. As of March 2014, 1272 municipalities had published such maps. Furthermore, several measures have been taken, such as the formulation of the Working Group for Studying Comprehensive Countermeasures against Sediment Disasters and the Basic Policies for Metropolitan Area Large-scale Water Hazards [4].
|
2019-05-28T14:02:12.758Z
|
2019-03-28T00:00:00.000
|
{
"year": 2019,
"sha1": "48a0aaca88bbc72af1f443f5bfa522fa5651ac3a",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/pdf/OJER_2019052416115073.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "48a0aaca88bbc72af1f443f5bfa522fa5651ac3a",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
}
|
256858540
|
pes2o/s2orc
|
v3-fos-license
|
Butyrate in Human Milk: Associations with Milk Microbiota, Milk Intake Volume, and Infant Growth
Butyrate in human milk (HM) has been suggested to reduce excessive weight and adiposity gains during infancy. However, HM butyrate's origins, determinants, and its influencing mechanism on weight gain are not completely understood. These were studied in the prospective longitudinal Cambridge Baby Growth and Breastfeeding Study (CBGS-BF), in which infants (n = 59) were exclusively breastfed for at least 6 weeks. Infant growth (birth, 2 weeks, 6 weeks, 3 months, 6 months, and 12 months) and HM butyrate concentrations (2 weeks, 6 weeks, 3 months, and 6 months) were measured. At age 6 weeks, HM intake volume was measured by deuterium-labelled water technique and HM microbiota by 16S sequencing. Cross-sectionally at 6 weeks, HM butyrate was associated with HM microbiota composition (p = 0.036) although no association with the abundance of typical butyrate producers was detected. In longitudinal analyses across all time points, HM butyrate concentrations were overall negatively associated with infant weight and adiposity, and associations were stronger at younger infant ages. HM butyrate concentration was also inversely correlated with HM intake volume, supporting a possible mechanism whereby butyrate might reduce infant growth via appetite regulation and modulation of HM intake.
Introduction
Butyrate is a short-chain fatty acid (SCFA) detectable in human milk (HM) [1], with concentrations ranging from 0.1-0.75 mg/100 mL between studies [1][2][3]. This four-carbon fatty acid is reported to have anti-inflammatory properties and may be protective against obesity and insulin resistance [4]. Animal studies in mice and rats showed that butyrate could improve insulin sensitivity and metabolic dysfunctions caused by exposure to a high-fat diet [5][6][7].
Butyrate is synthesized in the gut by anaerobic bacteria through the fermentation of nondigestible carbohydrates. Compared to other SCFAs, such as propionate and acetate, butyrate is reported as the greatest source of energy used by colonic epithelial cells [8]. In infants, potential origins of butyrate are either oral intake, e.g., through HM [1] or solid food, or production by bacterial fermentation of dietary compounds in the colon, presumably human milk oligosaccharides (HMOs) in infants receiving HM [9]. In contrast
Study Design and Population
The Cambridge Baby Growth and Breastfeeding Study (CBGS-BF, 2015-2019) was a longitudinal prospective cohort aiming to identify factors in HM that may influence the rate of infant growth and hence alter obesity risk later in life. Parameters of HM intake and composition were measured, including HM intake volume using a deuterium-labelled water technique; repeated longitudinal HM collection and composition analyses, including macronutrients, butyrate, and HMOs; and explorative analyses of microbiota in HM and infant guts.
The study design has been reported previously [14] (Supplementary Table S1). In brief, recruitment of mother-infant pairs took place at birth at the Rosie Maternity Hospital, Cambridge, England. Strict inclusion/exclusion criteria were applied: to mothers, including intention to breastfeed from birth until at least 6 weeks of age, nonobese body mass index (BMI) [15] before pregnancy (<30 kg/m2), no significant illness before/during pregnancy, no antibiotic or steroid consumption in the 30 days prior to delivery, and no regular consumption of probiotics; and to infants, including singletons, born at term via vaginal delivery, with birthweight > −1.5 sex- and gestational age-adjusted SDS according to the UK 1990 growth reference [16,17].
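The > −1.5 SDS criterion relies on converting a raw birthweight into a standard deviation score against a growth reference. A minimal sketch of the LMS method commonly used for such references is given below; the L, M, and S values are made-up placeholders for illustration, not the actual UK 1990 reference values.

```python
from math import log

def lms_z_score(value, L, M, S):
    """Convert a measurement to a z-score (SDS) using the LMS method.

    L: Box-Cox power, M: median, S: coefficient of variation
    for the child's sex and (gestational) age.
    """
    if L == 0:
        return log(value / M) / S
    return ((value / M) ** L - 1.0) / (L * S)

# Hypothetical reference values for a term infant (illustrative only).
L_ref, M_ref, S_ref = 0.2, 3.5, 0.14   # median birthweight assumed 3.5 kg

birthweight_kg = 2.9
sds = lms_z_score(birthweight_kg, L_ref, M_ref, S_ref)
print(f"Birthweight SDS = {sds:.2f}; eligible (> -1.5): {sds > -1.5}")
```

With these placeholder values, a 2.9 kg birthweight corresponds to an SDS of roughly −1.3 and would therefore meet the inclusion threshold.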
Research clinic visits were conducted at birth, 2 and 6 weeks, and then 3, 6, and 12 months, mostly at the research facility at the hospital, or at home if not feasible. Each infant clinic visit was scheduled based on the exact age of infants with +8 days tolerance for birth, 2-, and 6-weeks visits, and +28 days for 3 months onwards.
The study was approved by the National Research Ethics Service Cambridgeshire 2 Research Ethics Committee (IRAS No 67546, REC No 11/EE/0068, original date of ethical approval 31 March 2011, date of amendment approval 7 July 2015). All mothers provided informed written consent for themselves and their infants.
Anthropometry
Birth weight was recorded from the medical records postdelivery. At all other time points, infants were weighed naked without nappies and before feeding using a Seca 757 electronic baby scale (Seca Ltd., Hamburg, Germany) to the nearest 1 g. Infant supine length was measured using a Seca 416 infantometer (Seca Ltd., Hamburg, Germany) to the nearest 0.1 cm. To assess subcutaneous fat at various regions and to estimate relative subcutaneous body fat [18], skinfold thickness (SFT) was measured at 4 sites (triceps, subscapular, flank, and quadriceps) in triplicate on the left-hand side of the body using a Holtain Tanner/Whitehouse Skinfold Caliper (Holtain Ltd., Crymych, UK).
All anthropometry and body composition measurements were performed by one of three trained paediatric research nurses.
HM Sample Collection
For butyrate analysis, self-collected postfeed HM samples (usually 10-15 mL) were provided by mothers using hand or electric breast pumps at each visit, from birth/colostrum until 12 months if mothers were still breastfeeding, either exclusively or partially. All samples were kept frozen at −20 • C until the time of analysis.
To study HM microbiota composition, a complete HM expression from one breast was collected at 6 weeks infant age using a breast pump. Mothers cleaned the breast using antiseptic liquid, dried it with sterile paper towels, and discarded a few drops of HM prior to sample collection.
HM Butyrate Analysis
HM samples were defrosted and thoroughly homogenised before assays. The homogenate (400 µL) was mixed with CDCl3 solvent (400 µL) for 10 min prior to centrifugation (30 min, 10,000 rpm). The resulting nonpolar fraction was used to measure SCFAs using ¹H nuclear magnetic resonance (NMR) spectra. Butyrate quantification was conducted as described previously [1].
HM Intake Volume
The volume of HM received by the infant was estimated using the dose-to-the-mother deuterium oxide (²H₂O) turnover technique [19]. When infants were approximately 4 weeks of age, mothers were given deuterium-enriched (tracer) water to drink, which would be incorporated into HM and passed to the infant during breastfeeding. Urine samples were collected from both mothers and infants daily for a period of 2 weeks. ²H enrichment in the urine samples was measured by isotope ratio mass spectrometry as described previously [19].
HM Microbiome Analysis
DNA Extraction from HM Samples
HM samples were thawed at room temperature (RT), and 0.5 mL milk was added to a 2.0 mL screw-cap tube containing 0.5 g of sterilised 0.1 mm zirconia beads and 0.5 mL lysis buffer (500 mM NaCl, 50 mM Tris-HCl (pH 8.0), 50 mM EDTA, 4% SDS). After mixing, 500 µL phenol and 200 µL chloroform were added. The suspension was thoroughly mixed, and a FastPrep instrument (MP Biomedicals, Santa Ana, CA, USA) was used for lysis at 5 m/s for 2 × 40 s at room temperature, with cooling on ice for 1 min in between. Thereafter, samples were centrifuged at 16,000× g for 5 min at 4 °C. The resulting water phase was transferred to a fresh tube; 250 µL phenol and 250 µL chloroform were added and thoroughly mixed, and samples were centrifuged (16,000× g, 5 min, 4 °C); this process was then repeated twice. The resulting water phase was again transferred to a fresh tube, mixed with 250 µL of chloroform, and centrifuged at 16,000× g for 5 min at 4 °C. Next, the final water phase (approximately 500 µL) was transferred to a fresh tube, 2 µL of 10 mg/mL RNase A (Qiagen, diluted in TE buffer) was added, and the mixture was incubated at 37 °C for 15 min. Subsequently, the DNA was purified (mag mini kit, LGC Biosearch Technologies, Middlesex, UK) according to the following protocol: 400 µL of the RNase-treated final water phase was transferred to 1.5 mL tubes containing 800 µL binding buffer and 10 µL magnetic beads and mixed by pipetting. The mixture was shaken (30 min, 700 rpm, RT), and the supernatant was removed using magnetic separation (1 min). The magnetic beads were washed with 200 µL Wash Buffer 1 using gentle mixing, incubated at RT for 5 min, and the supernatant was removed by magnetic separation; the washing procedure was repeated with Wash Buffer 2. The magnetic beads were shaken and dried (10 min, 500 rpm, 55 °C). Next, 63 µL elution buffer was added, and tubes were incubated at 55 °C for 15 min whilst being shaken at 9500 rpm. Then, tubes were placed in the magnetic separator for 3 min, and 50 µL of the elution buffer was transferred to a fresh tube. Finally, DNA was stored at −20 °C until further processing.
Library Preparation and 16S MiSeq Sequencing
For the library PCR step, sample-specific barcoded primers were used, and purified PCR products were shipped to BaseClear BV (Leiden, The Netherlands). PCR products were purified, checked on a Bioanalyzer (Agilent), and quantified. This was followed by multiplexing, clustering, and sequencing on an Illumina MiSeq with the paired-end (2×) 300 bp protocol and indexing. The sequencing run was analysed with the Illumina CASAVA pipeline (v1.8.3) by demultiplexing based on sample-specific barcodes. From the raw sequencing data, low-quality sequence reads, reads containing adaptor sequences, and PhiX control reads were discarded using an in-house filtering protocol, and only "passing filter" reads were selected. On the remaining reads, we performed a quality assessment using the FASTQC quality control tool version 0.10.0 (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/, accessed on 8 February 2022).
Calculation and Statistical Analyses
Weight, length, and BMI values were converted to sex- and age-adjusted standard deviation scores (SDS) using the UK 1990 growth reference at birth and WHO growth standards at later time points (LMS Growth [26]). Internal SDS were calculated for each skinfold thickness site by calculating the residuals from linear regression models, adjusted for sex and age; the mean skinfold SDS across the four sites was then calculated as a measure of infant adiposity.
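As a rough illustration of this two-step adjustment, the sketch below derives an internal SDS for one skinfold site from regression residuals. It is only a minimal example; the DataFrame layout and column names (sex, age_days, and the four *_mm sites) are assumptions, not the study's actual variable names.

```python
# Minimal sketch of internal SDS via regression residuals (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

def internal_sds(df: pd.DataFrame, site: str) -> pd.Series:
    """Residuals of the skinfold on sex and age, standardised to mean 0 / SD 1."""
    resid = smf.ols(f"{site} ~ C(sex) + age_days", data=df).fit().resid
    return (resid - resid.mean()) / resid.std()

# Mean skinfold SDS across the four measured sites as the adiposity summary, e.g.:
# sites = ["triceps_mm", "subscapular_mm", "flank_mm", "quadriceps_mm"]
# df["mean_skinfold_sds"] = pd.concat([internal_sds(df, s) for s in sites], axis=1).mean(axis=1)
```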
Low- vs. high-butyrate groups were arbitrarily defined based on the median of HM butyrate concentrations.
Continuous variables were summarised as mean ± standard deviation or median (interquartile range) and categorical variables as number (%).
Multiple linear regression models were run with HM butyrate concentration at 6 weeks (when all infants were still exclusively breastfed) as the predictor and infant growth gains (expressed as SDS changes) as outcomes, including infant sex, birth weight SDS, gestational age (GA), and postnatal age at visit (in days) as covariates.
To capitalise on the longitudinal growth and macronutrient intake data with appropriate handling of missing values, linear mixed-effects models were used to examine the associations between butyrate concentration with anthropometry and body composition parameters, i.e., weight, height, BMI, and mean skinfolds. The models were adjusted for the same covariates as above, additionally with 0-3 months feeding history (exclusively breastfed vs mixed-fed) with further correction for HM intake volume in sensitivity analysis.
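A minimal sketch of such a longitudinal model is given below, using a random intercept per infant to handle repeated measures. The long-format DataFrame `visits` and its column names are hypothetical placeholders, not the study's actual dataset.

```python
# Sketch of a linear mixed-effects model with a butyrate-by-age interaction,
# assuming one row per infant visit in a hypothetical DataFrame `visits`.
import statsmodels.formula.api as smf

formula = ("weight_sds ~ butyrate_mg_per_100ml * age_days + sex + birthweight_sds "
           "+ gestational_age + feeding_0_3m")

fit_data = visits.dropna(subset=["weight_sds", "butyrate_mg_per_100ml"])
model = smf.mixedlm(formula, data=fit_data, groups="infant_id")  # random intercept per infant
result = model.fit()
print(result.summary())
```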
For HM microbiota analyses, multivariate redundancy analyses (RDAs) were performed on 69 samples by 16S rRNA gene sequencing in Canoco version 5.11 using default settings of the analysis type "Constrained" [27]. Relative abundance values of genera were used as response data, and metadata as explanatory variables. Variation explained by the explanatory variables corresponds to the classical coefficient of determination (R²) and was adjusted for degrees of freedom (for explanatory variables) and the number of cases. Canoco determined RDA significance by permutating (Monte Carlo) the sample status. To assess microbiota composition differences between samples with relatively high vs. low butyrate levels, samples were split into two equal groups based on HM butyrate concentration (with ranges as follows: 0.17-0.75 mg/dL and 0.8-2.65 mg/dL for low- vs. high-butyrate, respectively). A nonparametric Mann-Whitney U test (two-tailed) was applied to all taxa, as implemented in GraphPad Prism 5.01 (San Diego, CA, USA). FDR correction for multiple testing was applied, unless stated otherwise.
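For the per-taxon comparison, a minimal sketch of the median split, Mann-Whitney U tests, and FDR correction is shown below; `abund` (a samples-by-genera relative-abundance table) and `butyrate` (a matching series of HM butyrate concentrations) are hypothetical inputs.

```python
# Sketch: compare each genus between low- and high-butyrate HM samples (median split),
# then apply Benjamini-Hochberg FDR correction across taxa.
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

high = butyrate >= butyrate.median()
pvals = []
for genus in abund.columns:
    _, p = mannwhitneyu(abund.loc[high, genus], abund.loc[~high, genus],
                        alternative="two-sided")
    pvals.append(p)
reject, p_fdr, _, _ = multipletests(pvals, method="fdr_bh")
```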
All analyses were performed using SPSS version 25 (IBM Corp, Armonk, NY, USA) and R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria). In all analyses, p values < 0.1 were considered statistical trends; p values < 0.05 indicated statistical significance.
Results
In total, 71 singleton, full-term, healthy infants were included in the longitudinal models analysing associations between HM butyrate and growth. Of these, 47 had complete measurements of HM intake volume and HM microbiota at 6 weeks of age. All 71 infant participants provided stool samples; two were excluded from microbiome analysis due to having too low a read count after sequencing. Of the 69 infant participants included in the microbiome analysis, 56 had butyrate measurements over time (Supplementary Table S2). Table 1 presents the baseline characteristics of the population analysed, participants with butyrate concentrations at 6 weeks of age, and participants with butyrate intake measurements between 4-6 weeks of age. Among mothers who continued EBF for at least 3 months, HM butyrate concentrations increased with infant age, from 0.76 mg/100 mL at 2 weeks to 1.42 mg/100 mL at 3 months of age (p = 0.027, Table 2).
Characterisation of HM Microbiota and Associations with HM Butyrate Concentrations
Overall, microbiota composition in HM samples collected at 6 weeks comprised typical taxa previously reported in HM, with characteristic high relative abundance of various skin-associated bacteria, such as Staphylococcus, Streptococcus, and Cutibacterium, as well as Acinetobacter and Lactococcus [28,29] (Supplementary Figure S2).
RDA on the genus level showed that HM butyrate concentration was associated with HM microbiota composition (variation explained 1.07%; p = 0.036) (Figure 1a). From the taxa indicated by RDA to be associated with HM butyrate, HM Acinetobacter relative abundance indeed showed a positive trend with HM butyrate concentration. The relative abundance of Acinetobacter was slightly higher in HM samples with relatively high butyrate levels compared to HM samples with relatively low butyrate levels (non-adjusted p = 0.086, Figure 1b). However, no statistically significant association between HM butyrate and any taxon was detected, including typical butyrate-producing bacteria (data not shown).
Associations between HM Butyrate, HM Intake Volume, and Infant Growth
An overall negative relationship between HM butyrate concentration and infant weight (−0.60 ± 0.23 SDS/g/100 mL, p = 0.01) was detected using longitudinal modelling of all repeated HM butyrate concentrations and infant growth from birth to age 12 months (Table 3, Supplementary Figure S3). The relationship weakened with increasing age (p-interaction = 0.02). A similar relationship between HM butyrate and infant BMI was observed (Table 3).
We explored further relationships with HM butyrate concentrations and intakes at age 6 weeks. At 6 weeks, HM butyrate concentration correlated negatively with HM intake volume (Pearson R = −0.29, p = 0.047, Figure 2). After 6 weeks, 14 infants discontinued EBF at 6-12 weeks, 52 discontinued EBF at 3-6 months, and 28 continued EBF for at least 6 months.
Moreover, cross-sectional HM butyrate concentrations at 6 weeks, but not intakes, were inversely associated with weight and height gains from 0-6 weeks (Table 4, Figure 3). When corrected for HM intake volume, the significant associations between HM butyrate levels and early growth gains were no longer detected (data not shown). HM butyrate concentrations and HM butyrate intakes at age 6 weeks were positively associated with growth and adiposity from 6 weeks to 12 months (Table 4), consistent with a weakening negative relationship between HM butyrate and growth with age (Table 3).
Unstandardised estimates ± standard errors are displayed. All models are adjusted for infant sex, gestational age, postnatal age at visit, and birth weight SDS. "6 weeks-12 months" models are additionally adjusted for 0-3 months feeding history (exclusively breastfed vs. mixed-fed). Significant p values are highlighted in bold.
Discussion
Butyrate is one of the SCFAs detected in the gut as a product of bacterial fermentation of undigested dietary fibres [30]. However, the origin and role of butyrate in HM is not yet well understood. This study explored the potential origin of HM butyrate from HM microbiota. Previous studies have demonstrated that local/intestinal butyrate is likely produced by Clostridiales-dominant microbiota such as Faecalibacterium prausnitzii, Eubacterium rectale, or Roseburia intestinalis [31,32]. However, butyrate in HM may likely be produced through a different route, since the RDA in this study found no significant associations between HM butyrate concentration and the relative abundance of previously reported butyrate producers. A trend of association between the relative abundance of Oscillospira (known as a butyrate producer [33,34]) and HM butyrate was observed; however, the other typical butyrate producer, Faecalibacterium [35], displayed no association with HM SCFA and was not detected among the top 20 taxa (Figure 1). As butyrate-producing bacteria in the gut microbial community typically are anaerobes, their presence in HM might not be readily expected. In addition, the microbiota profiling method used in this study only assessed the relative abundance of bacterial groups and did not reflect bacterial metabolic activity represented through bacterial gene expression. Therefore, increased bacterial metabolism might have contributed to the increased butyrate levels instead of changes in bacterial community composition. Alternatively, butyrate might have passed from the maternal circulation into HM, but this was beyond the scope of the current study.
There was a positive trend between butyrate concentrations and non-butyrate-producing bacterial taxa in HM, such as Acinetobacter. A recently published study detected an acetyl-CoA (ACoA) pathway, known as the main butyrate-producing pathway, in Acinetobacter strains [36]. However, to the best of our knowledge, there is no evidence of actual production of butyrate by Acinetobacter, and therefore this genus is not categorized yet as a typical butyrate producer.
Acinetobacter has also been consistently identified in HM microbiota [37], especially in samples that were collected without preceding aseptic cleansing to the breast [38]. In this study, HM sampling for microbiota analysis was performed using a careful aseptic technique, under direct supervision of the paediatric research nurses.
Abundance of Acinetobacter in HM microbiota has been associated with food allergy in infants [39]. This might be related to its influence (as a member of the HM microbiota) on infant gut microbiota development or its interactions with other bacterial groups in the infant intestinal tract [40]. However, Acinetobacter specific effects on overall infant health are not well studied yet. In addition, distinct strains of Acinetobacter have been shown to be susceptible to direct antimicrobial effects of butyrate [41], and it might therefore be surprising that increased butyrate levels in HM were associated with higher abundance of Acinetobacter. However, the balance of antimicrobial effects of HM compounds such as butyrate but also HMOs and Lactoferrin [42,43] affecting Acinetobacter as well as other bacterial species might have resulted in changes in overall HM microbiota composition indirectly leading to a net increase in Acinetobacter abundance.
Another aim of this study was to examine if butyrate in HM influences HM intake and ultimately infant weight and adiposity. In the current analysis, longitudinal models displayed overall negative associations between HM butyrate concentrations and measures of infant weight and adiposity, similar to our previous report [1], which could potentially prevent excess weight gain and obesity during childhood. From both animal and human studies, butyrate and its producing bacteria have been linked to a lower risk of obesity and metabolic complications, including liver fibrosis [6] and insulin resistance [44,45]. Butyrate may also act as an anti-inflammatory mediator in metabolic diseases [4]. In a piglet model, butyrate appeared to influence lipid metabolism by accentuating adipogenesis and lipid accumulation, possibly via glucose uptake upregulation and de novo lipid production [46]. Furthermore, serving as the source of energy for colonocytes, butyrate may affect energy intake and energy balance, as 10% of energy intake may be attributed to dietary residues entering the large intestine [47]. Other additional unknown mechanisms might also underlie the inverse relationship between HM butyrate and infant growth.
Cross-sectionally, when examining butyrate intakes through HM rather than concentrations, the inverse associations between butyrate and early growth became less visible. Since HM butyrate concentration was inversely correlated with HM intake volume, it could be speculated that the associations between butyrate content and early growth were either mediated or confounded by lower HM intake, i.e., the high butyrate concentration in HM might be the reason for low HM intake in some infants. Some recent animal studies [48,49] have reported that acute oral butyrate administration via intragastric gavage rapidly induced satiety and decreased food intake in mice, presumably via modulation of neuropeptide Y neurons via vagal nerves [50]. In addition, other SCFAs such as propionate have also been reported to be key molecules governing the signaling pathway within the gut-brain axis and influencing appetite [51]. Consequently, the interplay between butyrate odor and/or taste in HM and its effect on appetite regulation may potentially lower HM intake and thereby contribute to attenuated early infant weight gain.
In this study, negative associations between HM butyrate and infant weight and BMI seemed to be stronger at early, rather than later, infant ages. Since infants in this study were solely dependent on HM consumption during the first 6 weeks of age, the mechanistic pathway of butyrate on early growth modulation could be mainly speculated to occur via appetite regulation and HM intake reduction.
Moreover, in our longitudinal models (Table 3), HM butyrate was overall negatively associated with growth, but with a positive interaction with age (butyrate*age), indicating a weakening in this negative relationship, and it is likely that positive "catch-up" growth occurs as infants are introduced to other forms of nutrition.
Reflecting on the current setup, the strengths of this study include the estimation of butyrate intake alongside its concentration measurement by quantifying HM intake volume, which is not routinely included in many HM research cohorts. To the best of our knowledge, this is the only study that investigates the links between HM butyrate concentration, HM intake volume, and HM microbiota. The longitudinal design of this study also enabled us to analyse the associations between HM butyrate and subsequent weight and adiposity gains during infancy.
However, although applying strict inclusion criteria allowed us to omit some confounding factors, e.g., mode of delivery and antibiotics use during antenatal period, the relatively small numbers of samples have limited sensitivity analyses in this study. Although a lot of antenatal/maternal information was recorded through the perinatal questionnaire, detailed maternal diet during the breastfeeding period that might influence butyrate levels in HM was not available. Future large longitudinal studies with more complete maternal data are needed to examine the link between HM butyrate, HM intake volume, and growth gains during infancy in more detail.
Conclusions
In this current infant cohort, we observed a weak association between HM butyrate and HM microbiota composition. The lack of relationship between HM butyrate and its typical bacterial taxa producers might suggest alternative sources of butyrate in HM, such as maternal transfer. However, changes in butyrate concentration in HM might in turn have modulated HM microbiota composition through antimicrobial effects. We also observed an overall negative influence of HM butyrate on early infant weight and adiposity gains, which might have potentially been mediated by appetite modulation and decreased HM intake volumes.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15040916/s1, Figure S1: HM butyrate concentration based on EBF duration; Figure S2: Average composition of HM microbiota; Figure S3: Longitudinal weight gain trajectories based on HM butyrate concentration.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data that support the findings of this study are available on request from Professor Ken Ong (ken.ong@mrc-epid.cam.ac.uk).
Hospital, Cambridge, UK; and all laboratory staff at the Department of Paediatrics, University of Cambridge, especially Karen Whitehead and Dianne Wingate. The authors also thank Priya Singh and Michelle Venables (MRC Elsie Widdowson Laboratory) for measuring breastmilk intake volume and all laboratory staff at Vervoort's group, Department of Agrotechnology and Food Sciences, Wageningen University for measuring HM butyrate concentrations. We would also like to thank Eric AF van Tol and Marieke Schoemaker for their involvement during initiation of the study.
Conflicts of Interest: J.A.v.D. and G.G. are current employees of Reckitt/Mead Johnson Nutrition, and MC was also an employee of Mead Johnson Nutrition at the time of the study. No other authors declare a conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Analysis of operating conditions of a cooling installation with carbon dioxide
In response to international regulations, natural refrigerants such as carbon dioxide are more and more frequently used in the refrigeration industry. Due to its thermodynamic properties, R-744 is used in the transcritical cycle as an individual refrigerant. In this article, the high pressure of CO2 and the air temperature values were analysed. The measurements were conducted on the gas cooler side and involved external air temperatures in the summer period between 1 June and 30 September 2018. The "Booster" installation was operated in one of the Polish supermarkets. Correlations required to determine the optimal pressure of carbon dioxide depending on the ambient temperature are presented in the article. The equations presented allow the energy efficiency ratio to be maximized. An optimal high pressure for one of the correlations from the literature was calculated on the basis of the measured ambient temperature. Actual and optimal pressure values of carbon dioxide were compared over the analysed period of time.
Introduction
Climate changes occurring on the earth trigger the increase of interest in environmental protection, which leads, inter alia, to the reduction of greenhouse gas emissions and the elimination of harmful chemical compounds. The cooling industry is subjected to restrictions in the area of environmental care, mainly due to adverse parameters of synthetic refrigerants used in cooling installations.
The selection of refrigerants was undertaken by the European Commission. As the criterion for the withdrawal of individual refrigerants, the ODP indicator was initially adopted, followed by GWP [3]. The Directive of 1 January 2017 prohibits the use of refrigerants with a GWP greater than 150 in new automotive air conditioning installations [6,11]. Due to the elimination of subsequent refrigerants, the analysis of the use of natural refrigerants such as carbon dioxide (R-744) has begun. As compared to synthetically obtained refrigerants, carbon dioxide is environmentally friendly, even though it is considered one of the main greenhouse gases. The GWP value for CO2 is 1. In comparison to, for example, the refrigerant R-404A (GWP = 3300), 3.3 tons of CO2 affect the greenhouse effect in the same way as 1 kilogram of R-404A [2].
Pursuant to the PN EN ISO 378 standard, R-744 is qualified to the group A1, which means that it is non-flammable, non-toxic and safe during operation, unless a concentration threatening one's health or life is exceeded [9]. Despite its very good environmental properties, carbon dioxide, due to its thermodynamic parameters, creates some problems in the design and use of refrigeration systems. Its critical temperature is only 31°C, at a pressure of 73.75 bar [4]. In this work, the actual high pressure values obtained in the gas cooler were analyzed in a booster system with carbon dioxide installed in one of the Polish supermarkets. The analysis was based on the results of measurements in the summer period, from 1 June to 30 September 2018, because during this period, due to the high air temperature, the installation most often worked in supercritical conditions.
Booster systems
As an independent refrigerant, carbon dioxide can operate in both the subcritical and supercritical cycles. In Polish conditions, due to the high temperature of the upper heat source, the cooling cycle with carbon dioxide is a supercritical cycle. If the temperature of the CO2 on the high pressure side is greater than or equal to 31°C, then the installation works in the supercritical zone, at a pressure exceeding 74 bar, and the gas itself does not condense but cools. In practice, this happens when the outside air temperature starts to exceed 20-22°C. In relation to synthetic refrigerants, in the supercritical cycle, carbon dioxide is characterized by a lower energy efficiency, mainly due to the increased work of the compressor. Unfavourable thermodynamic parameters of such a system must be taken into account by system manufacturers, increasing the size of individual system components. With the increase in the size of the carbon dioxide system components, the prices of these systems go up, well above the costs of a classic cycle with a synthetic refrigerant. Despite thermodynamic and economic difficulties, transcritical cycles are successfully used, among others, in supermarkets, most often in booster systems. An example of a system scheme together with the cycle on the log(p)-h diagram is shown in Figure 1. In large supermarkets, gas coolers, through fans, are cooled by air at ambient temperature. Thus, in the summer period, the system often works in a supercritical mode. Therefore, the question arises: what should the pressure of carbon dioxide be in order to maintain the highest possible value of the Energy Efficiency Ratio (EER) or Coefficient of Performance (COP)? Do the actual values of CO2 pressure on the second compression stage obtained under real conditions for supermarket installations coincide with the results of theoretical considerations?
Theoretical high pressure of carbon dioxide
The EER or COP of the CO2 system can be affected by factors such as the design of the gas cooler, the pressure in this exchanger and the temperature of carbon dioxide at the outlet of the gas cooler [7]. The exergy analysis shows that regardless of the operating conditions of the gas cooler, it loses about 30.7% of the total exergy of the system, which is the largest loss in the installation. In comparison, the expansion valve loses about 24.9%, and the compressor 19.5% of the total exergy [13]. When optimizing such a system, special attention should be paid to the high pressure exchanger, in particular to the high pressure of carbon dioxide, which is dependent on the temperature of the upper heat source. The literature includes many publications that describe the optimal CO2 pressure in the gas cooler, depending on the air temperature or the outlet temperature of the carbon dioxide.
The correlations are aimed at achieving the maximum energy efficiency ratio (EER).
Y. T. Ge and S. A. Tassou [8] proposed, for the transcritical cycle, a relationship between the high pressure of carbon dioxide and the value of the energy efficiency ratio depending on the air temperature (Figure 2).
Fig. 2. Variation of energy efficiency ratio with high pressure side and air inlet temperature for a transcritical booster installation [8].
In [8], it was observed that the EER decreases with increasing air temperature and, for each air temperature value, it reaches a different maximum at different values of carbon dioxide pressure. On this basis, an equation describing the optimal pressure (in bars) prevailing in the gas cooler was created depending on the value of the air temperature. The correlation by Ge and Tassou takes the form given in equation (1) [8].
S. Sawalha published a work [12] in which he presented the characteristics of the EER for the supercritical cycle depending on the high pressure and the outlet temperature of carbon dioxide from the high temperature exchanger. An equation describing the optimal pressure of carbon dioxide in the gas cooler was derived. The correlation, which is based on theoretical analysis, takes the form given in equation (2) [12].
Y. Chen and J. Gu in their work [5] dealt with the optimization of pressure in the booster system with an internal heat exchanger (IHX) and proposed a correlation for calculating the optimal pressure of CO2, given in equation (3) [5].
The publication [10] describes the dependence used to determine the optimal pressure of CO2 in a high temperature exchanger. For the supercritical system, the optimal pressure of carbon dioxide depends on two parameters: the outlet temperature of the refrigerant R-744 from the gas cooler and the evaporation temperature. The dependence presented in [10] takes the form given in equation (4).
Measurement and calculation results
The analysis of the high pressure of carbon dioxide was carried out for the refrigeration installation (booster system) of a Warsaw supermarket. Measurements of the high pressure and the air temperature were made during the summer, i.e. from 1 June to 30 September 2018. Data concerning air parameters for the Warsaw area came from the register kept by the Institute of Meteorology and Water Management in Warsaw. The authors have assumed that the optimal pressure of CO2 could be a function of the outside air temperature (according to [8]) without any additional heating, e.g. due to solar radiation. A diagram was drawn showing the distribution of the actual and theoretical pressure values as a function of the analyzed time. The total number of hours is shown on the horizontal axis. The analyzed period lasted 2928 hours. On the pressure axis, the pressure limit was set, above which the installation worked in the supercritical cycle. Formula (1) was used for the theoretical analysis because this correlation takes into account the change in CO2 pressure as a function of the outside temperature and because it can be used for two operating states. The distribution of actual and theoretical pressure is shown in Figure 3.
The actual values of carbon dioxide pressure were significantly higher than the values recommended for maximizing the EER. According to formula (1), the optimal pressure of carbon dioxide in the high temperature exchanger increases linearly above an air temperature of 27°C. As can be seen from the measurements, the optimal CO2 pressure values are definitely lower than the empirical values. It is worth paying attention to the final phase of the diagram. The optimal and actual CO2 pressures reach similar values at air temperatures much lower than 20°C. In this area, the system is in a subcritical state for the majority of the analysed time. Analyzing the diagrams, it was noted that the discrepancy between the measured and optimal values of CO2 pressure depends on the time of day and, above all, on the weather conditions. For high air temperatures, often above 30°C, there were significant differences between the actual and optimal carbon dioxide pressure values after 12 p.m. In these areas of the diagrams, much lower relative humidity was also recorded as compared to the rest of the day. Figure 4 shows that for the warmest day, the supercritical parameters were maintained around the clock (air temperature range from 20 to 32°C). The measured values of carbon dioxide pressure significantly differed from the correlation described by equation (1) for most of the day (16 hours). A similar situation is shown in Figure 7. With low relative humidity and high air temperature, the installation worked for the most part of the day in the supercritical cycle and, similarly to Figure 4, after 12 p.m. a growing disproportion between the optimal and the actual value of CO2 pressure can be observed. Figure 6 clearly shows that for the most humid day, with high humidity and low outside temperature, the actual pressure of carbon dioxide was very close to the optimal values. A similar situation took place on the day with the lowest measured temperature (Figure 5). The actual pressure of carbon dioxide oscillated around the values resulting from the optimization relationship. A disproportion was noticed only around noon, with the lowest humidity recorded on that day. In contrast to the other figures, for this day values of the actual CO2 pressure lower than the optimal values were recorded.
Analyzing Figures 4 and 7, one should consider the actual values of the outside air temperature during the operation of the installation. The question is why the system, despite the outside air temperature being much lower than the critical temperature of CO2, already reached such high values of carbon dioxide pressure at 12 o'clock. The reason for this situation could be additional warming up of the outside air due to solar radiation. Gas coolers are usually mounted outside supermarkets, usually on the roof. Therefore, it could happen that the air is locally, additionally heated by various elements in the immediate vicinity of the exchanger (or the roof cover), which are exposed to solar radiation. The amount of moisture in the air and the occurrence of atmospheric precipitation are also worth considering. The water vapour contained in the air can intensify the heat exchange in air heat exchangers. Figure 8 summarizes the total number of hours for which the difference between the actual and optimal pressure of carbon dioxide was: a) below 1 bar, b) from 1 to 2 bar, c) from 2 to 5 bar, d) from 5 to 10 bar, e) from 10 to 20 bar, f) above 20 bar.
The difference between the optimal and actual pressure of R-744 was less than 1 bar for only 243 hours, which is only 8.3% of the studied period. For 278 hours, this difference was in the range of 1-2 bar. If an acceptable increase or decrease in the high pressure of carbon dioxide of 5 bar were assumed, and this range were treated as the optimal operating range of the system, the installation would work for 1313 hours (45%) within the maximum EER. It is worth noticing that for about 20% of the time, the difference between the actual pressure and the theoretical pressure value was in the range of 10-20 bar, which is a significant deviation.
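A minimal sketch of how such a tally can be computed from the hourly series is given below; the correlation function is only a placeholder standing in for formula (1), which is not reproduced here, and the input series are hypothetical.

```python
# Sketch: bin the hourly |measured - optimal| gas-cooler pressure differences
# into the ranges used in Figure 8. `optimal_pressure` is a placeholder for
# the chosen correlation, e.g. formula (1); it is not defined here.
import numpy as np

def tally_pressure_differences(p_measured_bar, t_ambient_c, optimal_pressure):
    p_opt = np.array([optimal_pressure(t) for t in t_ambient_c])
    diff = np.abs(np.asarray(p_measured_bar) - p_opt)
    edges = [0, 1, 2, 5, 10, 20, np.inf]                      # bar
    counts, _ = np.histogram(diff, bins=edges)
    return dict(zip(["<1", "1-2", "2-5", "5-10", "10-20", ">20"], counts))
```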
According to Figure 2, a deviation of the high pressure of CO2 from the nominal point causes a reduction in the EER. Figure 9 presents the characteristics of the decrease in the EER depending on the difference between the actual and the optimal pressure. The diagram was drawn up on the basis of Figure 2 and the work [8]. An increase in CO2 pressure by 1 bar decreased the EER by approximately 1% in relation to the optimal pressure value. An increase in the high pressure of carbon dioxide by 5 bar (which was assumed to be an acceptable increase in pressure) resulted in an EER drop of 2 to 5%, depending on the air temperature. For the maximum analyzed pressure difference of 20 bar, the EER was 8 to 18% less than the theoretical value.
Conclusions
The work shows the impact of the measured high pressure values on the energy indicators of a "Booster" refrigeration installation. Literature sources which present correlations between the optimal pressure value of CO2 and the ambient temperature have been reviewed. The analysis of the gas cooler functioning in the actual installation was made on the basis of meteorological data for particular days. For one of the proposed equations, the optimal pressure of carbon dioxide was calculated on the basis of the outside air temperature. The convergence between the actual and optimal high pressure values (observed in Figures 6 and 7) reveals that the assumption made in Section 4 seems to be proper. As a result of the conducted analysis, it has been shown that weather conditions, especially the temperature and humidity of the air, have a significant influence on the high pressure value of carbon dioxide. Gas coolers are cooled by outside air. This causes changes in the parameters of the outside air that may affect the parameters of the refrigerant in the heat exchangers. Actual and optimal high pressure values of carbon dioxide were convergent on days with an outside air temperature below 20°C and high relative humidity. A reverse situation was recorded for warmer days in the analysed period of time. Actual pressure values of CO2 were much higher than the optimal pressure when the outside air temperature was above 30°C. A significant difference was noticed in the afternoon, when the solar radiation is the highest. Analyzing Figure 8, it was noted that the system did not work efficiently for more than half of the time (discrepancies between measured and optimal values of the CO2 pressure of 5-20 bar). The EER drop reaches 2-10% at pressure discrepancies between optimal and measured values of 5-10 bar and 4-18% at pressure discrepancies of 10-20 bar. For the largest difference between the optimal and the real pressure of CO2, the EER drop can even exceed 20%.
According to the authors' hypothesis, the main cause of the difference between the actual and calculated pressures of CO2 is the exposure of the gas cooler to solar radiation. High-pressure heat exchangers are usually mounted outside or on the roofs of large buildings. The outside air which cools the gas cooler can be heated up by various elements of the installation or by the roof cover. As a result, warmer air reaches the inlet of the gas cooler, and its temperature value is the signal for the high pressure controller. In consequence, a higher air temperature at the gas cooler inlet causes a higher pressure of CO2 to be reached. The authors assume that the optimal pressure values depend on an outside air temperature which should be measured at a point where disturbances such as solar radiation are eliminated. This value of the outside air temperature should be taken into consideration for the investigated heat exchanger; however, in this case it cannot be the point of reference. According to Figure 9, an increase in the outside air temperature at the inlet to the gas cooler, and indirectly in the high pressure value of CO2, causes a drop in the energy efficiency ratio. At a constant temperature of the lower heat source, which is determined by the technological demand, the increase of the temperature of the upper heat source causes an increase in the compression power and higher operating costs. Although the detailed analysis was conducted only for one case, this problem occurs in other cooling installations with carbon dioxide. Therefore, the authors draw attention to the method of operation of gas coolers. One of the propositions on how to solve the problem is to choose a shaded place to install the heat exchanger. Moreover, spraying the heat exchanger with water during high solar radiation is worth considering.
In Search of Optimal Data Placement for Eliminating Write Amplification in Log-Structured Storage
Log-structured storage has been widely deployed in various domains of storage systems for high performance. However, its garbage collection (GC) incurs severe write amplification (WA) due to the frequent rewrites of live data. This motivates many research studies, particularly on data placement strategies, that mitigate WA in log-structured storage. We show how to design an optimal data placement scheme that leads to the minimum WA with the future knowledge of the block invalidation time (BIT) of each written block. Guided by this observation, we propose SepBIT, a novel data placement algorithm that aims to minimize WA in log-structured storage. Its core idea is to infer the BITs of written blocks from the underlying storage workloads, so as to place the blocks with similar estimated BITs into the same group in a fine-grained manner. We show via both mathematical and trace analyses that SepBIT can infer the BITs by leveraging the write skewness property in real-world storage workloads. Evaluation on block-level I/O traces from real-world cloud block storage workloads shows that SepBIT achieves the lowest WA compared to eight state-of-the-art data placement schemes.
INTRODUCTION
Modern storage systems adopt the log-structured design [24] for high performance. Examples include flash-based solid-state drives (SSDs) [1,5], file systems [11,17,22,24,26], table stores [4], storage management [2], in-memory storage [25], RAID storage [14], and cloud block services [33]. Log-structured storage transforms random write requests into sequential disk writes in an appendonly log, so as to reduce disk seek overhead and improve write performance. It also brings various advantages in addition to high write performance, such as improved flash endurance in SSDs [17], unified abstraction for building distributed applications [2,4], efficient memory management in in-memory storage [25], and load balancing in cloud block storage [33].
The log-structured design writes live data blocks to the appendonly log without modifying existing data blocks in-place, so it regularly performs garbage collection (GC) to reclaim the free space of stale blocks, by reading a segment of blocks, removing any stale blocks, and writing back the remaining live blocks. The repeated writes of live blocks lead to write amplification (WA), which incurs I/O interference to foreground workloads [14] and aggravates the I/O pressure to the underlying storage system.
Mitigating WA in log-structured storage has been a well-studied research topic in the literature (see §5 for details). In particular, a large body of studies focuses on designing data placement strategies by properly placing blocks in separate groups. He et al. [12] point out that a data placement scheme should group blocks by the block invalidation time (BIT) (i.e., the time when a block is invalidated by a live block; a.k.a. the death time [12]) to achieve the minimum possible WA. However, without obtaining the future knowledge of the BIT pattern, how to design an optimal data placement scheme with the minimum WA remains an unexplored issue. Existing temperature-based data placement schemes that group blocks by block temperatures (e.g., write/update frequencies) [7,16,22,27,29,35,36] are arguably inaccurate to capture the BIT pattern and fail to group the blocks with similar BITs [12].
To this end, we propose SepBIT, a novel data placement scheme that aims to minimize the WA in log-structured storage. It infers the BITs of written blocks from the underlying storage workloads and separates the placement of written blocks into different groups, each of which stores the blocks with similar estimated BITs. Specifically, it builds on the skewed write patterns observed in real-world cloud block storage workloads at Alibaba Cloud [19]. It performs separation on the written blocks into user-written blocks and GC-rewritten blocks (defined in §2.1). It further performs separation on each set of user-written blocks and GC-rewritten blocks by inferring the BIT of each block, so as to perform more fine-grained separation of blocks into groups with similar estimated BITs. To the best of our knowledge, in contrast to existing temperature-based data placement schemes that group blocks by block temperatures [7,16,22,27,29,35,36], SepBIT is the first practical data placement scheme that groups blocks with similar BITs, backed by the analysis of real-world cloud block storage workloads. To summarize, this paper makes the following contributions:
• We first design an ideal data placement strategy that achieves the minimum WA in log-structured storage, based on the (impractical) assumption of having the future knowledge of BITs of written blocks. Our analysis not only motivates how to design a practical data placement scheme that aims to group the written blocks with similar BITs, but also provides an optimal baseline for our evaluation comparisons.
• We design SepBIT, which performs fine-grained separation of written blocks by inferring their BITs from the underlying storage workloads. We show via both mathematical and trace analyses that our BIT inference is effective in skewed workloads. We also show that SepBIT achieves low memory overhead in its indexing structure for tracking block statistics.
• We extensively evaluate SepBIT using the real-world cloud block storage workloads at Alibaba Cloud [19]. The workloads contain the block-level I/O traces of multiple volumes. We show that SepBIT achieves the lowest WA compared to eight state-of-the-art data placement schemes; for example, it reduces the overall WA by 8.6-15.9% and 9.1-20.2% when the Greedy [24] and Cost-Benefit [24,25] algorithms are used for segment selection in GC, respectively. SepBIT also reduces the per-volume WA by up to 44.1%, compared to merely separating user-written and GC-rewritten blocks in data placement.
BACKGROUND AND MOTIVATION
We introduce how GC works in log-structured storage and how we use data placement to mitigate WA in GC ( §2.1). We then present an ideal data placement scheme that achieves the minimum WA, and state its limitations in practice ( §2.2). Finally, we discuss the limitations in existing data placement schemes via trace analysis to motivate our design ( §2.3).
GC in Log-Structured Storage
We consider a log-structured storage system that comprises multiple volumes, each of which is assigned to a user. Each volume is configured with a capacity of tens to hundreds of GiB and manages data in an append-only manner. It is further divided into segments that are configured with a maximum size (e.g., tens to hundreds of MiB). Each segment contains fixed-size blocks, each of which is identified by a logical block address (LBA) and has a size (e.g., several KiB) that aligns with the underlying disk drives. Each block, either from a new write or from an update to an existing block, is appended to a segment (called an open segment) that has not yet reached its maximum size. If a segment reaches its maximum size, the segment (called a sealed segment) becomes immutable. Updating an existing block is done in an out-of-place manner, in which the latest version of the block is appended to an open segment and becomes a valid block, while the old version of the block is invalidated and becomes an invalid block.
Log-structured storage needs to regularly reclaim the space occupied by the invalid blocks via GC. A variety of GC policies can be realized, yet we can abstract a GC policy as a three-phase procedure:
• Triggering, which decides when a GC operation should be activated. In this work, we assume that a GC operation is triggered for a volume when its garbage proportion (GP) (i.e., the fraction of invalid blocks among all valid and invalid blocks) exceeds a pre-defined threshold (e.g., 15%).
• Selection, which selects one or multiple sealed segments for GC. In this work, we focus on two selection algorithms: (i) Greedy [24], which selects the sealed segments with the highest GPs, and (ii) Cost-Benefit [24,25], which selects segments based on a cost-benefit metric that weighs each segment's reclaimable space against its age.
• Rewriting, which reads the selected sealed segments and writes back the remaining valid blocks into one or multiple open segments. The space of the selected sealed segments can be reused.
Figure 2: Example of the ideal data placement scheme.
A log-structured storage system sees two types of written blocks: in addition to the user-written blocks issued from either a new write or an update, there are also GC-rewritten blocks due to the rewrites of the valid blocks during GC. Thus, GC inevitably incurs write amplification (WA), defined as the ratio of the total number of both user-written blocks and GC-rewritten blocks to the number of user-written blocks. To avoid aggravating the I/O pressure to the storage system, it is critical to minimize WA.
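To make the Greedy and Cost-Benefit selection policies described above concrete, the sketch below scores sealed segments by their GP (Greedy) and by a classical LFS-style cost-benefit metric; the latter formulation, (1 − u)·age/(1 + u) with utilization u, is used here only as an assumed, illustrative stand-in for the metric of [24,25].

```python
# Illustrative segment selection for GC: Greedy picks the highest GP; the
# cost-benefit score below follows the classical LFS formulation and is used
# here only as an assumed example of a cost-benefit metric.
from dataclasses import dataclass

@dataclass
class SealedSegment:
    seg_id: int
    gp: float    # garbage proportion: fraction of invalid blocks
    age: float   # time since the segment was sealed

def select_greedy(segments):
    return max(segments, key=lambda s: s.gp)

def select_cost_benefit(segments):
    def score(s):
        u = 1.0 - s.gp                       # utilization (fraction of valid blocks)
        return (1.0 - u) * s.age / (1.0 + u)
    return max(segments, key=score)
```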
In this work, we aim to design a proper data placement scheme that mitigates the WA due to GC. Figure 1 shows the workflow of a general data placement scheme, which separates all written blocks (i.e., user-written and GC-rewritten blocks) into different groups and writes the blocks to the open segments of the respective groups. The data placement scheme is compatible with any GC policy (i.e., independent of the triggering, selection, and rewriting policies).
Ideal Data Placement
We present an ideal data placement scheme that minimizes WA (i.e., WA = 1). We also elaborate why it is infeasible to realize in practice, so as to motivate the design of an effective practical data placement scheme.
System model. We first define the notations. Consider a write-only request sequence of blocks that are written to a log-structured storage system. Let W be the number of blocks in the request sequence and S be the segment size (in units of blocks). Let N = ⌈W/S⌉ be the number of sealed segments in the system, and let Seg_1, Seg_2, ..., Seg_N denote the corresponding sealed segments. Let f_i (where f_i ≥ 1) be the invalidation order of the i-th block in the request sequence based on the BITs of all blocks (where 1 ≤ i ≤ W), meaning that the i-th block is the f_i-th invalidated block among all invalid blocks.
Placement design. For the ideal placement scheme, we make the following assumptions. Suppose that the system has the future knowledge of the BITs of all blocks, and hence the invalidation order f_i of the i-th block in the request sequence (where 1 ≤ i ≤ W).
It also allocates N open segments for storing incoming blocks, and performs a GC operation whenever there are S invalid blocks in the system (i.e., one segment size of invalid blocks).
The system writes the i-th block to the ⌈f_i/S⌉-th open segment. If the j-th (where 1 ≤ j ≤ N) open segment is full, it is sealed into the sealed segment Seg_j. Thus, Seg_j stores the blocks with the invalidation orders in the range of [(j − 1)·S + 1, j·S]. The first GC operation is triggered when there exist S invalid blocks; according to the placement, all such blocks must be stored in Seg_1. Thus, the first GC operation will choose Seg_1 for GC, and there will be no rewrites as all blocks in Seg_1 must be invalid. In general, the j-th GC operation (where 1 ≤ j ≤ N) will choose Seg_j for GC, and there will be no rewrites as Seg_j contains only invalid blocks. Figure 2 depicts an example of the ideal data placement scheme. Consider a write-only request sequence of W = 8 blocks with three LBAs. We fix the segment size as S = 2. We show the status of the volume at time 2 and time 6, when the second block and the sixth block are written, respectively. At time 2, the first two written blocks have been appended to Seg_1 and Seg_2, as their invalidation orders are 2 and 3, respectively. Note that all blocks in Seg_1 become invalid when the third block is written at time 3, and at this time we can perform a GC operation to reclaim the free space occupied by Seg_1. Note that the GC operation does not incur any rewrite. Later, at time 6, the system appends the sixth written block to Seg_3 since its invalidation order is 5.
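A minimal sketch of the placement rule is shown below; the invalidation orders passed in are illustrative values, not the exact sequence of Figure 2.

```python
# Sketch of the ideal placement rule: the i-th written block, with invalidation
# order f_i, goes to the ceil(f_i / S)-th open segment, so every GC reclaims a
# segment containing only invalid blocks and no block is ever rewritten (WA = 1).
from math import ceil

def ideal_placement(invalidation_orders, segment_size):
    """Map each written block (in arrival order) to its open-segment index."""
    return [ceil(f / segment_size) for f in invalidation_orders]

# With S = 2, blocks whose invalidation orders are 1-2 go to segment 1,
# 3-4 to segment 2, and so on (illustrative orders below):
print(ideal_placement([2, 3, 1, 6, 4, 5], segment_size=2))   # [1, 2, 1, 3, 2, 3]
```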
Limitations and lessons learned. While the ideal data placement scheme achieves the minimum WA, there exist two practical limitations. First, the scheme needs to have future knowledge of the BIT of every block to assign the blocks to the corresponding open segments, but having such future knowledge is infeasible in practice. Second, the scheme needs to provision N = ⌈W/S⌉ open segments to hold all blocks in the request sequence in the worst case, thereby incurring high memory overhead as W increases. Having too many open segments also incurs substantial random writes that lead to performance slowdown.
A practical data placement scheme should address the above two limitations. Without the future knowledge of BITs, it should effectively infer the BIT of each written block. With only a limited number of available open segments, it should group written blocks by similar BITs instead of placing them in strict invalidation order. Our goal is to address the limitations driven by real-world cloud block storage workloads.
Motivation
We show via trace analysis that existing data placement schemes cannot accurately capture the BIT pattern and group the blocks with similar BITs for the effective WA mitigation [12]. Specifically, we consider 186 volumes from a real-world cloud block storage system ( §4.2) and analyze the lifetime of each user-written block. We define the lifespan of a block as the number of user-written bytes in the whole workload since it is first written until it is invalidated (or until the end of the traces). We make three key observations. Observation 1: User-written blocks generally have short lifespans. We say that a block has a short lifespan if its lifespan is smaller than the write working set size (WSS) (i.e., the number of unique written LBAs multiplied by the 4 KiB block size). In our traces, each volume typically has a write WSS of at least 10 GiB. We examine the percentages of user-written blocks that fall into different lifespan range groups with short lifespans that are smaller than 10 GiB. Figure 3 shows the boxplots of the percentages over all volumes. We find that in a large fraction of volumes, their userwritten blocks tend to have short lifespans. For example, half of the volumes have more than 49.3% of user-written blocks with lifespans shorter than 8 GiB, and have more than 27.6% of user-written blocks with lifespans shorter than 1 GiB. In contrast, GC-rewritten blocks generally have long lifespans as they are not updated before GC.
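The lifespan metric can be illustrated with a small sketch over an ordered sequence of written LBAs; the 4 KiB block size matches the trace format described above, while the input list and the handling of never-invalidated blocks are simplifying assumptions.

```python
# Sketch of the lifespan definition: for each user-written block, count the
# user-written bytes from its write until the next write to the same LBA
# (i.e., until it is invalidated); blocks never overwritten are skipped here.
BLOCK_BYTES = 4096

def lifespans(written_lbas):
    spans, last_write_pos = [], {}
    for pos, lba in enumerate(written_lbas):
        if lba in last_write_pos:                        # previous version invalidated now
            spans.append((pos - last_write_pos[lba]) * BLOCK_BYTES)
        last_write_pos[lba] = pos
    return spans                                         # lifespans in bytes

print(lifespans(["a", "b", "a", "c", "a"]))              # [8192, 8192]
```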
Our findings show that user-written blocks and GC-rewritten blocks can have vastly different BIT patterns, in which user-written blocks tend to have short lifespans, while GC-rewritten blocks tend to have long lifespans. Existing data placement schemes either mix user writes and GC writes [7,16,22,29], or focus on user writes [27,35,36], in the data placement decisions. Failing to distinguish between user-written blocks and GC-rewritten blocks can lead to inefficient WA mitigation. Instead, it is critical to separately identify the BIT patterns of user-written blocks and GC-rewritten blocks.
Observation 2: Frequently updated blocks have highly varying lifespans. We investigate frequently updated blocks, referred to as the blocks whose update frequencies (i.e., the number of updates) rank top 20% in the write working set (i.e., the set of LBAs being written) of a volume. Specifically, for each volume, we divide the frequently updated blocks into four groups, namely top 1%, top 1-5%, top 5-10%, and top 10-20%, so that the blocks in each group have similar update frequencies. To avoid evaluation bias, we exclude the blocks that have not been invalidated before the end of the traces. For each group of a volume, we calculate the coefficient of variance (CV) (i.e., the standard deviation divided by the mean) of the lifespans of the blocks; a high CV (e.g., larger than one) implies that the lifespans have high variance. Figure 4 shows the boxplots of the CVs in each group across all volumes. We observe that frequently updated blocks with similar update frequencies have high variance in their lifespans (and hence the BITs); for example, 25% of the volumes have their CVs exceeding 4.34, 3.20, 2.14, and 1.82 in the four groups top 1%, top 1-5%, top 5-10%, and top 10-20%, respectively. Our findings also suggest that existing temperature-based data placement schemes that group the blocks with similar write/update frequencies [7,16,22,27,29,35,36] cannot effectively group blocks with similar BITs, and hence the WA cannot be fully mitigated.
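Observation 2 can be reproduced with a short script. The following sketch (our own approximation; the bucket boundaries follow the text, everything else is assumed) ranks LBAs by write count, buckets the top 20% into the four groups, and reports the coefficient of variance of block lifespans per group:

import statistics
from collections import Counter, defaultdict

def cv_by_frequency_group(write_lbas, block=4096):
    freq = Counter(write_lbas)
    ranked = [lba for lba, _ in freq.most_common()]
    cutoffs = [0.01, 0.05, 0.10, 0.20]              # top 1%, 1-5%, 5-10%, 10-20%
    group_of = {}
    for i, lba in enumerate(ranked):
        frac = (i + 1) / len(ranked)
        for g, c in enumerate(cutoffs):
            if frac <= c:
                group_of[lba] = g
                break
    spans, last = defaultdict(list), {}
    for pos, lba in enumerate(write_lbas):          # only invalidated blocks contribute
        if lba in last and lba in group_of:
            spans[group_of[lba]].append((pos - last[lba]) * block)
        last[lba] = pos
    return {g: statistics.pstdev(v) / statistics.mean(v)
            for g, v in spans.items() if len(v) > 1}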
Observation 3: Rarely updated blocks dominate and have highly varying lifespans. We examine the write working set of each of the 186 volumes and define the rarely updated blocks as those that are updated no more than four times during the one-month trace period. We observe that rarely updated blocks occupy a high percentage in the write working sets of a large fraction of volumes. Specifically, in half of the volumes, more than 72.4% of their write working sets consist of rarely updated blocks. We further investigate the lifespans of those rarely updated blocks. For each volume, we divide the rarely updated blocks into four groups that are partitioned by the lifespans of 4 GiB, 16 GiB, and 64 GiB. We then calculate the percentage of those blocks that fall into each group. Figure 5 shows the boxplots of the percentages of rarely updated blocks in different groups of lifespans across all volumes. We see that in 25% of the volumes, more than 50.7% of the rarely updated blocks have their lifespans shorter than 4 GiB. Also, for all groups of lifespans, the percentages are non-negligible (e.g., all groups have their medians larger than 10.0%). In other words, rarely updated blocks have large deviations of lifespans (and hence BITs) in a volume. As in Observation 2, our findings again suggest that existing temperature-based data placement schemes cannot effectively group the rarely updated blocks with similar BITs. Rarely updated blocks are often treated as cold blocks with low write frequencies, so they tend to be grouped together and separated from the hot blocks with high write frequencies. However, their vast differences in BIT patterns make temperature-based data placement schemes inefficient in mitigating WA.
SEPBIT DESIGN
We present SepBIT, a novel data placement scheme that aims to minimize WA in log-structured storage systems. We first present an overview of SepBIT (§3.1). We then show how we infer the BITs of both user-written blocks (§3.2) and GC-rewritten blocks (§3.3) using both mathematical and trace analyses. Finally, we provide the implementation details of SepBIT, especially on how it separates blocks on-the-fly (§3.4).
Design Overview
SepBIT works by separating blocks into different classes of segments via inferring the BITs of user-written and GC-rewritten blocks, such that the blocks with similar estimated BITs are grouped together in the same class. SepBIT infers the lifespans (defined in §2.3) of blocks, and in turn infers the corresponding BITs of blocks. For user-written blocks (i.e., Classes 1-2), SepBIT stores the short-lived blocks (with short lifespans) in Class 1 and the remaining long-lived blocks (with long lifespans) in Class 2. For GC-rewritten blocks (i.e., Classes 3-6), SepBIT appends the blocks from Class 1 that are rewritten by GC into Class 3, and groups the remaining GC-rewritten blocks into Classes 4-6 by similar BITs being inferred.
The design intuition behind SepBIT is as follows. For each user-written block, SepBIT examines its last user write time to infer its lifespan. If the user-written block is issued from a new write, SepBIT assumes that it has an infinite lifespan. Otherwise, if the user-written block updates an old block, SepBIT uses the lifespan of the old block (i.e., the number of user-written bytes in the whole workload since its last user write time until it is now invalidated) to estimate the lifespan of the user-written block, as shown in Figure 7(a). Our intuition is that any user-written block that invalidates a short-lived block is also likely to be a short-lived block (§3.2). Then, if the short-lived blocks are written at about the same time, their corresponding BITs will be close, so SepBIT groups them into the same class (i.e., Class 1). For the long-lived blocks (including the user-written blocks from new writes), SepBIT groups them into Class 2.
For each GC-rewritten block, SepBIT examines its age, defined as the number of user-written bytes in the whole workload since its last user write time until it is rewritten by GC, to infer its residual lifespan, defined as the number of user-written bytes since it is rewritten by GC until it is invalidated (or until the end of the traces), as shown in Figure 7(b). As a result, the lifespan of a GC-rewritten block is its age plus its residual lifespan. Our intuition is that any GCrewritten block with a smaller age has a higher probability to have a short residual lifespan ( §3.3), implying that GC-rewritten blocks with different ages are expected to have different residual lifespans. Thus, SepBIT can distinguish the blocks of different residual lifespans based on their ages and group the GC-rewritten blocks with similar ages into the same classes.
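A condensed sketch of the two inference rules (our own paraphrase, not SepBIT's reference code) may help fix the ideas; times and lifespans are measured in user-written bytes, and the function names are ours:

def estimate_user_bit(now_bytes, old_block_write_time):
    """now_bytes: user-written bytes so far; old_block_write_time: when the block being
    invalidated was written, or None for a write to a brand-new LBA."""
    if old_block_write_time is None:
        return float("inf")                        # new write: treat as long-lived
    est_lifespan = now_bytes - old_block_write_time
    return now_bytes + est_lifespan                # estimated BIT of the new block

def gc_block_age(now_bytes, last_user_write_time):
    """Age of a GC-rewritten block; a smaller age suggests a shorter residual lifespan."""
    return now_bytes - last_user_write_time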
Our design intuition builds on the assumption that the access pattern is skewed, so as to infer the BITs of blocks for data placement. We justify our assumption and verify the effectiveness of SepBIT via the mathematical analysis for skewed distributions and the trace analysis from real-world cloud block storage workloads ( §3.2 and §3.3). To adapt to changing workloads and GC policies, SepBIT dynamically monitors the workloads to separate user-written blocks and GC-rewritten blocks into different classes ( §3.4).
Inferring BITs of User-Written Blocks
We show via both mathematical and trace analyses the effectiveness of SepBIT in estimating the BITs of user-written blocks based on the lifespans. Let N be the total number of unique LBAs in a working set; without loss of generality, each LBA is denoted by an integer from 1 to N. Let p_i (where 1 ≤ i ≤ N) be the probability that LBA i is being written. Consider a write-only request sequence of n blocks, each of which is associated with a sequence number t and the corresponding LBA a_t. Let t and t′ (where t′ < t) denote the sequence numbers of a new user-written block and the corresponding invalid old block, respectively (i.e., a_t = a_t′). Recall from §3.1 that SepBIT estimates the lifespan (denoted by u) of the user-written block t using the lifespan (denoted by v) of the old block t′, so the estimated BIT of block t is equal to the current user write time plus the estimated lifespan v; note that both u and v are expressed in units of blocks. We claim that if v is small, u is also likely to be small. To validate the claim, let u0 and v0 (both in units of blocks) be two thresholds. We then examine the conditional probability of u ≤ u0 given the condition that v ≤ v0, subject to workloads of different skewness. If the conditional probability is high for small u0 and v0, then our claim holds.

Mathematical analysis. Formally, we investigate the following conditional probability:
Pr(u ≤ u0 | v ≤ v0) = Pr(u ≤ u0, v ≤ v0) / Pr(v ≤ v0),
where the denominator is expressed as:
Pr(v ≤ v0) = Σ_{i=1}^{N} p_i · (1 − (1 − p_i)^v0),
and the numerator is expressed as:
Pr(u ≤ u0, v ≤ v0) = Σ_{i=1}^{N} p_i · (1 − (1 − p_i)^u0) · (1 − (1 − p_i)^v0).
We analyze the conditional probability via the Zipf distribution, in which we set p_i = (1/i^α) / Σ_{j=1}^{N} (1/j^α), where 1 ≤ i ≤ N, for some non-negative skewness parameter α. A larger α implies a more skewed distribution. Here, we fix N = 10 × 2^18, which corresponds to a working set of 10 GiB with 4 KiB blocks. We then study how the conditional probability Pr(u ≤ u0 | v ≤ v0) varies across u0, v0, and α. Figure 8(a) first shows the conditional probability for varying u0 and v0, where we fix α = 1. We consider u0 and v0 of up to 4 GiB, which is less than the write WSS (§2.3). Overall, the conditional probability is high for different u0 and v0; the lowest one is 77.1%, for v0 = 4 GiB and u0 = 0.25 GiB. This shows that a user-written block is highly likely to have a short lifespan if its invalidated block also has a short lifespan. In particular, the conditional probability is higher if v0 is smaller (i.e., the invalidated blocks have shorter lifespans), implying a more accurate estimation of the lifespan of the user-written block. Figure 8(b) next shows the conditional probability for varying v0 and α, where we fix u0 = 1 GiB. Note that for α = 0, the Zipf distribution reduces to a uniform distribution. Overall, the conditional probability increases with α (i.e., for a more skewed distribution). For example, for α = 1, the conditional probability is at least 87.1%. On the other hand, for α = 0, the conditional probability is only 9.5%. This indicates that the high accuracy of lifespan estimation only holds under skewed workloads.

Trace analysis. We use the block-level I/O traces from Alibaba Cloud (§4.2) to validate if the conditional probability remains high in real-world workloads. To compute the conditional probability, we first find the set of user-written blocks that invalidate old blocks with v ≤ v0. Then the conditional probability is the fraction of blocks with u ≤ u0 in the set. Figure 9 shows the boxplots of the conditional probabilities over all volumes for different u0 and v0. In general, the conditional probability remains high in most of the volumes. For example, for u0 = 4 GiB, the medians of the conditional probabilities are in the range of 49.6-84.3%, and the 75th percentiles are in the range of 67.3-96.7%. Also, the conditional probability tends to be higher for a smaller v0.
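The conditional probability above can be evaluated numerically. The sketch below implements the stated formulas under the Zipf model (the implementation and parameter packaging are ours; N and the thresholds follow the text, expressed in 4 KiB blocks):

def zipf_probs(N, alpha):
    w = [1.0 / (i ** alpha) for i in range(1, N + 1)]
    s = sum(w)
    return [x / s for x in w]

def cond_prob_user(N, alpha, u0, v0):
    """Pr(u <= u0 | v <= v0) under i.i.d. writes with per-LBA probabilities p_i."""
    p = zipf_probs(N, alpha)
    num = sum(pi * (1 - (1 - pi) ** u0) * (1 - (1 - pi) ** v0) for pi in p)
    den = sum(pi * (1 - (1 - pi) ** v0) for pi in p)
    return num / den

# 10 GiB working set of 4 KiB blocks; u0 = 1 GiB, v0 = 1 GiB (both in blocks)
print(cond_prob_user(N=10 * 2**18, alpha=1.0, u0=2**18, v0=2**18))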
Inferring BITs of GC-Rewritten Blocks

We now show, again via both mathematical and trace analyses, how SepBIT estimates the BITs of GC-rewritten blocks. Recall from §3.1 that SepBIT infers the residual lifespan of a GC-rewritten block from its age, so the estimated BIT of the GC-rewritten block is equal to the current GC write time plus the estimated residual lifespan. However, characterizing GC-rewritten blocks directly is non-trivial, as it depends on the actual GC policy (e.g., when GC is triggered and which segments are selected for GC) (§2.1). Instead, we model GC-rewritten blocks based on user-written blocks. If a user-written block has a lifespan above a certain threshold, we assume that it is rewritten by GC and treat it as a GC-rewritten block with an age equal to the threshold. We can then apply a similar analysis as in §3.2. We define the following notations. As each GC-rewritten block is a user-written block before being rewritten by GC, we identify each GC-rewritten block by its corresponding user-written block with sequence number t. Let u, a, and r be its lifespan, age, and residual lifespan, respectively, such that u = a + r; each of the variables is measured in units of blocks. We claim that r has a higher probability to be small with a smaller a. To validate the claim, let a0 and r0 (both in units of blocks) be the thresholds for the age and the residual lifespan, respectively. We examine the conditional probability of u ≤ a0 + r0 given the condition that u ≥ a = a0, subject to workloads of different skewness. The conditional probability specifies the fraction of GC-rewritten blocks whose residual lifespans are shorter than r0 among all GC-rewritten blocks with age a0 (note that the GC-rewritten blocks are modeled as user-written blocks with lifespans above a0). If the conditional probability is higher for a smaller a0 subject to a fixed r0, then our claim holds.
Mathematical analysis. Formally, we investigate the following conditional probability:
Pr(u ≤ a0 + r0 | u ≥ a0) = Pr(a0 ≤ u ≤ a0 + r0) / Pr(u ≥ a0),
where the numerator and denominator correspond to all user-written blocks whose lifespans satisfy a0 ≤ u ≤ a0 + r0 and u ≥ a0, respectively. According to the computation in §3.2, the numerator and denominator are respectively:
Pr(a0 ≤ u ≤ a0 + r0) = Σ_{i=1}^{N} p_i · ((1 − p_i)^a0 − (1 − p_i)^(a0 + r0)),
Pr(u ≥ a0) = Σ_{i=1}^{N} p_i · (1 − p_i)^a0.
As in §3.2, we use the Zipf distribution and fix N = 10 × 2^18 unique LBAs. We study how the conditional probability Pr(u ≤ a0 + r0 | u ≥ a0) varies across a0, r0, and α. Figure 10(a) first shows the conditional probability for varying a0 and r0, where we fix α = 1. We focus on a large value of a0 of up to 32 GiB since we target long-lived blocks. We also vary r0 up to 8 GiB. Overall, for a fixed r0, the conditional probability decreases as a0 increases. For example, given that r0 = 8 GiB, the probability with a0 = 2 GiB is 41.2%, while the probability for a0 = 32 GiB drops to 14.9%. This validates our claim that GC-rewritten blocks with different ages are expected to have different residual lifespans. Thus, we can distinguish the GC-rewritten blocks of different residual lifespans based on their ages. Figure 10(b) further shows the conditional probability for varying a0 and α, where we fix r0 = 8 GiB. For a small α, the conditional probability has a limited difference for varying a0, while the difference becomes more significant as α increases. For example, for α = 0 (i.e., the uniform distribution), there is no difference when varying a0; for α = 0.2, the difference of the conditional probability between a0 = 2 GiB and a0 = 32 GiB is only 3.5%, while the difference for α = 1 is 26.4%. This indicates that our claim holds under skewed workloads, and we can better distinguish the GC-rewritten blocks of different residual lifespans under more skewed workloads.
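The corresponding computation for GC-rewritten blocks follows the same pattern. The sketch below (again our own implementation of the formulas above) evaluates Pr(u ≤ a0 + r0 | u ≥ a0) under the Zipf model:

def cond_prob_gc(N, alpha, a0, r0):
    """Pr(u <= a0 + r0 | u >= a0) under the same Zipf write model."""
    w = [1.0 / (i ** alpha) for i in range(1, N + 1)]
    s = sum(w)
    num = den = 0.0
    for x in w:
        pi = x / s
        survive = (1 - pi) ** a0                   # block outlives its first a0 writes
        num += pi * (survive - (1 - pi) ** (a0 + r0))
        den += pi * survive
    return num / den

# a0 = 2 GiB, r0 = 8 GiB, expressed in 4 KiB blocks
print(cond_prob_gc(N=10 * 2**18, alpha=1.0, a0=2 * 2**18, r0=8 * 2**18))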
Trace analysis. We also use block-level I/O traces from Alibaba Cloud (§4.2) to examine the conditional probability in real-world workloads. We first identify the set of blocks with u ≥ a0 in the workload, and then compute the conditional probability as the fraction of blocks with u ≤ a0 + r0 in the set. Figure 11 depicts the boxplots of the conditional probabilities over all volumes for different a0 and r0. For a fixed r0, the conditional probabilities have significant differences for varying a0. For example, if a0 increases from 1 GiB to 16 GiB and we fix r0 = 8 GiB, the 75th percentiles of the probabilities drop from 52.2% to 18.7%.
Implementation Details
We describe how to tune the thresholds for separating blocks into different classes. We also provide the algorithmic details of SepBIT and discuss its memory usage.
Threshold selection. We assign blocks into different classes by their estimated BITs with multiple thresholds: for user-written blocks, we define a lifespan threshold for separating short-lived blocks and long-lived blocks; for GC-rewritten blocks, we need multiple age thresholds to separate them by ages (§3.1). We configure the thresholds via the segment lifespan of a segment, defined as the number of user-written bytes in the workload since the segment is created (i.e., the time when the first block is appended to the segment) until it is reclaimed by GC. Specifically, we monitor the average segment lifespan, denoted by ℓ, among a fixed number of recently reclaimed segments in Class 1. For each user-written block, if it invalidates an old block with a lifespan less than ℓ, we write it into Class 1; otherwise, we write it into Class 2.

Algorithmic details. SepBIT separates blocks on-the-fly with three procedures: GarbageCollect, UserWrite, and GCWrite. GarbageCollect is invoked when a GC operation is triggered according to the GC policy (§2.1). It performs GC and monitors the runtime information of the reclaimed segments. It selects a segment for GC based on the selection algorithm (e.g., Greedy or Cost-Benefit (§2.1)). It sums up the lifespans of the collected segments from Class 1 as ℓsum, and computes the average lifespan ℓ = ℓsum / c for every fixed number c (e.g., c = 16 in our current implementation) of reclaimed segments.
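A minimal sketch of the ℓ bookkeeping described above (class and method names are ours; c = 16 follows the text) could look as follows:

class LifespanTracker:
    """Maintains ell: the average segment lifespan over the last c segments reclaimed from Class 1."""
    def __init__(self, c=16):
        self.c, self.sum, self.count, self.ell = c, 0.0, 0, 0.0

    def on_class1_segment_reclaimed(self, created_at_bytes, reclaimed_at_bytes):
        self.sum += reclaimed_at_bytes - created_at_bytes   # segment lifespan in user-written bytes
        self.count += 1
        if self.count == self.c:
            self.ell = self.sum / self.c
            self.sum, self.count = 0.0, 0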
UserWrite processes each user-written block. It first computes the lifespan of the invalidated old block. If the lifespan is less than ℓ, UserWrite appends the block (which is treated as a short-lived block) to the open segment of Class 1; otherwise, it appends the block (which is treated as a long-lived block) to the open segment of Class 2.
GCWrite processes each GC-rewritten block. If the block is originally stored in Class 1, GCWrite appends it to the open segment of Class 3; otherwise, GCWrite appends it to one of the open segments of Classes 4-6 based on its age. Currently, we configure the age thresholds as three ranges, [0, 4ℓ), [4ℓ, 16ℓ), and [16ℓ, +∞), for Classes 4-6, respectively, based on our evaluation findings. Nevertheless, we have also experimented with different numbers of classes and thresholds, and we observe only marginal differences in WA.
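Putting the thresholds together, the per-block class decision reduces to a few comparisons. The sketch below (our own paraphrase; function names are assumptions) mirrors the rules for UserWrite and GCWrite:

def class_for_user_write(old_lifespan, ell):
    """Class 1 for short-lived blocks (invalidated block younger than ell), Class 2 otherwise."""
    return 1 if (old_lifespan is not None and old_lifespan < ell) else 2

def class_for_gc_write(source_class, age, ell):
    """Class 3 for blocks rewritten out of Class 1; Classes 4-6 by the age ranges
    [0, 4*ell), [4*ell, 16*ell), and [16*ell, inf)."""
    if source_class == 1:
        return 3
    if age < 4 * ell:
        return 4
    if age < 16 * ell:
        return 5
    return 6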
Memory usage. SepBIT only stores the last user write time of each block as the metadata alongside the block on disk, without maintaining a mapping from every LBA to its last user write time in memory. Specifically, for user-written blocks, SepBIT only needs to know whether the lifespan of the invalidated block is shorter than a threshold. It is thus sufficient for SepBIT to track only the recently written LBAs in a first-in-first-out (FIFO) queue in memory. In our current implementation, SepBIT dynamically adjusts the queue length according to the value ℓ. If the FIFO queue is full, each insert of an element will dequeue one element from the queue. If ℓ increases, the FIFO queue allows more inserts without dequeueing any element; if ℓ decreases, the FIFO queue dequeues two elements for each insert until the number of elements drops below ℓ. If the LBA exists in the FIFO queue and its user write time is within the recent ℓ user writes, SepBIT writes it into Class 1.
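The FIFO-based tracking can be sketched as follows (our own simplification: it trims the queue eagerly to about ℓ entries instead of dequeuing exactly one or two elements per insert; times and ℓ are counted in 4 KiB user writes):

from collections import OrderedDict

class RecentWrites:
    """Bounded record of recently written LBAs used to decide whether an invalidated block is short-lived."""
    def __init__(self):
        self.q = OrderedDict()                     # lba -> last user write time

    def is_short_lived(self, lba, now, ell):
        t = self.q.get(lba)
        return t is not None and (now - t) < ell

    def record(self, lba, now, ell):
        self.q.pop(lba, None)
        self.q[lba] = now
        while len(self.q) > max(int(ell), 1):      # keep roughly the last ell writes
            self.q.popitem(last=False)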
On the other hand, for GC-rewritten blocks, SepBIT retrieves them during GC and can examine the user write time directly from the metadata of each GC-rewritten block, so as to assign the GC-rewritten block to the corresponding class without incurring any memory overhead.
EVALUATION
We evaluate SepBIT via trace-driven simulation using real-world cloud block storage traces, by comparing SepBIT with eight existing data placement schemes.
Data Placement Schemes
We compare SepBIT with eight existing temperature-based data placement schemes, namely Dynamic dAta Clustering (DAC) [7], SFS [22], MultiLog (ML) [22], extent-based identification (ETI) [27], MultiQueue (MQ) [35], Sequentiality, Frequency, and Recency (SFR) [35], Fading Average Data Classifier (FADaC) [16], and WARCIP [36]. Take DAC [7] as an example. DAC associates each LBA with a temperature-based counter (quantified based on the write count) and writes blocks to the segments of different temperature levels. Each user write promotes the LBA to a hotter segment, while each GC write demotes the LBA to a colder segment. Other temperature-based data placement schemes follow a similar idea to DAC, and mainly differ in the definition of per-LBA temperature-based counters and the promotion/demotion of LBAs across segments. Note that the above existing schemes are mainly designed for mitigating the flash-level WA in SSDs, yet they are also applicable to general log-structured storage.
We also consider three baseline strategies, namely NoSep, SepGC, and FK. NoSep performs no separation; SepGC only separates user-written blocks from GC-rewritten blocks; FK places blocks based on their actual BITs. For FK, we annotate the lifespan of each block in the traces in advance, so that we can compute the BIT of the block at the user write time.
By default, we configure six classes (each containing one open segment) for data placement for all schemes, except for NoSep, SepGC, and ETI. For NoSep, we configure one class for all user-written and GC-rewritten blocks; for SepGC, we configure two classes, one for user-written blocks and one for GC-rewritten blocks; for ETI, we configure two classes for user-written blocks and one class for GC-rewritten blocks. For MQ, SFR, and WARCIP, as they focus on separating user-written blocks only, we configure five classes for user-written blocks and the remaining class for GC-rewritten blocks. For DAC, SFS, ML, FADaC, and FK, since they do not distinguish user-written blocks and GC-rewritten blocks, we let them use all six classes for all blocks. We adopt the default settings as described in the original papers of the existing schemes.
Trace Overview
We use the public block-level I/O traces from Alibaba Cloud [19] for our trace-driven evaluation. The traces contain I/O requests (in multiples of 4 KiB blocks) from 1,000 virtual disks, referred to as volumes, over a one-month period in January 2020. While the traces are from a single cloud provider, they actually comprise a variety of workloads (e.g., virtual desktops, web services, key-value stores, and relational databases), and hence are representative for us to evaluate SepBIT and other data placement schemes under diverse workloads.
A previous study [33] shows that log-structured storage can be built atop the block storage stack at Alibaba Cloud for load balancing. Based on this motivation, our evaluation treats each volume in the traces as a standalone volume in the log-structured storage system ( §2.1). Each volume performs data placement and GC independently. Our goal is to mitigate the overall WA across all volumes.
We pre-process the traces for our evaluation as follows. We only consider write requests as they are the only contributors to WA. Since some volumes in the traces have limited write requests to trigger sufficient GC operations, we remove such volumes to avoid biasing our analysis. Specifically, we focus on the volumes with sufficient write requests: each volume has a write working set size (WSS) (i.e., the number of unique LBAs being written multiplied by the block size) above 10 GiB and a total write traffic size (i.e., the number of written bytes) above 2× its write WSS. To this end, we select 186 volumes, which account for a total of over 90% of write traffic of all 1,000 volumes. The 186 volumes contain 10.9 billion write requests, 410.2 TiB of written data (with 390.2 TiB of updates), and 20.3 TiB of write WSS (with 17.2 TiB of update WSS). Each of the 186 volumes has a write WSS ranging from 10 GiB to 1 TiB and a write traffic size ranging from 43 GiB to 36.2 TiB. Since the WSS varies across volumes, we configure the maximum storage space of each volume as WSS/(1 − GPT), where GPT denotes the GP threshold to trigger GC.
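The volume-selection rule used in this pre-processing is easy to restate as a predicate (a sketch with assumed names, not the actual evaluation harness):

GIB = 2**30

def keep_volume(num_unique_write_lbas, total_written_bytes, block=4096):
    """Keep a volume if its write WSS exceeds 10 GiB and its write traffic exceeds 2x its WSS."""
    wss = num_unique_write_lbas * block
    return wss > 10 * GIB and total_written_bytes > 2 * wss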
Results
Summary of findings. We summarize our major evaluation findings as follows.
• SepBIT achieves the lowest WA among all data placement schemes (except FK) for different segment selection algorithms (Exp#1), different segment sizes (Exp#2), and different GP thresholds (Exp#3). It also reduces the 75th percentiles of per-volume WAs over all 186 volumes. In particular, when the segment size is small, SepBIT even has a lower WA than FK (Exp#2).
• We provide a breakdown analysis on SepBIT, and show that it achieves a low WA by performing separation on each set of user-written blocks and GC-rewritten blocks independently (Exp#4).
• We provide a memory overhead analysis and show that SepBIT achieves low memory overhead for the majority of volumes (Exp#5).
Default GC policy. Our default GC policy uses Cost-Benefit [24,25] for segment selection and fixes the segment size and the GP threshold for triggering GC as 512 MiB and 15%, respectively. In Exps#1-#3, we vary each of the configurations to evaluate different data placement schemes.

Exp#1 (Impact of segment selection). We compare SepBIT with existing data placement schemes using Greedy [24] and Cost-Benefit [24,25] for segment selection in GC (§2.1). Figures 12(a) and 12(b) depict the overall WA across all 186 volumes under Greedy and Cost-Benefit, respectively. With separation in data placement, SepBIT reduces the overall WA of NoSep by 28.5% and 39.8% under Greedy and Cost-Benefit, respectively. More importantly, SepBIT achieves the lowest WA compared to all existing data placement schemes (except FK). It reduces the overall WA of SepGC and the eight state-of-the-art data placement schemes (i.e., excluding NoSep and FK) by 8.6-15.9% and 9.1-20.2% under Greedy and Cost-Benefit, respectively. Compared to FK, the overall WA of SepBIT is 13.5% and 3.1% higher under Greedy and Cost-Benefit, respectively. In short, SepBIT is highly efficient in WA mitigation under real-world workloads. Note that some data placement schemes even show a higher WA than SepGC, which performs simple separation of user-written blocks and GC-rewritten blocks, mainly because they fail to effectively group blocks with similar BITs (§2.3). We further examine the per-volume WAs across all 186 volumes; the 75th percentiles of the per-volume WAs of SepBIT are 1.64 and 1.50 under Greedy and Cost-Benefit, respectively. This shows that the WA reduction of SepBIT is effective in individual volumes with diverse workloads. In particular, Cost-Benefit is more effective in the WA mitigation of SepBIT than Greedy, as the gap of the 75th percentiles between SepBIT and the second lowest one increases from 1.8% in Greedy to 9.4% in Cost-Benefit. Compared to FK, for the 75th percentiles, SepBIT has 23.6% and 12.9% higher WA under Greedy and Cost-Benefit, respectively.
Exp#2 (Impact of segment sizes). We vary the segment size from 64 MiB to 512 MiB. For fair comparisons, we fix the amount of data (both valid and invalid data) to be retrieved in each GC operation as 512 MiB, meaning that a GC operation collects eight, four, two, and one segment(s) for the segment sizes of 64 MiB, 128 MiB, 256 MiB, and 512 MiB, respectively. We focus on comparing NoSep, SepGC, WARCIP, SepBIT, and FK, as they show the lowest WAs among existing data placement schemes for various segment sizes.
Figure 13 depicts the overall WA versus the segment size. Overall, using a smaller segment size yields a lower WA, as a GC operation can perform more fine-grained selection of segments for more efficient space reclamation. Again, SepBIT achieves the lowest WA compared to all existing data placement schemes; for example, its WA is 5.5%, 8.2%, and 10.0% lower than that of WARCIP for the segment sizes of 64 MiB, 128 MiB, and 256 MiB, respectively. Interestingly, SepBIT even has a lower WA (by 3.9-5.7%) than FK when the segment size is in the range of 64 MiB to 256 MiB. The reason is that FK currently groups blocks of close BITs in five open segments, while the last open segment stores all remaining blocks (we now configure six classes in total) (§4.1). If the segment size is smaller, FK can only group fewer blocks in the limited number of open segments, so it becomes less effective at grouping blocks of close BITs.
Exp#3 (Impact of GP thresholds). We vary the GP thresholds from 10% to 25%. We again focus on comparing the overall WAs of NoSep, SepGC, WARCIP, SepBIT, and FK as in Exp#2. Figure 14 shows the overall WA versus the GP threshold. A larger GP threshold has a lower WA in general, as it is easier for a GC operation to select segments with high GPs. SepBIT still shows the lowest WA. It has 5.0-13.8% lower WAs than WARCIP for different GP thresholds. Compared to FK, SepBIT has comparable WAs with differences smaller than 1.8%, for different GP thresholds.
Exp#4 (Breakdown analysis). We analyze how different components of SepBIT contribute to the WA mitigation. Recall that SepBIT separates written blocks into user-written blocks and GC-rewritten blocks, and further separates each set of user-written blocks and GC-rewritten blocks independently. In our breakdown analysis, we consider NoSep (i.e., without separation) and SepGC (i.e., which separates written blocks into user-written blocks and GC-rewritten blocks). We further consider two variants:
• UW: It further separates user-written blocks based on SepGC, but without separating GC-rewritten blocks. It maintains three classes: Classes 1 and 2 store short-lived blocks and long-lived blocks as in SepBIT, respectively, while Class 3 stores all GC-rewritten blocks.
• GW: It further separates GC-rewritten blocks based on SepGC, but without separating user-written blocks. It maintains four classes: Class 1 stores all user-written blocks, and Classes 2-4 store GC-rewritten blocks as in Classes 4-6 of SepBIT.
Figure 15(a) shows the overall WAs of the different variants. Compared to SepGC, UW and GW reduce the overall WA by 4.8% and 7.0%, respectively. The findings show that more fine-grained separation of each set of user-written blocks and GC-rewritten blocks brings further WA reduction. SepBIT achieves 7.0% and 4.9% WA reduction compared to UW and GW, respectively, thereby showing that SepBIT successfully combines the advantages brought by UW and GW. Figure 15(b) further shows the cumulative distributions of the WA reductions of UW, GW, and SepBIT compared to SepGC across all volumes. We see that UW, GW, and SepBIT can reduce the WA of most of the volumes. The 75th percentiles of the reductions of UW and GW are 11.4% and 6.9%, respectively, and their highest WA reductions are 43.3% and 24.5%, respectively. By combining UW and GW together, the 75th percentile of the WA reductions of SepBIT compared to SepGC improves to 19.3%, with the highest WA reduction being 44.1%.
Exp#5 (Memory overhead analysis). We analyze the memory overhead of SepBIT in real-world workloads. Recall that SepBIT only needs to track the unique LBAs inside the FIFO queue (§3.4), instead of maintaining mappings for all LBAs in the write working set. We report the memory overhead reduction of SepBIT as one minus the ratio of the number of unique LBAs in the FIFO queue to the number of unique LBAs in the write working set. To quantify the reduction, for each volume, we collect all values of the number of unique LBAs in the FIFO queue at runtime when ℓ (defined in §3.4) is updated. To avoid bias due to the cold start of simulation, for each volume, we exclude the beginning 10% of the values. We also collect the number of unique LBAs at the end of simulation. We consider two cases, namely (i) the worst case and (ii) the snapshot case. In the worst case, we use the maximum number of unique LBAs in the FIFO queue for all volumes; it assumes that each volume has its peak number of unique LBAs in the FIFO queue and incurs the most memory. In the snapshot case, we use the number of unique LBAs at the end of the simulation; it represents a snapshot of the system status. From our analysis, we find that in the worst case, SepBIT reduces the overall memory overhead by 44.8%, while in the snapshot case, SepBIT reduces the overall memory overhead by 71.8%. To calculate the actual memory overhead, suppose that the mapping for each LBA costs 8 bytes. Since the aggregated write WSS of the 186 volumes is 20.3 TiB (§4.2), SepBIT reduces the overall memory overhead from 20.3 · 2^40 / 2^12 · 8 bytes = 41.6 GiB to 41.6 GiB · (1 − 71.8%) = 11.7 GiB. Figure 16 further depicts the cumulative distribution of the memory overhead reductions across volumes under both the worst case and the snapshot case. In the worst case, SepBIT reduces the memory overhead by more than 72.3% in half of the volumes, and the highest memory overhead reduction is 99.5%; in the snapshot case, the median reduction is 93.1%, with the highest reduction being 99.7%. The reason for the differences among volumes is their different degrees of skewness. The volumes with higher skewness see more aggregated traffic patterns, and hence the number of recently updated LBAs is much smaller compared to their write WSS.
RELATED WORK
In this section, we review related work on GC designs for different log-structured storage systems. GC in SSDs. We evaluated several existing data placement schemes ( §4.1) for mitigating the WA of flash-level GC in SSDs. Other data placement schemes build on the use of program contexts [15] or the prediction of block temperature based on neural networks [38]. Some empirical studies evaluate the data placement algorithms on an SSD platform [18], or characterize how real-world I/O workloads affect GC performance [34]. In particular, Yadgar et al. [34] also investigate the impact of the number of separated classes in data placement based on the temperature-based data scheme in [29]. In contrast, SepBIT builds on the BIT for data placement, backed by the empirical studies from real-world I/O traces.
Besides data placement, existing studies propose segment selection algorithms to reduce the WA of flash-level GC. In addition to Greedy and Cost-Benefit (§2.1), Cost-Age-Times [6] considers the cleaning cost, data age, and flash erasure counts in segment selection. Windowed Greedy [13], Random-Greedy [20], and d-choices [30] are variants of Greedy in segment selection. Desnoyers [9] models the WA of different segment selection algorithms and hot-cold data separation. SepBIT can be used in conjunction with existing segment selection algorithms. GC in file systems. Several studies examine the GC performance for log-structured file systems. Matthews et al. [21] improve the GC performance by adapting GC to the system and workload behaviors. SFS [22] separates blocks by hotness (i.e., write frequency divided by age). Some reduce WA using file system hints in data placement; for example, WOLF [32] groups blocks by files or directories, while hFS [39] and F2FS [17] separate data and metadata. Besides WA reduction, some studies focus on mitigating the GC interference [3] and fragmentation in log-structured file systems [10,23,37]. GC for RAID and distributed storage. Some studies address the GC performance issues in RAID and distributed storage, such as reducing the WA of Log-RAID systems [8] and mitigating the interference between GC and user writes via GC scheduling in RAID arrays [15,28]. RAMCloud [25] targets persistent distributed in-memory storage. It proposes two-level cleaning to maximize memory utilization by coordinating the GC operations in memory and disk backends. It also corrects the original Cost-Benefit algorithm [24] for accurate segment selection.
CONCLUSION
We propose SepBIT, a novel data placement scheme that mitigates WA caused by GC in log-structured storage by grouping blocks with similar estimated BITs. Inspired by the ideal data placement that minimizes WA (i.e., WA=1) using future knowledge of BITs, SepBIT leverages the skewed write patterns of real-world workloads to infer BITs. It separates written blocks into user-written blocks and GC-rewritten blocks and performs fine-grained separation in each set of user-written blocks and GC-rewritten blocks. To group blocks with similar BITs, it infers the BITs of user-written blocks and GC-rewritten blocks by estimating their lifespans and residual lifespans, respectively. Evaluation on the block-level I/O traces from Alibaba Cloud shows that SepBIT achieves the lowest WA compared to eight state-of-the-art data placement schemes.
Battle of the Genomes: The Struggle for Survival in a Microbial World
Although this book’s title promises the excitement of a 21st-century computer game, the cover photograph of Robert Koch in 1883 provides a better clue to the contents. The general plan is a survey of 20th-century genetics, illustrated by insights into human coevolution with microbial pathogens. Early chapters focus on familiar examples, including G6PD deficiency and sickle cell trait as adaptations to malaria, as evidence for pathogen-driven natural selection. Later chapters discuss more recent research findings, varying from female preference for the scent of males with dissimilar human leukocyte antigen types to the role of human CFTR membrane protein in infection with Salmonella Typhi. All of these are such good stories that science writer Matt Ridley included briefer versions in Chapter 9 of his popular book Genome: The Autobiography of a Species in 23 Chapters (1).
Battle of the Genomes: The Struggle for Survival in a Microbial World discusses in some detail how catastrophic epidemics of cholera, bubonic plague, and smallpox could explain the emergence of certain common human genetic mutations. Some of these mutations are deleterious; for example, CFTR ΔF508, which reduces the risk for typhoid, causes cystic fibrosis in persons who inherit 2 copies. Other mutations are beneficial, such as CCR5 Δ32, which may have protected carriers from smallpox and now reduces the risk for HIV infection. In general, the author’s review of the evidence for and against these hypotheses, which remain speculative, is evenhanded and up-to-date. His accounts of the human and social effects of epidemic diseases and the origins of public health are full of lively anecdotes and colorful detail. Interspersed throughout are personal asides, clinical pearls, and lengthy tutorials on basic science topics, such as DNA replication and gene splicing.
Although this book is far more information dense than are popular books for the lay public, its many shortcomings in terms of organization, depth, and documentation (including surprisingly few references) diminish its value to scholarly readers. More than anything else, it resembles an intellectually inspired but somewhat disorganized professor’s medical school lecture, which would probably be more fun to hear in person than to read. Meanwhile, those who are interested in a 21st-century account of the battle of the genomes may want to wait. Rapid advances in genomic science and technology are opening the way to better understanding of biology, evolution, and medicine, but the full integration of these disciplines is still at a relatively early stage. The idea that genes of 1 species can influence whole ecosystems, described by Richard Dawkins in 1982 as the “extended phenotype” (2), is only now giving rise to new perspectives on community genetics (3).
Prions are believed to be the causative agents of a group of rapidly progressive neurodegenerative diseases called transmissible spongiform encephalopathies, or prion diseases. They are infectious isoforms of a host-encoded cellular protein known as the prion protein. Prion diseases affect humans and animals and are uniformly fatal. The most common prion disease in humans is Creutzfeldt-Jakob disease (CJD), which occurs as a sporadic disease in most patients and as a familial or iatrogenic disease in some patients. Whether prions are infectious proteins that act alone to cause prion diseases remains a matter of scientific debate. However, mounting experimental evidence and lack of a plausible alternative explanation for the occurrence of prion diseases as both infectious and inherited has led to the widespread acceptance of the prion hypothesis.
Interest in prion disease research dramatically increased after the identification in the 1980s of a large international outbreak of bovine spongiform encephalopathy (BSE, also known as mad cow disease) in cattle and after accumulating scientific evidence indicated the zoonotic transmission of BSE to humans causing variant CJD. In recent years, secondary bloodborne transmission of variant CJD has been reported in the United Kingdom.
Prions: The New Biology of Proteins describes the current state of knowledge about the enigmatic world of prion diseases. The book is organized into 12 mostly brief chapters, which nicely summarize the various types of prion diseases and the challenges associated with their diagnosis and treatment. These sections review the biology of prions, the underlying hypotheses for prion replication, and the biochemical basis for strain diversity. Chapters 2 through 5 describe the various characteristic features of prions, including the historical evolution of the prion hypothesis, a detailed description of the possible mechanisms by which the normal prion protein is converted into the pathogenic form, and the cellular biology and putative functions of the normal prion protein.
The author's lucid descriptions of the various topics are supported by diagrams and key references. Subsequent chapters describe prion disease laboratory diagnostic tools that are available or under development. Chapter 9 succinctly summarizes the most likely target sites, from the formation of the infectious agent to its effects on neurodegeneration, which can be exploited for likely therapeutic development. The same chapter describes the various antiprion compounds that have been or are being tested as therapeutic interventions for prion diseases.
The book is unusual because its entire content was exclusively authored by 1 person, resulting in a paucity of in-depth information in some areas, which may have been provided by multiple authors. However, all things considered, the book can be a valuable resource for scientists beginning to understand the world of prion diseases, the underlying biochemical mechanism of disease occurrence, and the challenges associated with the diagnosis and treatment of prion diseases. Although this book's title promises the excitement of a 21st-century computer game, the cover photograph of Robert Koch in 1883 provides a better clue to the contents. The general plan is a survey of 20th-century genetics, illustrated by insights into human coevolution with microbial pathogens. Early chapters focus on familiar examples, including G6PD defi ciency and sickle cell trait as adaptations to malaria, as evidence for pathogendriven natural selection. Later chapters discuss more recent research fi ndings, varying from female preference for the scent of males with dissimilar human leukocyte antigen types to the role of human CFTR membrane protein in infection with Salmonella Typhi. All of these are such good stories that science writer Matt Ridley included briefer versions in Chapter 9 of his popular book Genome: The Autobiography of a Species in 23 Chapters (1).
Ermias D. Belay
Marta Gwinn*
*Centers for Disease Control and Prevention, Atlanta, Georgia, USA

The 5th edition of Ash and Orihel's Atlas of Human Parasitology is a superb, up-to-date compendium of protozoan and metazoan parasites. It also covers vectors and uncommon parasites found in humans. The authors present the material in a clear and concise manner that encourages one to delve more deeply into the structure and function of these unique and fascinating organisms. It is a must for persons interested in medical zoology and geographic medicine. Laboratory personnel, directors, and teachers who need a refresher course or additional training will find the book very valuable.
The Atlas of Human Parasitology is an essential treatise for helping to protect our citizens at home, deployed military personnel, and global travelers from parasitic infections. The quick keys to the identification of protozoans, helminths, and arthropods are helpful for distinguishing pseudoparasites from harmful ones. The labeling of various stages of the color images with letters, numbers, and arrows is extremely useful.
Attention has been given to opportunistic infections found in patients with AIDS. This book opens new vistas in helping to understand the global impact of AIDS and parasitic infections. The glossary and current references provide a ready resource for those interested in learning more about host-parasite relationships.
As an extra bonus, readers will find this edition a visual feast that integrates science and the arts. This book is highly recommended reading.
The Rise of Digital Multimedia Systems
With this essay, I want to understand why interactive and relational media forms have become so ubiquitous so quickly. Comparing the nexus of cinema and nationalism with the contemporary dyad of digital media and transnationalism (or globalisation), we can ask whether digital multimedia systems have arisen to reflect and impel our contemporary psychic and social conditions. Because multimedia rarely gets ‘locked-off’, its component elements can always be pulled apart, sent back to their databases and then instantaneously rearranged into newly iterated federations. In this respect it is like our unstable contemporary lives, so buffeted with ever-altering values, opportunities, anxieties and obligations all upwelling because of globalisation, migration and multiculturalism.
1 Ian Watt, The Rise of the Novel: Studies in Defoe, Richardson, and Fielding, Harmondsworth: Penguin, 1963 (first published by Chatto & Windus, 1957), p.7.

Seeking to understand why the novel emerged so quickly and with so much influence during the early eighteenth century, Watt started from the premise that artistic forms often mimic the psychological, social and political conditions prevalent in the particular era that gives rise to them.

He contended that early novelists such as Daniel Defoe and Henry Fielding developed literary techniques for dramatising the emergence of the bourgeois individual, with its private sensibility and self-reflective interior monologue. Watt showed how novelists quickly innovated a set of textual conventions to sketch settings and evoke the innermost thought-flows and mood-swings of focal characters in imagined narrative worlds which readers could compare to their own world. And he showed how these characters might stand in and speak for the readers themselves as they tried to grasp the intricacies of an ever-altering world of proliferating detail and increased secular opportunity.

You may be wondering, already, what this has to do with interaction and computer-based media. Please bear with me. I'll get to it. But to do so I must finish and extend Watt's story of the novel.

Different from the allegory or the religious parable, which are part of the oral tradition and which reinforce established moral codes, the novel arose to address questions of personal agency and ethical innovation, to help readers scrutinise the intellectual and emotional intricacies of a new moral and political universe. To this end, the novelistic character was invented as a kind of new technology whereby readers could examine a psychic model of a possible personality and thereby measure options for themselves. Here was a cultural form that empowered readers to reflect on all the novelty that defined their changeful times. No wonder it was suddenly popular. It was needed. Equally important, the rise of a popular new cultural form not only reflects but also adds momentum to the changes that define the turbulent times.

Or to say it bluntly, cultural forms tend to get invented and become popular at exactly the time they are needed. They show us some of the occulted workings of our confusing moment. In this process, there is usually an interplay between intuition and intellection. This interplay creates discourse, which leads to analytical knowledge, enabling increased efficiency and evocative powers in whatever cultural form is being considered.
Through this process, the novel was eventually superseded (which is not to say eliminated) by a new predominant form, cinema, which emerged at a time when individual psychologies were changing yet again, this time to absorb the modern world's kinetics (hence the name: cinema).
Here was a cultural form able to represent and analyse the tumult of sensory 'attack' that assailed every individual psyche once the speedy, mechanical modes of transport, communication and commodity production became widespread during the industrial revolution. Moreover, cinema responded not only to psychological factors. Social and political forces were at play too. The start of the twentieth century, when cinema loomed, was a time when new nations and social masses were forming, when throngs were wondering how to fuse several scattered constituencies into new states. Thus in conjunction with other distance-devouring technologies, especially the railroad and the telegraph, cinema helped individuals and communities imagine unified new worlds gathered in a spatio-temporal frame where previously there had been only estranged and disconnected populations clustered in locations unable to synchronise across great administrative time-lags.

With the advent of cinema, audiences could envisage associations with far-flung people and places all meshing in 'organic' rhythms as fast as heartbeats and almost as quick as thought. The movies projected lively new casts and scenarios. For example, a new nation --a social, spatiotemporal amalgam --could be envisaged where once it had been unimaginable. Editing systems founded on the principle of montage federated new states of possibility. Seeing these new states could lead to believing. This happened in Japan, France, Britain, USA and Australia, to name just the obvious cases. (It was the official reason for the establishment of John Grierson's legendary documentary unit in Britain during the 1920s: the unit was directed to 'show the nation to the nation'.) In Australia, cinema enabled people in Gympie, Sydney and Adelaide, let's say, to share a perceptual and a conceptual frame where, before, they had been dissociated. Civic reality and cinematic possibility --each impelled the other. A nation could be construed as a new federation, and it could be imagined in place of the squabbling states that had previously been misaligned in geographical and ideological alienation.

But cinema has its limits. And this is where we can start contemplating the rise of digital multimedia systems in our own era. A definitive characteristic of the movies is the way they 'lock off' their several dynamic parts into a final version, the 'release print'. This ultimate inflexibility of cinema mirrors the way most national-scale communities responded to the turbulence of modernity by insisting that their societies first synchronise energetically to the machine world and then stabilise permanently once the new political state has been realised. In its end, cinema is a conservative form, like nationalism.

Cinema and nationalism: each serves a popular, paradoxical desire for the acknowledgement and the cessation of change. Indeed, this is one of the traits we love about cinema: it shows us the thrill of energetic do. Thus, a digital multimedia system can be both a reflection and a stimulant of an everyday reality that is constantly altered by contingencies like globalisation, scepticism, migration, multiculturalism and economic devolutions. Digital multimedia is the cultural form that reflects and impels our open-system world, our skittish life 'on the edge of chaos'.
One challenge when writing about these protean new forms is that, on the page, it is so difficult to bring in the concrete evidence. When writing about writing, one can quote an exemplary section of text and analyse it in text. By contrast, with a digital multimedia system, the commentator must evoke the exemplar textually, in an alien medium, before using words to analyse its non-verbal potency and 'shiftiness'.

Similar issues confront the cinema critic, of course. But at least cinema now has a canon of well-known 'classics' which can be nominated so readers and writers all know somewhat the thing they have gathered around. Because digital multimedia is such a new cultural form, there are very few canonical references yet; it is still difficult to have confidence that everyone knows the cited examples.

Accordingly, I need to describe an example now so that I can focus my assertions. It's a digital multimedia system that I've been developing with a team of collaborators. 3 Life After Wartime is a 'story-engine' or speculative 'conjunction-machine' that restlessly combines still images, haiku-like texts and musical sound files all responding to an extraordinary collection of crime scene photographs belonging to the New South Wales Police. The original archive is a jumble of evidence associated with actual people who have been caught in painfully real outbreaks of fate, desire or rage. Most significantly, the documents that you would expect to be attached to the pictures --the conclusive texts such as the prosecution case, the defence case, the judge's summation, the jury verdict --all these documents are missing.

Each crime scene is represented by a dozen or so different photographic negatives swaddled in a tatty old buff envelope. Scribbled on each envelope: the names of an investigating detective and a police photographer, plus an often incomplete address plus a date and the photographer's guess at the crime being documented. And that's it, that's the extent of the interpretive cues offered by the archive.

Therefore we have to work with a collection of files that are meaningfully but contentiously associated with each other. Because of the 'aftermath quality' of the pictures, we cannot help but proffer stories to account for them, but because of the dearth of accompanying information, we must accept that our accounts will always be speculative, restless, inconclusive.

After several years analysing how to use the images in a provocative and evocative street history of Sydney, we have composed a volatile sound+image device that mimics the dramatic disturbance that plays in In this respect Life After Wartime is a kind of instrument with which you continuously strum little chords in your mind so that an ever-developing concatenation of minor-key epiphanies can chime for you as you readership which is really a forensic audience, an audience looking to take charge of their own conviction, looking to construct and test rather than receive their worldview. It's an audience that knows that there are many variabilities and volatilities defining life now, so many that it is implausible to rely on one line of argument or explanation (ie serial thought), because the premises on which any one serial discourse are founded are always debateable and subject to rapid redundancy.
Instead, people are always looking to assess a multi-dimensional array of repercussions and possibilities associated with every action in the world.No longer are people merely 'consumers' who prefer to hear the singular delineation of effect following cause.Rather they scan the field of experience for the strengths, weakness, opportunities and threats prevailing in a dynamic complex of tendencies, mutations and options that constitute the life of the somewhat free-willed subject today.
Digital file-systems have arisen partly to address the need for cultural forms that enable us to think and feel in synch with the volatilities of contemporary existence.Borrowing my phrasing from Michael Joyce, I contend that digital multimedia databases have arisen and become popular because they prioritise dynamic-systems thought over structural thought and serial thought.Responding to the quickness of digital and transnational cultures, we need cultural forms that allow us to become sceptical and curious investigators.We need operable, speculative databases surging with ideational and affective elements that can be searched, combined and activated to create combinative complexes that unfold and re-align, that converge and diverge through time.And crucially this operability must be accepted as the right and responsibility not only of the author or designer but also of the participants.
As the multimedia artist David Rokeby has observed, with digital aesthetics one aspires to create relationships rather than finished artworks, and one yearns to participate in systems which 'reflect the consequences of our actions or decisions back to us'. 5 To the extent that an interactive system is relational, cross-referential and dynamically re-configurable, it is an aesthetic model of our dynamic everyday experience.
More than an informatic or technical tool, every multimedia database -even the most expedient or functional -is infused with aesthetics and semantics.
Every multimedia database involves human-computer interaction and is therefore 'dramatic' somehow.The interactive multimedia database has a history at the same time as it represents an innovative break with other representational forms such as the novel, the oil painting or the cinema feature.It has arisen to address the psychic and social dynamics of our times.It can be used for dramatic and aesthetic purposes.It can be used like music, painting or cinema: to tingle the intuition, to intertwine our emotions and our ideas, to conjure experiences of complexity and richness which help us reflect upon our everyday experiences as desiring and conspiring citizens.
As a spray of parting thoughts, can I conclude by reiterating that database thinking is open-ended and restless rather than conclusive?
Can I point out the limited value of linear-narratological theories and cinema history when analysing the cultural worth of digital multimedia systems?Can I suggest that gardening theory, ecological philosophy, and even aquarium-design practices provide more useful kits of wisdom to help us examine contemporary dynamics and complexity?
If I acted willfully in this way, what might flow from the assertion of my new freedoms?Or … How is the old, customary world being renovated?Or … Within these settings, what gaggle of testy characters might I encounter as I grant my malleable personality imaginative latitude?
And it was shaped by and for the contemporary culture.Watt shows that by examining the structural characteristics of new cultural forms, we can gain insight into periods of psychic, political and philosophical flux.By studying how aesthetic and semantic systems engage with the intellect and the sensorium of the user, we can understand the temper of the times.When a new form of art or popular communication arises and takes hold, it reflects changes that have recently occurred or are presently occurring in psychology and society.
Colliding the fleeting images with the haiku-short texts plus the changeful music and city noises, you cause all these elements to circulate so you can 'promiscuously' (even libidinously) essay multiple liaisons and then disengage before seeking again another sparking chain of connections that might light up portions of the occulted world represented by these crime scenes.

Life After Wartime is just one example of 'digital system' art. I think of it as a 'dramatic database'. Such artforms are beginning to abound now. (Consider the popularity of the Sims dynasty of fictive, faux-ecological environments.) But why do they really matter? How do they warrant serious attention from aestheticians and systems theory specialists? And why analyse them in the cultural history context that I've established here by privileging the approach pioneered in The Rise of the Novel? The operative dynamics and the recombinative readiness of the file systems in digital multimedia databases closely mirror the dynamics and recombinative readiness of contemporary post-industrial societies. This notion first came into focus for me when I read Michael Joyce's cultural history of hypertext, Of Two Minds. Early in the book, Joyce observes that hypertext is special because it is a means by which we can prioritise structural thought over serial thought. 4 He explains how the cross-referencing and branching allowed by hypertext have arisen to serve a transnationalism.

Because of the dynamics of its file structures and the integrating and operating codes applied to those files, any digital multimedia configuration or event is always ready to be dismantled and re-assembled into new alignments as soon as the constituent files have been federated. In other words, because multimedia rarely gets 'locked-off', its component elements can always be pulled apart, sent back to their databases and then instantaneously re-arranged into newly iterated federations. (Yes, in this respect it is like our unstable contemporary lives.) By thus dramatising divergence as well as convergence, a digital multimedia project can react to variant stimuli from the environment or from its investigative participants. It can reconform itself restlessly in ways that a cinema print is not designed to. Cinema dramatises convergence and world-creation at the same time as it proposes an eventual end to flux and uncertainty. With a film, the final edit is a stable state, a kingdom of kinetic excitement with a reassuring climate of completion.

Now let's compare the cinema/nationalism nexus with the contemporary dyad of digital media and transnationalism (or globalisation). How have digital multimedia systems arisen to reflect and analyse our contemporary psychic and social conditions? Like cinema, digital multimedia simultaneously reflects and shapes reality. And like cinema, digital multimedia can federate disparate elements (sounds, texts, graphics, perspectives, vistas and audio-visual rhythms) in astonishing configurations. These similarities prompted Lev Manovich in his influential The Language of New Media to create a myth about multimedia being first generated literally out of cinematic material, out of old film stock stippled with data-entry integers in Konrad Zuse's 'digital computer' constructed in 1936. 2
But unlike cinema (and unlike nationalism), digital multimedia produces syntheses that are always explicitly provisional. (Yes, in this respect it is like Life After Wartime, which offers you germs of stories that you both doubt and appreciate.) With this sceptical yet postulative attitude, you wonder about the world that is witnessed in the pictures. You speculate about what might have happened. And you test those speculations against the contextually established knowledge, against whatever is felt to be true and likely for that place and time, given what is already agreed to have happened there and then. Again and again, without rest, you must speculate and test. You are never receiving a single line of interpretation. Rather, you are amidst an ever-developing dramatic hypothesis that is always offering many different foci and perspectives even as you pursue your own line of inquiry.

2 Lev Manovich, The Language of New Media, Cambridge (Mass.): MIT Press, 2001.
3 Kate Richards, producer; Greg White, programmer and sound design; Aaron Rogers, graphic design; Chris Abrahams, music and sound design.

Modulations accrue until eventually a kind of debateable meta-narrative builds up to account for the entire image-world of the archive. Crucially, each investigator will gather up a different set of micro-narratives and moods, and each investigator will tend toward a larger story in idiosyncratic and personally stamped ways. Each investigator will encounter qualities of themselves as well as qualities of the archive. In part, it's yourself you find when you delve into this interactive archive. But it's yourself in relation to real patterned evidence shaped by a real patterned world. Engaging with Life After Wartime, you quickly deduce that you are not a reader or a receiver of this artwork. You are an investigator. You are figuring 'what if' speculations. Sceptically and imaginatively, you are making and interrogating a world of meaning even as you are attuning to the designed yet dynamic systems of sense that you discover in the work.
5 David Rokeby, "Transforming Mirrors: subjectivity and control in interactive media", in Simon Penny (ed.), Critical Issues in Electronic Media, Albany: State University of New York Press, 1995, p. 152.

At least it seems appropriate to pose questions like this, to finish an inquest into investigative cultural forms. And it seems right to finish by looking into nature/culture systems such as gardens. For they do indeed 'reflect the consequences of our actions and decisions back to us'. At least that's what I think I need to investigate now, as I seek to turn these thoughts into something as useful as The Rise of the Novel.
Predicting Radiotherapy Patient Outcomes with Real-Time Clinical Data Using Mathematical Modelling
Longitudinal tumour volume data from head-and-neck cancer patients show that tumours of comparable pre-treatment size and stage may respond very differently to the same radiotherapy fractionation protocol. Mathematical models are often proposed to predict treatment outcome in this context, and have the potential to guide clinical decision-making and inform personalised fractionation protocols. Hindering effective use of models in this context is the sparsity of clinical measurements juxtaposed with the model complexity required to produce the full range of possible patient responses. In this work, we present a compartment model of tumour volume and tumour composition, which, despite relative simplicity, is capable of producing a wide range of patient responses. We then develop novel statistical methodology and leverage a cohort of existing clinical data to produce a predictive model of both tumour volume progression and the associated level of uncertainty that evolves throughout a patient’s course of treatment. To capture inter-patient variability, all model parameters are patient specific, with a bootstrap particle filter-like Bayesian approach developed to model a set of training data as prior knowledge. We validate our approach against a subset of unseen data, and demonstrate both the predictive ability of our trained model and its limitations. Supplementary Information The online version contains supplementary material available at 10.1007/s11538-023-01246-0.
Introduction
Radiotherapy remains a mainstay of cancer treatment, with approximately half of all cancer patients receiving radiotherapy as part of their standard of care [1][2][3].It is common for a patient's course of treatment to be determined solely by tumour etiology, location, and stage.Other patient-specific factors, such as the intrinsic radiosensitivity and composition of a tumour, are not typically used to inform protocol selection in the clinic [4].Clinical studies suggest that patients at a similar tumour, node, and metastasis (TNM) stage, and with comparable pretreatment tumour volumes, may respond differently to the same radiotherapy fractionation schedule [5,6].Mathematical models have the potential to capitalise on real-time clinical observations to both predict patient specific responses and guide clinical decision-making.It is hoped that such a tight integration could eventually be used to personalise fractionation schedules either a priori or adaptively during a patient's course of treatment [7].
Challenges associated with the application of mathematical models to interpret data and draw predictions are perhaps most acute for single-patient clinical data.Models must be sufficiently complex to reproduce the full gamut of patient responses [8][9][10].However, clinical data are often limited, typically comprising solely noisy measurements of the gross tumour volume (GTV) at sparse time intervals throughout a patient's course of treatment [10,11].The necessity to start treatment as soon as possible after diagnosis means that pre-treatment predictions are often drawn from only one or two observations.Consequently, models aimed at clinical applications are relatively simple [12,13], incorporate limited biological detail, and often describe only the time-evolution of the GTV [6,12,13].While simplicity can elicit parameter identifiability and avoid overfitting, predictions can be poor-or even misleading-if a model is so simple as to be unable to capture the range of observed (possible) responses.The dangers of overfitting are particularly pronounced for single-patient clinical data used for prediction, where model validation must be assessed pre-treatment; in diametric opposition to experimental data, technical replicates are never available.It is, therefore, crucial to validate models across a wide range of responses, and to accurately quantify uncertainty in predictions used in clinical decision-making [10].
In this work, we present a predictive mathematical modelling framework using clinical GTV data from a previously published cohort of head-and-neck cancer patients who exhibit a variety of treatment responses (Fig. 1) [14,15].The primary goal of our framework is to integrate previously observed clinical observations to predict the time course of radiotherapy response in new patients.To demonstrate our framework, we focus our analysis on prediction of the tumour volume progression in four patients presented in Fig. 1 and in our previous work [16]: these patients are excluded from the otherwise randomly-selected cohort of patients used to train the mathematical model.All patients in the clinical data set receive a standard radiotherapy fractionation schedule, comprising fractions of 2 Gy delivered on weekdays over a four-to sevenweek period [17].To keep our study as widely applicable as possible, we work with the most fundamental, albeit limited, mode of single patient data.Computed tomography (CT) scans are routinely used to image tumours pre-treatment at both the diagnosis and treatment planning stages (Fig. 1a) [18][19][20].Further scans, such as cone beam CT, may be taken upon the delivery of each fraction but are usually used solely for alignment purposes; for our data, scans were available once per week during treatment.These CT scans are not of a high spatial resolution, are noisy, of a low contrast, and do not differentiate heterogeneity in tumour composition.As such, only noisy measurements related to an estimate of the GTV are available at relatively sparse intervals throughout each patient's course of treatment (typically, once per week).The heterogeneity in radiotherapy response exhibited in Fig. 1b-e raises several important questions: in particular, how early into treatment can a practitioner determine if a patient is responsive, and to what extent is it possible to predict the final tumour volume during treatment using only GTV measurements?Given the side-effects associated with radiotherapy, and possible indirect costs of switching treatments at too late a TNM stage, any improvement in prediction accuracy is of great clinical value.
Mathematical models of tumour progression vary significantly in complexity; ranging from simple phenomenological models of GTV, such as logistic and Gompertz growth [12,[21][22][23][24][25][26][27], to highly detailed spatial models that capture multiple facets of tumour heterogeneity [16,23,[27][28][29].The limitations and challenges imposed by clinical data yield an overrepresentation of the former, meaning that the functional forms for both growth and radiotherapy response are motivated almost entirely by empirical observations rather than the underlying biological mechanisms.Yet, it is now well established that intra-tumour heterogeneity and the tumour microenvironment play important roles in overall growth, and may significantly influence treatment outcome [16,[29][30][31][32][33].Motivated by these findings and in consideration of the noisy data available for prediction, we take an intermediate approach and utilise a two compartment extension of the so-called proliferation-saturation-index (PSI) model of Prokopiou et al. [12] and later Poleszczuk et al. [26].This choice of ordinary differential equation (ODE) model balances simplicity, through a phenomenological description of radiation-free tumour growth saturation, with biological detail, through a radiotherapy response corresponding to a transfer of cellular material from a living to a dead state.Compared with purely statistical or machine learning models, our mathematical approach allows a full interpretable integration of clinical data from individuals, whereby the radiotherapy schedule is imported directly from the reported patient fractionation schedule.Finally, our model contains sufficient detail to allow us to quantify the potential utility of expanding clinical data collection to include information relating to tumour composition in addition to GTV.
We take a Bayesian pseudo-hierarchical approach to inference and model calibration, by leveraging observed population-level information to draw predictions and quantify corresponding levels of prediction uncertainty. To account for inter-patient heterogeneity, all model parameters are allowed to vary between patients. A schematic of the approach is provided in Fig. 2.
Methods
In this section, we outline the clinical data, and the mathematical and statistical methodology developed and later employed in this work. First, in Section 2.1 we describe and present the clinical data set used for quantitative analysis, which demonstrates four disparate treatment response classifications. Secondly, in Section 2.2 we present a mechanistic mathematical model of tumour volume progression, along with a set of objective criteria that we use to classify model realisations into the four observed classifications. Subsequently, in Section 2.3 we present a statistical model that connects model predictions to clinical measurements. In Section 2.4 we outline the novel statistical methodology employed in the analysis. Finally, in Section 2.5 we outline the procedure for resampling from the joint posterior to produce synthetic patient data. A Julia implementation of the model and inference algorithm, along with data used in the analysis, are available on GitHub*.
Tumour volume data
Current clinical practice involves two CT scans collected for each patient; one at diagnosis and one at treatment planning.These scans are then used to estimate GTV [18][19][20].While it is feasible to obtain further scans at the time of delivery of each fraction, these scans are often of a low quality, being used primarily to position the patient.As such, they are not typically stored for research purposes.
In this paper, we use retrospective volumetric data, collected weekly, from head-and-neck cancer patients, across multiple anatomical locations, including the oropharynx, tonsil and base of tongue.Patients were immobilised via a thermoplastic mask with or without bite block.
Isocenter and positioning were verified daily via orthogonal kV or CBCT imaging. Each CT scan was segmented by the same radiation oncologist, giving weekly tumour volumes throughout treatment in addition to a volume measurement at the treatment planning stage. Weekly cone beam CT (CBCT) scans were extracted from the record and verify system (Mosaiq, Elekta).
Suitable CBCTs with minimal artifact were selected for contouring. Clinical target volume (CTV) was created from GTV with a 5 mm isotropic expansion. CTV was then trimmed from barriers to spread including air, bone, fascial planes, and in some cases muscle. Planning target volume (PTV) was created from CTV via 3 mm isotropic expansion. An example suite of contoured CT scans from a single patient is shown in Fig. 1a. The GTV data shown in Fig. 1b-e correspond to those presented and discussed in Lewin et al. [16]. In total, GTV data from 51 patients was collected and made available as supplementary material. All methods were carried out in accordance with the institutional policies of the Moffitt Cancer Center. The clinical protocol covering patient data and methods used in this paper was approved by the Moffitt Cancer Center's Institutional Review Board (IRB). Since this is a retrospective study using de-identified data of adult human subjects, informed consent was waived by the IRB.

* https://github.com/ap-browning/clinical-inference
Mathematical model

Given the limitations imposed by GTV clinical data, we present a relatively simple mathematical model that is able to capture the four classes of tumour response observed in Fig. 1.
In particular, we extend the PSI model [26] to track a living tumour volume L(t) and a necrotic (dead) volume N(t), with radiotherapy delivered at times t_i, where δ(t − t_i) is a delta function representing a transfer of a volume γL from the living compartment to the dead compartment, such that γ (units of d−1) quantifies the strength of radiotherapy response. We assume further that necrotic material is degraded at a constant rate ζ (units of d−1). To capture inter-patient heterogeneity, all parameters are allowed to vary between patients [47].
The data suggest that initial GTV is comparable between responsive and poorly responsive patients (Table 1). Therefore, we normalise L(t) and N(t) with the initial GTV such that V(0) = 1, and describe the initial tumour composition as L(0) = 1 − ϕ0 and N(0) = ϕ0, where 0 ≤ ϕ0 ≤ 1 is an unknown, patient-specific parameter to be estimated that represents the proportion of the tumour occupied by dead material at t = 0. We note further that the interpretation of the carrying capacity parameter K is with respect to the measured initial GTV. Thus, GTV measurements presented throughout the paper may be interpreted as the fold change (FC) compared to the initial GTV. The interpretation of all other parameters remains unchanged by this choice of units.
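The displayed model equations do not survive in this extraction, so the following is a minimal sketch of one plausible form of the two-compartment extension: logistic growth of the living volume L limited by the total volume V = L + N, continuous necrosis at rate η, decay of necrotic material at rate ζ, and an instantaneous transfer of γL from L to N at each fraction time. The exact functional form, the forward-Euler integrator, the parameter values and the names (TumourParams, simulate_gtv) are illustrative assumptions and are not taken from the authors' Julia implementation.

```julia
# Minimal sketch of the assumed two-compartment model; not the authors' implementation.
struct TumourParams
    λ::Float64    # cell proliferation rate (per day)
    K::Float64    # carrying capacity, relative to the initial GTV
    γ::Float64    # radiotherapy response strength (per fraction)
    ζ::Float64    # decay rate of necrotic debris (per day)
    η::Float64    # cell necrosis rate (per day)
    ϕ0::Float64   # initial necrotic proportion
end

# Simulate the normalised GTV, V(t) = L(t) + N(t), at observation times `tobs` (days),
# applying an instantaneous transfer of γL from living to dead at each dose time.
function simulate_gtv(p::TumourParams, tobs; doses = Float64[], dt = 0.01)
    L, N = 1.0 - p.ϕ0, p.ϕ0             # normalised so that V(0) = 1
    V = zeros(length(tobs))
    nextobs, nextdose = 1, 1
    for t in 0.0:dt:maximum(tobs)
        while nextobs <= length(tobs) && tobs[nextobs] <= t + dt / 2
            V[nextobs] = L + N          # record the GTV at each measurement time
            nextobs += 1
        end
        while nextdose <= length(doses) && doses[nextdose] <= t + dt / 2
            L, N = L - p.γ * L, N + p.γ * L    # instantaneous radiotherapy transfer
            nextdose += 1
        end
        # assumed between-dose dynamics: logistic growth of L limited by total volume,
        # continuous necrosis at rate η, and necrotic decay at rate ζ
        dL = p.λ * L * (1 - (L + N) / p.K) - p.η * L
        dN = p.η * L - p.ζ * N
        L += dt * dL
        N += dt * dN
    end
    return V
end

# Example: 2 Gy weekday fractions over six weeks, weekly scans (illustrative values only).
doses = [14.0 + 7d + w for d in 0:5 for w in 0:4]
p = TumourParams(0.07, 2.0, 0.05, 0.1, 0.02, 0.2)
println(simulate_gtv(p, collect(0.0:7.0:56.0); doses = doses))
```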
In the supplementary material (Figs.S1 and S2), we perform a parameter sweep across parameters relating to necrosis and necrotic material decay (η and ζ, respectively), for a patient subject to daily doses of radiotherapy on weekdays over a six week period, to verify that the model is able to reproduce the wide range of dynamics observed in the clinical data.While the parameter sweep is not exhaustive, the results demonstrate that varying only these two parameters is sufficient to produce the range of responses observed in Fig. 1.
Classifying responses
We observe four classes of qualitative response within the clinical data, as highlighted in Fig. 1 and summarised in Table 1.In Fig. 1b, the patient responds well to radiotherapy, with the tumour decreasing markedly in volume throughout treatment.Hereafter, we refer to a patient exhibiting this type of behaviour as a fast responder.By contrast, there are patients for whom the effects of radiotherapy appear to be marginal when viewed in terms of tumour volume over time alone, as is the case in Fig. 1c.We classify these patients as poor responders.In a number of cases, the initial response of the tumour to radiotherapy appears to be favourable, but the response plateaus in the latter stages of treatment, resulting in a non-negligible final tumour volume (Fig. 1d).Such patients are classified as having a plateaued response.However, this radiographic volume may subsequently recede in the weeks after radiotherapy.Occasionally, as in Fig. 1e, a patient may appear to exhibit continued tumour progression throughout the first few weeks of radiotherapy before showing a delayed response, characterised by a decrease in tumour volume towards the end of treatment.We characterise this type of response as pseudo-progression.
We classify a model realisation into one of four classes of response based on a standard patient receiving doses on weekdays over a six-week period, with CT measurements taken at the start of each treatment week and at the time of the final dose (the pre-treatment volume measurement is not used to classify patients). Classification is based on the set of noise-free synthetic measurements generated from each model realisation.

Table 1. Prior classification of each patient response class, based on the full posterior, p(θ | {Di}), and the second-level prior p2(θ), the latter corresponding to an expanded kernel density estimate constructed from samples of the full posterior. The statistics related to the initial volume are based on the classifications of the prior samples corresponding to each patient in the training set, hence non-integer counts arise due to probabilistic classification of patients. An approximate statistical test, based on Welch's approximate unequal variance t-test [48], indicates no statistically significant difference between fast and poor responders (P = 0.582), nor between responders (*) and poor responders (P = 0.557). Asterisks indicate classifications corresponding to patients who show an eventual response.

The specific thresholds chosen in the classification algorithm yield excellent results that reliably distinguish between each class (Fig. S4). However, the relatively small number of plateaued responders and pseudo-progressors in the training set (Table 1) suggests that the criteria will need to be reassessed should more data become available.
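A hedged sketch of how the classification rules quoted later in this document might be applied to a vector of noise-free weekly measurements follows. The pseudo-progressor and plateaued-response thresholds follow the stated criteria; the poor-responder threshold and the order in which the rules are checked do not survive in this extraction and are placeholder assumptions.

```julia
# Sketch of the response classification applied to noise-free weekly measurements
# V[1], ..., V[end], with V[1] taken at the start of treatment. `poor_threshold` and
# the ordering of the checks are placeholder assumptions.
function classify_response(V::Vector{Float64}; poor_threshold = 0.8)
    rates = diff(V)                                        # week-to-week changes
    if V[2] > 1.02 * V[1]                                  # second measurement > 102% of the first
        return :pseudo_progressor
    elseif V[end] > poor_threshold * V[1]                  # assumed criterion for a poor response
        return :poor_responder
    elseif V[end] > 0.2 * V[1] && abs(rates[end]) < 0.1 * maximum(abs.(rates))
        return :plateaued_response                         # final change small relative to the largest change
    else
        return :fast_responder                             # not in any other classification
    end
end

# Example: a steadily shrinking trajectory is classified as a fast responder.
classify_response([1.0, 0.85, 0.6, 0.4, 0.25, 0.15, 0.1])  # => :fast_responder
```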
Statistical model
We take a standard approach and assume that CT scan data are independent and normally distributed about the model prediction [49], where the standard deviation is assumed to be a linear function of the predicted volume, such that the statistical model captures both additive and multiplicative normal noise: α1 represents an absolute contribution to the variance, and α2 a relative contribution.
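The explicit formulas for the observation model are not reproduced in this extraction; a minimal sketch, assuming measurements that are normal about the model prediction with standard deviation σ(V) = α1 + α2·V (a linear function of the predicted volume), is given below.

```julia
# Assumed observation model: each measurement is normal about the model prediction,
# with standard deviation σ(V) = α1 + α2*V (additive plus multiplicative noise).
normal_logpdf(x, μ, σ) = -0.5 * log(2π * σ^2) - (x - μ)^2 / (2σ^2)

loglikelihood(Vobs, Vpred, α1, α2) =
    sum(normal_logpdf(Vobs[j], Vpred[j], α1 + α2 * Vpred[j]) for j in eachindex(Vobs))
```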
While the dynamical parameters are assumed to vary between patients, we assume that the noise parameters remain fixed.Therefore, we pre-estimate the noise parameters α 1 and α 2 by first inferring them alongside dynamical parameters for each patient.We then pool an equal number of noise parameter posterior samples for each patient and approximate (α 1 , α 2 ) as the marginal posterior mode.We are motivated to take this relatively standard approach of pre-estimating the noise parameters to reduce both the dimensionality of the parameter space and the complexity of the statistical methodology.
Bayesian inference
An important difference between clinical and experimental data relates to the sample size: in clinical studies, each patient undergoes therapy only once. Given that patients are highly heterogeneous and data are relatively limited (Fig. 1), this poses a significant statistical challenge for computational inference. To account for this, we take a pseudo-hierarchical approach to inference and prediction by first training the model on a subset of the data (the training set).
We are motivated to develop this novel approach to inference as opposed to a more standard Bayesian hierarchical approach as there is no sensible means by which to propose a particular distributional form for the joint parameter distributions at the population-level: given the distinct classes of response observed in Fig. 1, for example, we expect the joint parameter distribution to be multimodal.The correlation structure between model parameters is also unclear.
From a full cohort of 51 patients, we randomly select a group of 40 patients to act as the training set; these patients represent those that have been observed throughout an entire course of treatment, prior to the present. For each patient in the training set, we assume that initial knowledge about the model parameters is encoded in a "first-level prior", p1(θ), where θ = (log λ, log K, log γ, log ζ, log η, log ϕ0) (Fig. 2). We then update our knowledge about the parameters pertaining to patient i using Bayes' theorem, such that p(θ | Di) ∝ p(Di | θ) p1(θ), where Di represents data (including both volume measurements and the radiotherapy schedule) for patient i. We choose p1(θ) to be an independent multivariate uniform (see Table 2 and Fig. 3), an uninformative choice.
The posterior for patient i can be interpreted as the full posterior, conditioned on knowledge that the parameters relate to patient i. The full posterior can be obtained by marginalising over all patients in the training set and is given by p(θ | {Di}) = Σi wi p(θ | Di) (Eq. 7), where wi = P(i) represents the prior probability (i.e., weighting) of patient i. The result in Eq. (7) follows immediately from Eq. (6) by the law of total probability. For simplicity, we set wi = const; however, such weights may be allowed to differ if additional knowledge informs patient similarity, for example, based on characteristics known to affect radiotherapy response, such as the clinical stage or age of a patient [50]. Another way to interpret the full posterior is as a uniform mixture of the individual-level posterior distributions. We then denote the full posterior as the "second-level prior", p2(θ), which represents our knowledge about the parameters when analysing new patients (we drop notational dependence on already observed data for convenience) (Fig. 2). An interpretation of our procedure is to identify the similarity between the new patient and the observed treatment outcomes for patients in the training set, and to combine the additional knowledge obtained from past patients when predicting outcomes for the new patient. In Fig. 3 we compare the first-level prior p1(θ) to the full posterior, and in Fig. 4 we show pairwise marginal distributions of samples from the full posterior, Eq. (7).
Given a possibly temporally incomplete set of measurements from a new patient, Dnew, the posterior distribution of the parameters is again given by Bayes' theorem, pnew(θ | Dnew) ∝ p(Dnew | θ) p2(θ). A simple technique to obtain a set of weighted samples from pnew(θ | Dnew) is to apply a bootstrap particle filter to pre-obtained samples from p2(θ). Since patients in the training set are weighted equally, these may comprise a concatenation of samples from each posterior (we obtain these using an adaptive MCMC algorithm [51]; diagnostic statistics and convergence plots are given as supplementary material). An advantage of the bootstrap particle filter approach is that it requires minimal computational effort to update the posterior for new patients. The primary limitation introduced by this choice is that we cannot distinguish between parameters that vary between patients and those that are fixed: hence, we pre-estimate and fix the noise parameters in this work.
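A sketch of the bootstrap-particle-filter-style update described above is given below, reusing simulate_gtv and loglikelihood from the earlier sketches. Multinomial resampling is used here for simplicity; the authors' resampling scheme and function names may differ.

```julia
using Random

# Weight pre-drawn samples from p2(θ) by the likelihood of the new patient's (possibly
# incomplete) measurements, then resample with replacement according to the weights.
function filter_new_patient(prior_samples, tobs, Vobs, doses, α1, α2; rng = Random.default_rng())
    logw = [loglikelihood(Vobs, simulate_gtv(θ, tobs; doses = doses), α1, α2)
            for θ in prior_samples]
    w = exp.(logw .- maximum(logw))            # stabilise before normalising
    w ./= sum(w)
    cdf = cumsum(w)
    # multinomial resampling with replacement
    [prior_samples[min(searchsortedfirst(cdf, rand(rng)), length(cdf))] for _ in prior_samples]
end
```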
In practice, this approach may be problematic since patients in the training set are unlikely to be identically representative of new patients, particularly for small training sets (in our case, n = 40).In the bootstrap particle filter, this would lead to a small number of heavily weighted particles (that may or may not produce model realisations similar to the new patient data).We address this potential issue by forming p 2 (θ) by resampling perturbed particles from p(θ|{D i } n i=1 ) using a multivariate normal distribution with covariance matrix, denoted Σ ε , constructed by expanding the covariance matrix of Silverman's rule for kernel density estimation, where β is an expansion factor (we choose β = 2), m is the number of samples of θ|{D i } n i=1 and Σ θ is the covariance matrix of the samples.We reject samples outside the support of the first-level prior p 1 (θ) (see Table 2), in effect constructing p 2 (θ) as a kernel density estimate with truncated multivariate normal kernels.This approach is also similar to a one-step sequential Monte Carlo algorithm [52].
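The expression for Σε does not survive in this extraction. The sketch below assumes Silverman's multivariate rule-of-thumb covariance, scaled by β² with β = 2, and rejection of perturbed samples that fall outside the box support of the first-level prior; the precise scaling used by the authors may differ.

```julia
using LinearAlgebra, Random, Statistics

# Form the "second-level prior" p2(θ): resample rows of the pooled posterior sample
# matrix (m samples × d parameters) and perturb with multivariate normal noise whose
# covariance expands Silverman's rule-of-thumb bandwidth by a factor β.
function second_level_prior(samples::Matrix{Float64}, lower, upper, nout;
                            β = 2.0, rng = Random.default_rng())
    m, d = size(samples)
    Σθ = cov(samples)                                            # sample covariance of θ
    Σε = β^2 * (4 / (d + 2))^(2 / (d + 4)) * m^(-2 / (d + 4)) * Σθ
    A = cholesky(Symmetric(Σε)).L                                # for correlated perturbations
    out = Matrix{Float64}(undef, nout, d)
    i = 1
    while i <= nout
        θ = samples[rand(rng, 1:m), :] .+ A * randn(rng, d)
        if all(lower .<= θ) && all(θ .<= upper)                  # truncate to the first-level prior
            out[i, :] = θ
            i += 1
        end
    end
    return out
end
```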
Quantifying goodness-of-fit
We quantify goodness-of-fit using the so-called Bayesian R2 statistic [53], defined for a single posterior sample by R2 = Var(Vfit) / [Var(Vfit) + Var(Vfit − Vobs)], where Vfit denotes the set of fitted values, and Vobs denotes the set of observed values. A given posterior distribution yields a distribution of R2 statistics: in this work, we report the median of the resultant distribution. Similarly to the frequentist R2 statistic, a Bayesian R2 statistic of unity indicates that the model captures all data variability (i.e., the variance of residuals, Var(Vfit − Vobs), is zero), while a Bayesian R2 statistic of zero indicates that all fitted values lie on a horizontal line (hence, we expect low R2 statistics for poor responders).
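A short sketch of the Bayesian R² computation, assuming the variance-decomposition form consistent with the description above, is given below.

```julia
using Statistics

# Bayesian R² for a single posterior sample (assumed variance-decomposition form):
# variance of the fitted values divided by the variance of the fitted values plus
# the variance of the residuals. The median over posterior samples is reported.
bayes_r2(Vfit, Vobs) = var(Vfit) / (var(Vfit) + var(Vfit .- Vobs))

median_bayes_r2(fits, Vobs) = median([bayes_r2(f, Vobs) for f in fits])
```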
Generation of synthetic patient data
We generate synthetic patient data by resampling parameters from the full posterior and exposing each synthetic patient to the standard course of treatment. The proportion of samples corresponding to each class is shown in Table 1.
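A sketch of how one synthetic patient might be generated under these assumptions (resample a parameter vector, simulate the standard schedule with simulate_gtv, then add noise from the assumed observation model) follows; names and noise form are carried over from the earlier sketches.

```julia
using Random

# Generate one synthetic patient: resample a parameter vector from the pooled posterior
# samples, simulate the standard schedule, and corrupt with the assumed noise model.
function synthetic_patient(posterior_samples, tobs, doses, α1, α2; rng = Random.default_rng())
    θ = posterior_samples[rand(rng, 1:length(posterior_samples))]
    V = simulate_gtv(θ, tobs; doses = doses)
    Vobs = [v + (α1 + α2 * v) * randn(rng) for v in V]          # σ(V) = α1 + α2*V
    (θ = θ, Vtrue = V, Vobs = Vobs)
end
```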
First, it is evident from results in Fig. 4 that the predicted value of the initial necrotic proportion, ϕ0, does not vary between fast and poor responders. This is seen in bivariate densities between ϕ0 and all other parameters. The statistic does, however, appear to distinguish pseudo-progressors from the other response types: estimates for ϕ0 suggest that tumours in such patients contain a much larger necrotic region pre-treatment. Faster responders are characterised in relation to poor responders by both a higher radiotherapy sensitivity, γ, and a higher necrotic material decay rate, ζ. The necrotic material decay rate also appears to distinguish poor, fast, and plateaued responders: poor responders through a very low decay rate, plateaued responders by a high decay rate, and fast responders by an intermediate rate. Finally, results in Fig. 4 suggest that pseudo-progressors are characterised by both a high cell proliferation rate and a correspondingly high radiotherapy response.
Model predictions
Given that the training set is relatively small, a potential obstacle is that responses of new patients may not be similar enough to those of existing patients to produce reliable predictions; indeed only 6.0% of posterior samples correspond to patients that exhibit a poor response to treatment.To address this with the existing data, we "expand" the full posterior to form the second-level prior, p 2 (θ), by resampling and perturbing (essentially, forming p 2 (θ) as a multivariate kernel density estimate based on the full posterior, with a kernel variance expanded from Silverman's rule to account for new patient dissimilarity).The updated proportions, based on 100,000 samples from p 2 (θ), are given in Table 1, and suggest an updated prior probability of a new patient exhibiting a response at 65.3%.An alternative approach that is beyond the scope of the current work would be to stratify perturbed full posterior samples based on an external and accepted classification ratio: for example, to choose the prior weights {w i } to achieve a desired prior ratio of patients in each classification.These results highlight the difficulty of classifying patient outcomes based on a relatively small cohort of patients with little prior parameter knowledge.Using the first-level prior (i.e., excluding all knowledge gained through analysis of the training data) further reduces the prior probability of an eventual response to 44.7%.
We first assess the predictive ability of our trained model by generating data from four synthetic patients exhibiting a fast response (Fig. 5a); a poor response (Fig. 5b); a plateaued response (Fig. 5c); and pseudo-progression (Fig. 5d).Given that each set of patient-specific parameters is resampled from the full posterior, we expect each synthetic patient to display a similar response to at least one patient in the training set.Additionally, as each set of synthetic data is generated by the mathematical model, we are guaranteed that the observed response is within the possible gamut of model responses.We provide a table summarising the parameter values used for each patient in the supplementary material (Table S1).
In Fig. 5 we simulate real-time predictions by calibrating and forming predictions each week throughout treatment (i.e., at the time of each weekly CT scan).We show predictions made at the start of treatment (t = 14 d), and every second week following (t = 28 d, 42 d and 56 d).
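A sketch of this weekly re-calibration loop, reusing filter_new_patient and simulate_gtv from the earlier sketches, is given below; the summary statistics (mean and central 95% interval of the predicted final GTV) follow the text, while the function signature is illustrative.

```julia
using Statistics

# Re-draw predictions at each weekly scan: condition on the measurements collected so
# far, push each resampled parameter set to the end of treatment, and summarise the
# predicted final GTV.
function weekly_predictions(prior_samples, tobs, Vobs, doses, α1, α2; tfinal = 56.0)
    summaries = NamedTuple[]
    for k in 2:length(tobs)                              # one prediction per on-treatment scan
        post = filter_new_patient(prior_samples, tobs[1:k], Vobs[1:k], doses, α1, α2)
        finals = [simulate_gtv(θ, [tfinal]; doses = doses)[1] for θ in post]
        push!(summaries, (t = tobs[k], mean = mean(finals),
                          lo = quantile(finals, 0.025), hi = quantile(finals, 0.975)))
    end
    return summaries
end
```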
The results shown for t = 56 d correspond to retrospective analysis of the trajectory, after all measurements have been taken, while the predictions drawn at t = 14 d are made pre-treatment, before any radiotherapy response has been observed.As a class under-represented in the data set and hence the prior, predictions made for the pseudo-progressor at t = 28 d almost entirely miss the true trajectory.Consequently, the single data point at t = 28 d that sees a decrease is judged alongside both prior knowledge and potential measurement noise.
To quantitatively compare the time-evolution of prediction confidence, we plot in Fig. 6a-d the evolution of posterior information relating to the radiotherapy response, γ, and in Fig. 6e-h the time evolution of predicted final tumour volume (i.e., the fold-change GTV at t = 56 d compared to the measurement at t = 0 d). The most immediate result is that both the fast responders and pseudo-progressors yield a posterior density for γ higher than that for the poor responders. The results in Fig. 6e show that the predicted final GTV quickly narrows around the true value for the fast responder, but takes longer for the plateaued responders and pseudo-progressors. At the same time, the results in Fig. 6f show that by two weeks into treatment, the model predicts with 95% confidence that a patient will not see a final GTV less than 50% of that pre-treatment. The results in Fig. 6g highlight again the difficulties faced when drawing predictions for patients exhibiting relatively rare responses: working with synthetic data eliminates the question of model misspecification; however, the 95% credible intervals produced from predictions drawn at t = 21 d and t = 28 d do not cover the true value (which can be calculated by resimulating data from each synthetic patient without measurement noise).
Given GTV alone, it is not until t = 42 d (four weeks into treatment) that the model predicts with 95% confidence that the patient's tumour will eventually see a reduction in volume.This is in line with previous reports that mid-treatment responses correlate with outcome [15].
Figure 6. Predictions for four synthetic patients. For the four patients analysed in Fig. 5 we show (a-d) the evolution of the posterior distribution relating to radiotherapy response, γ; and (e-h) the evolution of predictions for the relative tumour volume at the conclusion of treatment. In all cases, data up to, and including, the relevant time are included in the prediction. In (e-h), we show the mean (black disc), and both 50% and 95% credible intervals for the final tumour volume, together with the true final tumour volume (red dashed), both given as the fold-change (FC) relative to the initial volume, V(0). The true values of γ used for each patient are given as supplementary material (Table S1).
Clinical data
Now that we have validated the model's ability to predict the time course of GTV for synthetic patients with a variety of radiotherapy responses, we turn to focus on drawing real-time predictions from unseen clinical data.
In Figs. 7 and 8, we repeat the analysis performed in Figs. 5 and 6 for the four patients initially exhibited in Fig. 1. For both the fast responder and the patient exhibiting a plateaued response, the model predicts with 95% confidence that the patient will eventually achieve an overall reduction in tumour volume. Indeed, for both of these patients the precision in predictions of the final tumour volume narrows quickly around what is eventually observed. In contrast, at day 28 the patient that eventually exhibits a poor response sees roughly half of all predicted trajectories indicating an eventual increase in volume, and half a decrease. Throughout treatment, the mean prediction remains around the eventually observed value of unity. The results for the pseudo-progressor mirror those observed in the synthetic data: the predictions are perhaps initially misleading due to the relatively small (2.3%) prior probability of a patient exhibiting such a response.
To quantitatively explore the model's ability to predict patient classification, in Fig. 9 we plot the posterior classification probabilities for predictions drawn at each time point, in addition to a pooled classification probability of a patient displaying a response (i.e., not a poor responder).Initially, at t = 0 d, the classification probabilities represent those in the secondlevel prior, p 2 (θ) (Table 1).The most notable results are for the relatively rare classifications of plateaued response and pseudo-progressor.In the case of the former, the patient has a posterior classification mode (i.e., the most likely classification given all the information collected during the patients' course of treatment) of a fast responder.This again highlights the difficulties distinguishing plateaued responses from observation noise seen in faster responders.
The pseudo-progressor, however, begins to gain a correct posterior classification probability by t = 42 d, just over four weeks into treatment. The classifications following the first measurement at t = 14 d are qualitatively similar to those observed in the prior; subsequent measurements, which show an increase in gross tumour volume, lead to classification as a poor responder, highlighting the limitations of the currently trained model in distinguishing pseudo-progressors from poor responders.
To explore the relative value of existing and newly collected information, in the supplementary material we produce additional results that show temporal predictions for both synthetic and validation patients, produced using the uninformative (i.e., first-level) prior.These results correspond to a prior probability of 44.7% that a patient will eventually respond to treatment; much lower than that estimated from analysis of the training data (94.0%) and that in the second-level prior (65.3%).For the synthetic patients presented in Fig. 5, the results show a decrease in prediction fit (as measured by Bayesian R 2 ) for predictions drawn prior to t = 28 d.
For times later than t = 42 d, predictions drawn using both the uninformative and informative priors are comparable.Similar results are seen for the validation patients presented in Fig. 7, although the differences are less pronounced from the third-post-radiotherapy observation point onwards.The difference between results for the synthetic and validation patients is expected: the informative prior is known to be representative for the synthetic patients, whereas we do not have this guarantee for the validation patients.Hence, observed information is more important than prior information in newly informed patients that are not well-represented by the prior.
Value in collecting measurements of tumour heterogeneity
The weekly GTV used for our analysis already exceeds clinical practice of just two pretreatment CT scans per patient.To assess the potential value of collecting higher-quality scan data that additionally enables identification of the tumour's necrotic volume, we repeat our analysis of the synthetic patient in Fig. 5a given that noisy measurements of both V (t) and N (t) are now available.The results in Fig. 10a,b show that, by day 28, relatively precise predictions relating to the trajectories of both variables can now be made.In Fig. 10c we quantitatively compare predictions for the final GTV in both scenarios.As expected, more precise estimates can be made should data relating to both variables be available.
In Fig. 11, we repeat the analysis for two new synthetic patients that experience a poor response. In the case of the first patient, a small gain in predictive ability is seen from the inclusion of necrotic volume measurements (Fig. 11c); interestingly, this improvement is not seen for the second patient (Fig. 11f). Overall, these results highlight a key challenge with using the population-calibrated mathematical model to draw predictions relating to tumour composition and the underlying cause of a poor response, particularly given the wide-ranging spatial compositions seen in poor responders. The first synthetic patient exhibits a poor response due to the development of a tumour comprising almost entirely necrotic material, which does not degrade (Fig. 11b), while the tumour composition in the second synthetic patient is perhaps more realistic, with the necrotic fraction comprising approximately 60% of the GTV at the end of treatment (Fig. 11e). Since the model is not trained using clinical data relating to tumour composition, it cannot distinguish between tumour compositions that are clinically realistic and those that are not. This is not an issue for prediction of the GTV, as prediction uncertainty incorporates all possible tumour compositions through prior knowledge. Predictions of necrotic volume, meanwhile, represent predominantly prior knowledge in addition to restrictions imposed by the modelled relationship between the observed GTV of patients in the training set and their potential inner tumour composition.

Figure 9. Classification of the four patients excluded from the training set. We predict each patient's classified response using data up to and including the relevant time (height of each region indicates the predicted proportion). The predicted probability of the patient responding (i.e., receiving a classification that is not that of a poor responder) is shown in black dashed. Before the start of treatment, the predicted classifications correspond to those of the second-level prior in Table 2.
Conclusion
The development of predictive mathematical models of patient-specific tumour response is hindered by multiple challenges.Mathematical models must incorporate sufficient detail to capture a wide range of potential responses, while clinical data are highly limited, often comprising just one or two noisy measurements of tumour volume prior to treatment initiation.Advances in imaging technologies or the use of magnetic resonance imaging embedded in radiation delivery devices may, in future, provide a cost-effective means of collecting more detailed information, allowing the calibration of correspondingly more detailed mathematical models [54][55][56][57].In this work, however, we work with a fundamental set of measurements, and present the statistical methodology and an appropriately complex mathematical model to maximise data utility and draw clinically relevant predictions by leveraging a cohort of patients that exhibit a variety of treatment responses.
Importantly, the two compartment model is able to reproduce the full range of patient responses observed in our cohort of clinical data, representing an improvement over previously proposed one-compartment models which may not capture more complex behaviours, such as the plateaued response and pseudo-progressor behaviour.This is particularly important for prediction, since the choice of model and gamut of possible responses form a significant part of prior knowledge.While the mathematical literature presents an extensive catalogue of more complex models, we find that our choice of model with six unknown parameters, all with a direct biophysical interpretation, is simultaneously both sufficiently simple to ensure practical identifiability in some cases, and sufficiently complex to produce the variety of responses seen in the clinical data.Parameter identifiability is clearly not essential to produce predictions (single patient predictions drawn early in the course of treatment from the first-level prior, where the number of parameters exceeds the number of data points, are still sensical), however the relatively small parameter space and resultant tightly constrained second-level prior (Fig. 4) ensures adequate coverage in our resampling-based inference method: we expect our approach to become prohibitively expensive for models with large numbers of parameters.
The overarching goal of the presented framework is to leverage existing clinical data to produce a predictive model for GTV that accurately captures the uncertainty in predictions made for new patients.By benchmarking against both synthetic and a validation clinical data set, we show that our approach excels at this goal for patients with more typical responses: the fast and poor responders.Given the relatively small size of our training data-comprising measurements from 40 patients-it is no surprise that our approach does not perform as well for patients with atypical responses: pseudo-progressors, for instance, make up only 2.3% of the prior, meaning that the GTV progression of these patients is informed by (on average) a single patient in the training set.In this case, it takes six on-treatment measurements before the patient is identified as more likely to exhibit an eventual response than a poor response.
The most effective remedy would be to accumulate significantly more clinical data with better representation of outliers. Should enough data become available, stratification could be used to ensure that representation of patients in the training data either concords with that in the population, or incorporates non-quantitative prior knowledge (such as patient characteristics) that pre-informs similarities with patients in the training set. Our modelling framework is well-poised to incorporate more detailed clinical data, including, for instance, radiotherapy plan adaptation and information relating to variations in delivered dose throughout the course of treatment. Inclusion of such information is likely to lead to better response classification, particularly if the radiotherapy dose is modified during the course of treatment. Both the accuracy and precision of predictions could also be improved for all patients through a better biological understanding of radiotherapy response. The final set of results presented in this work highlights that GTV measurements alone are insufficient to identify the root cause of a poor response. Indeed, predictions related to the inner tumour composition must be treated with as much caution as predictions for atypical patients that are dissimilar to all patients in the training set. The absence of tumour composition data in the training set means that all predictions of tumour composition are only informed by data indirectly through the model, which has, in turn, been validated against solely GTV data. The prospect of training a model with joint GTV-composition measurements is at present hypothetical, although entirely possible through advanced imaging technologies [58][59][60]. At this stage, our framework could additionally be applied to answer important questions relating to the number of tumour composition measurements required to accurately predict patient outcome throughout their course of treatment.

We highlight that, in general, our statistical methodology is entirely model agnostic. Thus, informed by more detailed data, our approach could be used to develop a fully validated predictive model of not just GTV, but tumour composition, cell density, proliferation, hypoxia, and more. However, this proposition is not without limitation: our current choice to bootstrap parameter samples is likely to perform poorly for models with a large number of parameters. Such dimensionality-induced issues can be in part alleviated by sampling the full posterior directly, although this would introduce additional computational challenges. Further statistical developments are also needed to include parameters that are fixed between patients (for example, the noise parameters), or parameters that are assumed to be uncorrelated with others.
Our results add to a growing body of work [60][61][62][63] that highlights the utility that mathematical models could bring to the clinic; in future informed by highly detailed and representative patient data to provide objective, real-time, and personalised patient predictions that inform clinical decision-making.
Supplementary material for "Predicting radiotherapy patient outcomes with real-time clinical data using mathematical modelling"

Alexander P Browning*‡1, Thomas D Lewin*1,2, Ruth E Baker1, Philip K Maini1, Eduardo G. Moros3, Jimmy Caudell3, Helen M Byrne†1, and Heiko Enderling†3,4,5

1 Mathematical Institute, University of Oxford, Oxford, UK
2 Roche Pharma Research and Early Development, Roche Innovation Center, Basel, Switzerland
3 Department of Radiation Oncology, H. Lee Moffitt Cancer Center & Research Institute, USA
4 Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, USA
5 New Address: Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA

* These authors contributed equally. † These authors also contributed equally. ‡ Corresponding author: browning@maths.ox.ac.uk or HEnderling@mdanderson.org

arXiv:2201.02101v3 [q-bio.TO] 13 Dec 2023

Figure S8. Temporal predictions from four patients excluded from the training set using the uninformative prior. We reproduce the analysis from Fig. 7 in the main document using the uninformative (i.e., population-level) prior.
Figure 1 .
Figure 1.Gross tumour volume (GTV) measurements taken during radiotherapy.(a) Example CT scan of an oropharyngeal cancer patient throughout treatment, showing tumour contoured in blue (data courtesy of CD Fuller and MD Anderson).(b-e) Clinical data representing four qualitatively different radiotherapy results.Predictions from the mathematical model, along with a 95% credible interval for the modelled observation means, are shown in purple.In all cases, the radiotherapy schedule starts at the time of the second observation.The four patients shown are excluded from the training data analysed in later parts of our study.The units of GTV are given as the fold change (FC) relative to the initial volume.
Fig. 1b-e as they undergo their course of treatment.
(a) Pseudo-progressor. A second (noise-free) measurement greater than 102% of the first following radiotherapy onset. (b) Plateaued response. Not a pseudo-progressor, with a final measurement greater than 20% of the initial, and with a final rate-of-change less than 10% of the maximum rate-of-change observed. (c) Fast responder. Not in any other classification.
Figure 3 .
Figure 3. Parameter posteriors from analysis of training data.first-level prior distribution (blue) and full posterior (purple) following analysis of the training data.The first-level prior, p 1 (θ), comprises independent uniform distributions in the log of each unknown parameter.Parameters relate to the cell proliferation rate, λ, the carrying capacity, K, the radiotherapy response strength, γ, the decay rate of necrotic debris, ζ, the cell necrosis rate, η, and the initial proportion of the population that is necrotic, ϕ 0 .
Figure 4 .
Figure 4. Parameter clustering according to classified patient response.Kernel density of the full posterior distribution, following analysis of the training data set.Samples are classified into one of four patient responses according to criteria set out in Section 2.2.1, and kernel density estimates of bivariate marginal distributions conditioned on each classification shown.To aid comparison in the vicinity of the mode of each conditional posterior, only regions with densities greater than 50% of the maximum are shown.Parameters relate to the cell proliferation rate, λ, the carrying capacity, K, the radiotherapy response strength, γ, the decay rate of necrotic debris, ζ, the cell necrosis rate, η, and the initial proportion of the population that is necrotic, ϕ 0 .
Figure 7 .
Figure 7. Temporal predictions for the four patients excluded from the training set.We reproduce the analysis from Fig. 5 for the four patients in Fig. 1.These patients were not included in the training set, and so these results are representative of clinical predictions made throughout a new patient's course of treatment.Patients were classified previously as (a) a fast responder; (b) a poor responder; (c) exhibiting a plateaued response; and, (d) exhibiting pseudo-progression.Results related to the remaining seven patients excluded from the training set are given in the supplementary material (Fig. S6).
Figure 8 .
Figure 8. Predictions for the four patients excluded from the training set.We reproduce the analysis from Fig.6for the four patients in Fig.1b-e.These patients were not included in the training set, and so these results are representative of clinical predictions made throughout a new patient's course of treatment.
Figure 10 .
Figure 10.Predictions for a synthetic patient with a fast response subject to both GTV and necrosis measurements.(a-b) We reproduce the analysis from Fig.5ain the case that information relating to both V (t) and N (t) is available.(c) Mean, 50%, and 95% credible intervals for the final GTV in both data collection scenarios.The true value (calculated by resimulating data from each synthetic patient without measurement noise) is also shown (red dashed).Lower plot in (c) is a cropped inset of the upper.
Figure 11 .
Figure 11.Predictions for two synthetic patients with a poor response subject to both GTV and necrosis measurements.(a-b,d-e) We produce dynamic predictions of tumour progression for each patient in the case that information relating to both V (t) and N (t) is available.(c,f) Mean, 50%, and 95% credible intervals for the final GTV in both data collection scenarios.The true value (calculated by resimulating data from each synthetic patient without measurement noise) is also shown (red dashed).Lower plot in each set is a cropped inset of the upper.
Figure S3. Fits for patients in the training set. Individual fits for all patients in the training set, using the population-level prior. Shown are the data (black disc), 50% credible interval (light grey), and 95% credible interval (dark grey). Note that model predictions are only drawn at times corresponding to clinical measurements: results for intermediate time points show as a linear interpolation.
Figure S4. Subset of classified prior samples. We demonstrate the choices in the classification algorithm by simulating eight patients of each class from the prior distribution. All patients undergo the standard course of treatment. Shown is the tumour volume (black), a horizontal line indicating unity (dashed grey), and a vertical line showing the time of the first radiotherapy dose (dashed grey).
Figure S5. Results for eight additional synthetic patients. We reproduce the analysis from Fig. 5 in the main document for eight additional synthetic patients (two from each response classification).
Figure 2. Pseudo-hierarchical approach used in the analysis. Devoid of any data, knowledge about model parameters is encoded in the "first-level prior", denoted by p1(θ), and is used to individually form a set of posterior distributions for patients in the training set. The "second-level prior", denoted by p2(θ), represents knowledge gained from analysis of the training set and is used to form posterior distributions for new patients, denoted by pnew(θ|Dnew). In effect, the approach identifies patients in the training set with possibly similar outcomes to new patients.
In Section 2.1 we describe and present the clinical data set used for quantitative analysis, which demonstrates four disparate treatment response classifications. Secondly, in Section 2.2 we present a mechanistic mathematical model of tumour volume progression, along with a set of objective criteria that we use to classify model realisations into the four observed classifications. Subsequently, in Section 2.3 we present a statistical model that connects model predictions to clinical measurements. In Section 2.4
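To make the two-level prior idea above concrete, the following minimal sketch (not the authors' code; the toy posteriors and all names are hypothetical) pools per-patient posterior samples obtained under a first-level prior into a kernel-density second-level prior that can then serve as the starting point for a new patient:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical first-level prior over a single log-parameter, p1(theta).
def sample_first_level_prior(n):
    return rng.normal(loc=0.0, scale=1.0, size=n)

# Stand-in for per-patient MCMC: fake a posterior by shrinking the prior
# and shifting it towards each patient's "true" value (illustration only).
def fake_patient_posterior(theta_true, n=2000):
    return 0.3 * sample_first_level_prior(n) + theta_true

# Training set: posterior samples for each training patient under p1.
training_truths = [-1.2, 0.1, 0.8, 1.5]
training_posteriors = [fake_patient_posterior(t) for t in training_truths]

# Second-level prior p2(theta): pool all training-set posterior samples and
# smooth them, so inference for a new patient starts from population knowledge.
pooled = np.concatenate(training_posteriors)
p2 = gaussian_kde(pooled)

# For a new patient, p2 acts as the prior, e.g. to draw proposal samples.
new_patient_prior_samples = p2.resample(1000).ravel()
print("second-level prior mean:", new_patient_prior_samples.mean())
```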
Table 2. Parameters and first-level prior distributions. The description relates to the exponentiated log parameter.
Table S1. Parameters resampled from the full posterior corresponding to four synthetic patients. Column headings: Parameter; Fast responder; Poor responder; Plateaued responder; Pseudo-progression.
Author list and affiliations (front matter): *‡1, Thomas D Lewin*1,2, Ruth E Baker1, Philip K Maini1, Eduardo G. Moros3, Jimmy Caudell3, Helen M Byrne†1, and Heiko Enderling†3,4,5. 1 Mathematical Institute, University of Oxford, Oxford, UK; 2 Roche Pharma Research and Early Development, Roche Innovation Center, Basel, Switzerland; 3 Department of Radiation Oncology, H. Lee Moffitt Cancer Center & Research Institute, USA; 4 Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center & Research Institute, USA; 5 New Address: Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA.
Table S2. R̂ MCMC diagnostic statistics for each patient.
Biofilm Formation and Adherence Characteristics of Listeria ivanovii Strains Isolated from Ready-to-Eat Foods in Alice, South Africa
The present study was carried out to investigate the potential of Listeria ivanovii isolates to exist as biofilm structures. The ability of Listeria ivanovii isolates to adhere to a surface was determined using a microtiter plate adherence assay whereas the role of cell surface properties in biofilm formation was assessed using the coaggregation and autoaggregation assays. Seven reference bacterial strains were used for the coaggregation assay. The degree of coaggregation and autoaggregation was determined. The architecture of the biofilms was examined under SEM. A total of 44 (88%) strains adhered to the wells of the microtiter plate while 6 (12%) did not adhere. The coaggregation index ranged from 12 to 77% while the autoaggregation index varied from 11 to 55%. The partner strains of S. aureus, S. pyogenes, P. shigelloides, and S. sonnei displayed coaggregation indices of 75% each, while S. Typhimurium, A. hydrophila, and P. aeruginosa registered coaggregation indices of 67%, 58%, and 50%, respectively. The ability of L. ivanovii isolates to form single and multispecies biofilms at 25°C is of great concern to the food industry where these organisms may adhere to kitchen utensils and other environments leading to cross-contamination of food processed in these areas.
Introduction
In nature, bacterial cells are most frequently found in close association with surfaces and interfaces, in the form of multicellular aggregates embedded in an extracellular matrix generally referred to as biofilms [1]. Biofilms are usually heterogeneous; in that they contain more than one type of bacterial species, but they can be homogeneous in cases such as infections and medical implants [2]. Microbial biofilms pose a challenge in clinical and industrial setting especially in food processing environments where they act as a potential source of microbial contamination of foods that may lead to spoilage and transmission of foodborne pathogens [3,4]. They can also compromise the cleanliness of food contact surfaces and environmental surfaces by spreading detached individual microorganisms into the surrounding environment [5].
Environmental conditions in food production areas, including the presence of moisture, nutrients, and inocula of microorganisms from the raw materials, might favour the formation of biofilm. Furthermore, when food processing equipment is not easily cleaned due to its design and food particles are not completely removed, the particles aid in the formation of biofilms by providing a coat that not only provides the biofilm with nutrients but also a surface to which it can easily stick [6]. Once biofilms have formed on food processing surfaces, they are hard to eliminate, often resulting in persistent and endemic populations.
Biofilms offer their member cells several benefits, including channeling nutrients to the cells and protecting them against harsh environments. In particular, it has been noted that cells within biofilms are more resistant to antibiotics, disinfectants, and host immune system clearance than their planktonic counterparts [3,7]. Several mechanisms account for this increased antibiotic resistance, including the physical barrier formed by exopolymeric substances, a proportion of dormant bacteria that are inert toward antibiotics, and resistance genes that are uniquely expressed in biofilms [8]. Outbreaks of pathogens associated with biofilms have been related to the presence of species of Listeria, Yersinia, Campylobacter, Salmonella, Staphylococcus, and Escherichia coli O157:H7. These bacteria are of special significance in ready-to-eat and minimally processed food products, where microbiological control is not conducted in the terminal processing step [6].
L. monocytogenes and L. ivanovii are potential pathogens of listeriosis, a rare but serious disease with a high mortality rate of 30% in pregnant women or immunocompromised individuals [9][10][11]. Listeria strains have been reported to survive for months to years in food processing environments and, thus, colonize various food products leading to food contamination [12]. L. monocytogenes biofilms in food processing plants have been widely studied [13]. However, there is a dearth of information on the ability of L. ivanovii to form biofilm; this might be due to the fact that it rarely causes human illnesses due to its low prevalence in the environment. Nonetheless, recent studies in the environment of the present study have reported high prevalence of the organism in wastewater effluents and various ready-to-eat foods [14,15], suggesting that the organism might be endemic in the area. Therefore the present study was carried out to investigate the ability of L. ivanovii isolates to exist as biofilm structures, in an effort to establish the factors for this endemicity.
Biofilm Formation and Quantification.
The biofilm forming ability test was done in accordance with the method of Stepanovic et al. [16]. L. ivanovii isolates obtained from different food sources as reported in our previous study [15] were cultured on Nutrient agar (Oxoid, Basingstoke, England) and plates were incubated at 37 °C for 24 hours. A few single colonies were suspended in sterile saline to a turbidity comparable to a 0.5 McFarland standard. The suspension was vortexed for 1 minute, from which 20 µL was pipetted into a 96-well U-bottomed microtiter plate (Greiner Bio-one GmbH, Germany) containing 180 µL of Brain Heart Infusion (BHI) broth (Oxoid, Basingstoke, England). The plates were incubated aerobically for 24 hours at 25 °C ± 2 °C. After incubation, the contents of the wells were decanted into a waste container and each well was washed three times with 200 µL of sterile normal saline. Following every washing step, the wells were emptied by carefully aspirating the contents into a waste container, and the plates were left to dry overnight in an inverted position before they were fixed with hot air at 65 °C for 1 hour. Plates were stained with 150 µL of 1% crystal violet for 30 minutes; the excess stain was aspirated and the plates rinsed by placing them under running tap water until the washings were free of stain. The plates were left to dry at room temperature in an inverted position overnight before resolubilizing the dye bound to adherent cells with 150 µL of 33% (v/v) glacial acetic acid; the optical density (OD) of each well was measured at 595 nm using a microtiter plate reader (SynergyMx, BioTek, USA). Reference strains of P. aeruginosa ATCC 15442 and S. aureus NCTC 6571 were used as positive controls, while the negative control wells contained broth only. Tests were performed in triplicate on three occasions, the results averaged, and biofilms quantified as nonadherent, weakly adherent, moderately adherent, or strongly adherent.
Autoaggregation and Coaggregation Assays.
Twelve (three each of nonadherent, weakly adherent, moderately adherent, and strongly adherent) L. ivanovii isolates and seven reference strains (S. aureus NCTC 6571, S. pyogenes A ATCC 49399, S. Typhimurium ATCC 13311, P. aeruginosa ATCC 15442, P. shigelloides ATCC 51903, A. hydrophila ATCC 35654, and S. sonnei ATCC 29930) were used for these assays. The bacterial strains were grown separately in 20 mL of BHI broth at 37 °C for 48 hours. Cells were harvested by high-speed centrifugation (11,000 ×g for 10 min) and washed twice in 3 mM NaCl containing 0.5 mM CaCl2. Subsequently, the cells were resuspended in the same solution (3 mM NaCl containing 0.5 mM CaCl2) and centrifuged at 650 ×g for 2 min, and the supernatant carefully aspirated and discarded into a waste container. The OD of the cell suspension was measured and adjusted to 0.3 using an automated spectrophotometer (Optima Scientific V-1200) at a wavelength of 660 nm; this cell suspension was used for the coaggregation assay. Equal volumes (1 mL each) of the coaggregating partners were mixed and the OD (OD_Tot) of the mixture was immediately read at 660 nm before incubation at room temperature for 2 hours. Subsequently, the tubes were centrifuged at 2,000 rpm for 2 min and the OD of the supernatant (OD_s) measured at the same wavelength (660 nm) [17].
The degree of coaggregation of the paired isolates was determined from OD_Tot and OD_s using the equation given in [17]. For the autoaggregation assay, the individual bacterial suspension, adjusted to an OD of 0.3, was incubated at room temperature for 1 hour and the cell suspension centrifuged at 2,000 rpm for 2 minutes. The supernatant (2 mL) was transferred into a cuvette and the OD measured at 660 nm.
The degree of autoaggregation was calculated from OD_0, the initial OD of the organism, and OD_60, the OD of the supernatant after 60 min of incubation.
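The aggregation equations themselves are not reproduced in the text above (they follow [17]); the sketch below assumes the commonly used relative-decrease formulation for both indices and uses illustrative OD values only:

```python
def autoaggregation_index(od_initial, od_60):
    """Percentage autoaggregation: relative drop in OD660 after 60 min settling.
    Assumes the commonly used form ((OD_0 - OD_60) / OD_0) * 100."""
    return (od_initial - od_60) / od_initial * 100.0

def coaggregation_index(od_total, od_supernatant):
    """Percentage coaggregation for a mixed pair: relative drop between the OD
    read immediately after mixing (OD_Tot) and the supernatant OD after 2 h and
    centrifugation (OD_s). Assumed form: ((OD_Tot - OD_s) / OD_Tot) * 100."""
    return (od_total - od_supernatant) / od_total * 100.0

# Illustrative readings only (not taken from the study).
print(autoaggregation_index(0.30, 0.18))   # ~40% autoaggregation
print(coaggregation_index(0.31, 0.12))     # ~61% coaggregation
```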
Characterization of Biofilm Formation Using Scanning Electron Microscopy.
The biofilms were further examined using scanning electron microscopy (SEM) according to the method previously described by Greetje et al. [18] with some modifications. A representative of the biofilm-forming strain population was studied. Briefly, a microscope cover slip (22 × 22 mm) on a glass slide was placed in a petri dish half filled with BHI broth. Subsequently, a few colonies of
Data Analysis.
Tests were done in triplicate on three separate occasions and the results averaged. The cutoff OD (OD_C) for the microtiter plate test was defined as three standard deviations above the mean OD of the negative control. Isolates were classified as follows: OD ≤ OD_C = nonadherent; OD_C < OD ≤ (2 × OD_C) = weakly adherent; (2 × OD_C) < OD ≤ (4 × OD_C) = moderately adherent; and (4 × OD_C) < OD = strongly adherent [17].
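A short Python sketch of this cut-off-based classification (the OD readings are illustrative only; the threshold logic follows the criteria listed above):

```python
import numpy as np

def classify_adherence(sample_ods, negative_control_ods):
    """Classify microtiter-plate biofilm OD595 readings using the cut-off
    OD_C = mean + 3 * SD of the negative control wells."""
    od_c = np.mean(negative_control_ods) + 3 * np.std(negative_control_ods, ddof=1)
    labels = []
    for od in np.atleast_1d(sample_ods):
        if od <= od_c:
            labels.append("nonadherent")
        elif od <= 2 * od_c:
            labels.append("weakly adherent")
        elif od <= 4 * od_c:
            labels.append("moderately adherent")
        else:
            labels.append("strongly adherent")
    return od_c, labels

# Illustrative readings only (not data from the study).
od_c, labels = classify_adherence([0.35, 0.9, 1.6, 3.2], [0.08, 0.10, 0.09])
print(round(od_c, 3), labels)
```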
Microtiter Adherence Assay.
The biofilm formation ability of 50 L. ivanovii strains is summarized in Table 1.
Variations in biofilm formation were observed. A total of 44 (88%) strains adhered to the wells of the microtiter plate while 6 (12%) did not adhere. The majority of the isolates demonstrated weak (44%) or moderate (34%) adherence, while only 5 (10%) strains adhered strongly to the wells. The optical density ranges of the nonadherent and strongly adherent isolates were 0.332-0.503 and 2.32-3.846, respectively.
In order to evaluate the architecture of the biofilms, SEM was used. Figure 1 shows the scanning electron micrographs of autoaggregate and coaggregate biofilms of L. ivanovii and their coaggregation partners S. aureus NCTC 6571 and P. aeruginosa ATCC 15442. The different biofilm phenotypes were clearly distinguishable; the strongly and moderately adherent strains (SA and MA) were seen as densely packed colonies, while for the weakly adherent strains (WA) few cells were stuck together and the cell morphology was clear (short thick rods) (Figure 1). However, contrary to the microtiter results, it was observed that L. ivanovii isolates preferred to grow in single-species rather than multispecies biofilms.
Discussion
Control of foodborne pathogens to ensure food safety requires consideration of many aspects of their natural and industrial ecology. Some Listeria spp. strains have been reported to persist in environments for periods ranging from eight months to ten years [19]. These resident strains are alleged to form biofilms on food processing equipment; the formed biofilms survive most processes used to kill microorganisms in food production, hence increasing the chances of food contamination [20]. The present study was carried out to investigate the ability of L. ivanovii isolates to exist as single and mixed species biofilm structures. A number of methods have been developed for the cultivation and quantification of biofilms; nevertheless, the microtiter plate method remains among the most frequently used assays for investigation of biofilm formation and quantification of bacterial biofilms. The study therefore used the microtiter plate assay to assess the ability of L. ivanovii strains to form biofilm.
The potential of bacteria to form biofilms is affected by a number of factors including strain characteristics, physical and chemical properties of the solid phase, temperature, composition of the growth medium, and the presence of other microorganisms [21]. Previous works have observed low biofilm quantities with tryptic soy broth [22,23]; therefore this study used BHI broth, which has been shown to strongly influence biofilm development in many organisms such as Staphylococcus and Listeria species [23,24]. The present study observed that 88% of the strains were able to form biofilm at 25 °C, and four biofilm phenotypes were demonstrated. This is of great concern to the food industry, especially in the tropics, where room temperature usually falls between 22 and 28 °C; it implies that under favorable conditions these organisms may grow at room temperature and adhere to kitchen utensils or the environment if not properly cleaned, hence creating a source of cross-contamination. The attached cells in part also form a substrate for other microorganisms less prone to biofilm formation; this will lead to an increased survival rate of pathogens and further spreading during food processing. The findings concur with those of Di Bonaventura et al. [25], who reported biofilm formation of L. monocytogenes at low temperatures (4, 12, and 22 °C) on glass. However, hydrophobicity was found to be higher at 37 °C than at 4, 12, and 22 °C.
Autoaggregation and coaggregation are of great importance in biofilm formation; they integrate biological structures by mediating the juxtapositioning of species next to favorable partner species within taxonomically diverse biofilms. Autoaggregation is a process whereby a strain within the biofilm produces polymers to boost the integration of genetically identical strains; these interactions are enhanced by increased hydrophobicity [17]. In the present study, isolates displayed variations in their autoaggregating abilities, suggesting differences between strains and serotypes.
Rickard et al. [26] defined coaggregation as a process by which genetically different bacteria become attached to one another via specific molecules. It was observed that S. aureus, S. pyogenes A, P. shigelloides, and S. sonnei were the strong partners while P. aeruginosa, a strong biofilm producer, recorded the least potential to coaggregate. The findings are in agreement with those of Jacobs and Chenia [27].
However, the coaggregation results were contrary to the SEM images, where stronger biofilms were observed in single-species than in multispecies biofilms. Worthy of note is the fact that in the autoaggregation and coaggregation assays the isolates were grown separately, mixed, and incubated for only one to two hours before the OD was read, while with the SEM the partner isolate was added after 2 hours of initial growth and the mixture was incubated for 72 hours. Previous studies have demonstrated the ability of Listeria species to grow on surfaces with other microorganisms, both Gram-positive and Gram-negative species, in mixed species biofilms in food processing environments [24,28]. However, Van der Veen and Abee [24], using plate counts and fluorescence microscopy, showed that the cell count of L. monocytogenes was higher than that of the partner strain, Lactobacillus plantarum. These findings are in agreement with our findings, where few cells of the partner organism were apparent in strong biofilm structures under SEM. Studies on Flavobacterium spp. observed that isolates which were unable to autoaggregate or showed low aggregation indices displayed varying levels of coaggregation with diverse aquatic bacteria [17]. In this study, the nonadherent strain (Liv 188-2) displayed both autoaggregation and coaggregation characteristics, indicating that some strains, although they cannot attach to a solid surface as primary colonizers, may later interact with already established organisms as biofilm partners. Microorganisms can adhere to a surface as primary colonizers or as later biofilm partners by establishing interactions with other microorganisms [29]. Cell surface components (flagella, pili, adhesin proteins, capsules, and surface charge) are the major contributors to attachment and coaggregation in biofilms [27].
As crystal violet essentially stains the cells that have attached to the wells of the microtiter plate, SEM analysis was employed to evaluate the architecture of the biofilms. Unlike weak biofilms, where the morphology of single colonies was distinguishable, moderate and strong biofilms showed the presence of densely packed colonies of pleomorphic organisms (very short rods and coccobacilli); this could be partly because the cells were smaller due to competition and had to adjust for survival. This could explain the high level of resistance observed in biofilms, as nutrient and oxygen depletion within the biofilm cause some bacteria to enter a stationary state in which they are less susceptible to growth-dependent antimicrobial killing. Also, some bacteria might differentiate into a phenotypically resistant state and express biofilm-specific antimicrobial resistance genes that are not required for biofilm formation but contribute to the survival of organisms in the biofilm.
Conclusion
The study demonstrated the ability of L. ivanovii isolates to form single and multispecies biofilms at 25 °C, with strong biofilms from single species. This is of great concern to the food industry, where these organisms may adhere to kitchen utensils and the environment, leading to cross-contamination. Some strains could not adhere to a surface but could autoaggregate and coaggregate, implying that preventing primary adhesion would prevent biofilm formation in these strains. Future studies are required to determine the antimicrobial susceptibility of the biofilms as well as the expression of virulence genes and adherence traits in these biofilms, to throw more light on their pathogenic potential in our environment.
Conflict of Interests
The authors declare no conflict of interests.
Trends in Recently Emerged Leishmania donovani Induced Cutaneous Leishmaniasis, Sri Lanka, for the First 13 Years
Sri Lanka reports a large epidemic of cutaneous leishmaniasis (CL) caused by an atypical L. donovani, while the regional leishmaniasis elimination drive aims at achieving its targets in 2020. Visceralization, mucotrophism, and CL-associated poor treatment response were recently reported. Long-term clinico-epidemiological trends (2001-2013) in this focus were examined for the first time. Both constant and changing features were observed. Sociodemographic patient characteristics that differ significantly from those of the country profile, microchanges within the CL profile, spatial expansion, constant biannual seasonal variation, and nondependency of the clinical profile on age or gender were evident. Classical CL remains the main clinical entity, without clinical evidence for subsequent visceralization, indicating the presence of parasite strain variation. These observations provide a scientific platform for disease control, preferably timed based on seasonal variation, and highlight the importance of periodic and continued surveillance of clinico-epidemiological and other characteristics.
Introduction
Leishmaniasis is considered a neglected tropical disease [1,2]. Of the many virulent species in the genus, L. donovani is probably the worst and is the causative agent of visceral leishmaniasis (VL). The estimated annual VL incidence in developing countries amounts to 200,000-400,000 cases [3]. Both cutaneous leishmaniasis (CL) and VL have shown increasing global trends in the recent past [3,4]. Understanding the epidemiology of leishmaniasis infections and new clinical patterns is useful for disease management and epidemic control.
Leishmaniasis remained rare and mainly imported in nature in Sri Lanka until 2001, when autochthonous CL was detected in a patient referred to the authors' institution during an attempt to identify a parasitic aetiology. A series of professional and public awareness programs (2001-2003) was subsequently carried out by the investigators, which probably resulted in the detection of many more cases from the same region in Northern Sri Lanka. Investigations pointed towards an already established local transmission cycle [5].
Initially reported cases were consistent with a clinical picture of CL, with no clinical or immunological evidence for visceralization (negative formol gel screening or rK39 rapid assay and absence of systemic features) [5]. Patients presented with papules, nodules, ulcerating nodules, or complete ulcers [5]. Atypical manifestations of CL were extremely rare [6]. The causative species was identified as an atypical genetic variant of L. donovani [7,8].
During the subsequent years, spatial widening, case clustering, and peri-domestication were evident [9-12]. VL and mucosal leishmaniasis (MCL) were also reported subsequently [13-15]. Studies suggested a genetic basis for the different local phenotypes [16]. Meanwhile, 34% of active CL infections showed a humoral response, though follow-up studies have failed to detect any visceralization of initial CL [17,18]. The felt local need for disease containment prior to the establishment of more virulent forms was highlighted [19,20].
The regional drive on leishmaniasis control aims at eliminating VL in the Indian subcontinent (ISC) by 2020 [21]. However, predictions suggest that L. donovani transmission will continue in this region even after 2020, necessitating careful surveillance and control [22]. Based on the general belief in the absence of animal reservoir hosts for L. donovani, reduction of human reservoirs is considered useful in this regard [23]. Detailed understanding of local disease trends and efforts to enhance case detection are therefore urgently necessary for disease control. The local leishmaniasis clinical profile has been studied using smaller patient populations spanning a few years during the current outbreak [5, 10-12, 24, 25]. Clinical and spatial trends of leishmaniasis have not been investigated to any level of depth in this focus. Meanwhile, an increasing number of case referrals to the central laboratory in the authors' institution was observed until decentralization of diagnostic facilities by the same in 2014, and increasing case numbers continued to be reported at hospital settings even after that. Therefore, a relook at the current profile and a study of trends in the leishmaniasis situation are warranted in this country.
Materials and Methods
Details from 1,953 patients with suspected leishmaniasis referred to the Department of Parasitology, Faculty of Medicine, University of Colombo (UCFM), over a period of 13 years (2001-2013) were included in the analysis following informed written consent. Suspected cases were clinically assessed by the principal author or medically qualified assistants. Sociodemographic and systemic clinical data were gathered using interviewer-administered case record forms. Lesion data were collected for each lesion from all suspected CL cases. In the case of multiple lesions, a randomly selected first lesion was evaluated.
Bone marrow (BM) and lesion material (lesion aspirates, LA; slit skin scrapings, SSSs; punch biopsies, PB) were obtained from suspected cases of VL and CL, respectively. Light microscopy (LM) was carried out on all BM, LA, and SSS samples. In vitro cultivation (IVC) of Leishmania parasites was carried out using BMs and LAs of LM-negative patients [26]. PCR was performed on the BM, LA, and PBs of LM- and IVC-negative patients [27]. Two millilitres of intravenous blood was drawn from all patients and sera were separated. A proportion of laboratory confirmed cases of CL were screened by the formol gel test (FGT) (n=700) and the rK39 assay (n=200), respectively. If a patient was reported positive by at least one parasitological investigation (LM, IVC, or PCR), they were included in the laboratory confirmed group. Patients who turned negative with all three tests were considered true negative cases. Inconclusive cases were excluded from analysis. Laboratory confirmed CL cases were considered for sociodemographic and clinical data analysis. Four subgroups were selected from four different time points during the reporting period, that is, A: 2001-2004, B: 2005-2007, C: 2008-2010, and D: 2011-2013. Case referral rates as a projection of the case incidence rate (CRR, number of confirmed cases per 100,000 population) were calculated for each time period in each administrative district within the country.
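As a simple illustration of the CRR calculation described above (the district figures below are hypothetical, not taken from the study):

```python
def case_referral_rate(confirmed_cases, district_population):
    """Case referral rate per 100,000 population for one district and one time period."""
    return confirmed_cases / district_population * 100_000

# Illustrative numbers only (not data from the study).
print(round(case_referral_rate(confirmed_cases=120, district_population=1_100_000), 2))
```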
Data Analysis. Age and sex distributions and clinical presentations were compared between different study periods using the chi-squared (χ²) test. Maps of spatial distribution patterns were generated using ArcGIS 10.0.
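For example, a period-by-sex comparison of case counts of the kind described above could be run with a chi-squared test as follows (the contingency table is illustrative only):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table: sex (rows) versus study period A-D (columns).
observed = np.array([[60, 85, 90, 70],    # male cases per period
                     [25, 30, 45, 40]])   # female cases per period

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```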
Working Definitions
Typical Onset. A single, painless skin papule of less than 1 cm in size.
Size. The maximum diameter of the observable lesion, measured to the closest centimetre excluding induration.
CL Clinical Presentations and FGT/rK39 Examinations.
Presence of nonspecific systemic features on examination (fever, loss of appetite, reported loss of weight, splenomegaly, hepatomegaly, anemia, jaundice, and skin colour changes as noticed by the patient) was minimal (the highest being 0.3% for loss of appetite and loss of weight) or absent (skin discolouration, 0.0%) among patients with CL. Combinations of any of these features were also minimal (loss of weight and loss of appetite in 0.1%) in the CL patient group. FGT and rK39 screening assays were negative in all examined cases of clinically suspected CL. Case numbers across districts showed varying trends. However, increasing numbers of cases were reported from other regions (Figures 1(b)-1(d)). Cases reported in the previously leishmaniasis prevalent areas in the North and South remained almost unchanged over time (Figure 1). There was a reduction in the total number of cases referred to the institutional laboratory over the years.
Seasonality of Infections. Although reported annual case numbers varied over time, they showed a general seasonal trend; that is, there was a low transmission season from March to June and a major high referral season from July to November (Figure 2). This seasonal variation pattern was more pronounced during the initial stages (2001-2004), as shown in Figure 2.
Age and Sex Cross-Examination. More young adult patients (21-40 years) were found among males, while there was a wider age distribution among females, with involvement of higher proportions of younger and elderly individuals (Figure 5). This trend remained nearly constant at different time periods during the reporting period (Figure 5). Females were more likely to present with head and neck lesions than males, while males had more trunk lesions than the female group. This difference was seen at all stages of the outbreak. Nature of onset, number, type, or size of lesions did not demonstrate a gender-based variation (data not shown). A higher proportion of elderly individuals (>40 years) presented early (51.49% within 3 months), but 53.3% of their lesions were ulcerated. This trend remained unchanged throughout the reporting time. However, the majority of lesions in all age groups started as primary lesions, remained single and small (<2 cm), and were reported early (within 6 months of duration). There was no significant variation in the basic clinical profile between different age categories (data not shown).
Discussion
Leishmaniasis was made notifiable in the country only in 2009. National data for the spatial distribution of reported cases are available since then. However, the first leishmaniasis patient diagnostic laboratory was set up at the University of Colombo's Faculty of Medicine (UCFM) in 2001 and functioned as the main referral centre for patient diagnosis in the country. Microscopic training provided by this unit for health sector and other institutional technical officers may have led to the decentralization of diagnostic facilities during the fourth stage (approximately after 2013). This likely resulted in a drastic reduction of the case numbers referred to UCFM. Until such time, the case referral rates observed at the UCFM laboratory could be considered a near accurate projection of the true case incidence of the country, differing slightly based on the proportion of cases that were not self-referred, were misdiagnosed, or were treated on clinical grounds due to difficult patient or sample transport to the Colombo laboratory.
Clinical leishmaniasis cases in Sri Lanka are still on the rise, with 1,508 new clinical cases reported by the Ministry of Health in 2017 [28]. Although the number is relatively small compared to that of dengue fever and other tropical diseases, and considering the fact that Sri Lanka is a small country, the increasing trend in L. donovani-caused CL in Sri Lanka is definitively worrisome with respect to the mission of leishmaniasis elimination in the Indian subcontinent. Understanding the epidemiology of leishmaniasis and past control practices is useful for future targeted disease control planning. Findings from this study may well serve that purpose. The main clinical picture of leishmaniasis in Sri Lanka over the study period continued to be CL (Figure 6). Other clinical forms of leishmaniasis still remain a minority, though this may be an underestimation of the true picture [14]. Minimal or absent systemic features in a clear majority of CL cases, negative VL screening assay results in CL, the absence of an initial history of CL in reported VL cases [14], and the already reported strain difference between CL- and VL-causing local Leishmania strains [18] indicate that most local skin infections progress without visceralization. However, early evidence of a serological response may, on the other hand, be indicative of a potential for visceralization, though it can also be due to a transient CL-associated seroconversion [17]. The spatial dimensions of the CL outbreak expanded during the study period. In spite of this, a few districts in Northern and Southern Sri Lanka remained the highest case prevalent areas, indicating the possible existence of independent disease transmission foci in these areas. Case clustering in the South was identified previously [12]. Spatial expansion of existing transmission foci, increased free movement of people between disease foci and other areas resulting in new foci, fast progressing infrastructure developments, and vector abundance confounded by favourable climatic conditions may have contributed to the spread of the disease in the island. The improved awareness among clinicians, public health personnel, and the general public may also have contributed to improved case detection.
Changes in spatial distribution showed the locations of the transmission foci and how these foci shifted from time to time. In this study, we found that the transmission hotspot in Southern Sri Lanka along the coastline remained unchanged over time. However, transmission hotspots in the Northern part expanded from 2001 to 2007 and then shifted southwards from 2008 to 2010. This study only included clinical data collected at the University of Colombo. The importance of case notification [29] and utilization of such data for local surveillance [30] has been indicated. It will be important to include the updated national data so that the movement and current locations of hotspots can be updated. Nonetheless, the spatial trends of the epidemic will be important in designing future epidemiological surveillance plans and for resource allocation processes in disease control.
The seasonal pattern of transmission is useful in timing disease surveillance and control activities. Seasonal variation of case distribution with two annual peaks that coincide with the monsoon rain patterns in Sri Lanka has been evident since the onset of the epidemic and seems to remain unchanged over time. The presence of an established pattern maintained over a decade indicates the long-term existence of the local infection, which has been backed up through phylogenetic analyses [8,18,31]. This is likely a reflection of the seasonal activity of vectors. The May to September monsoon brings rain to the Southwest of Sri Lanka, while the dry season in this region is from December to March. In the Northern part of the country, the North-Eastern monsoon from October to January brings both wind and rain, and drier weather occurs between May and September. From 2001 to 2004, the peak transmission season was from August to March, during which most of the cases were from the Northern part; that is, they peaked during the monsoon season. CL affected a wide age range (1-81 years, data not shown) and both genders. However, the disease-associated male and young adult age preponderance observed since the onset of the epidemic was clearly different from the age and sex composition of the country's normal population. Outdoor-associated behaviours and occupational exposure are the most likely reasons for the male and young adult age preponderance. Though these patterns remained unchanged over the epidemic, a changing age-sex pattern with increased involvement of older age groups and females during the later stages was observed. This may be indicative of an association with behavioural patterns and peri-domestication of the transmission cycle that increases the risk of the disease being transmitted to these groups. Peri-domestication was also indicated in previous studies that identified household risk factors [10]. The expansion of transmission from mainly male young adults to female and older age groups may pose a significant threat to disease control activities. The majority of cases presented early and had typical CL lesions (Figure 5). The majority of lesions in the total study population remained single and small and were seen on exposed body sites. The proportion of chronic (>3 months old) lesions increased significantly over time, together with increased proportions of large (>2 cm) lesions observed later on. However, a considerable proportion of lesions were presented or detected over a year from the onset/notice of the skin lesion, indicating the need for improved case suspicion or early diagnosis. Common involvement of exposed areas is consistent with the common clothing patterns that expose the affected sites to the sand fly vector. CL lesions in a clear majority may occur at the site of the vector bite as single lesions, indicating minimal cutaneous spread. Itching was not a widely recognized feature associated with leishmanial skin lesions. However, since a good minority of skin lesions were associated with itchiness, suspected CL lesions should not be dismissed as non-CL, and thereby excluded from laboratory confirmation, merely due to the presence of itchiness. The earlier described broad CL profile still remains applicable, though there were minor fluctuations between different time periods of the epidemic. Underlying reasons for this observation may be non-infection associated factors such as changing case referral patterns. Study of data collected through active case detection is required for better understanding.
Current or previously observed patterns of clinical features were not shown to be gender dependent, except for the differences seen in affected body sites, probably due to clothing patterns. The clinical profile was not age dependent either, except for lesion progression, which seems to be faster in elderly individuals. The increase in the number of single and early lesions may be indicative of better awareness and case detection rates, though it may also indicate rapid progression of skin lesions, which promotes early treatment-seeking behaviours.
CL still remains the main clinical entity, with few cases from other clinical entities. These observations further support the existence of multiple genetic variants that cause skin and visceral/mucosal infections, a phenomenon that has already been demonstrated [18]. Though major sociodemographic changes are not observed, minor and continued changes are observed. Likely reasons are peri-domestication, increased host immunity, spatial expansion, improved awareness, or a combination of many factors. The main disease foci are still reported in resource limited areas in the local setting [15,25]. The observed reduction in self-referral time is encouraging, though there were some chronic lesions. Accurate clinical suspicion, especially in a laboratory resource limited setting, is required to enhance case detection in new disease foci. The other clinical forms caused by L. donovani already reported in the local population may further complicate disease control activities unless these activities are launched in an evidence based and timely manner. Dependency on passive case detection is a limitation of this study.
Conclusions
It is important to carry out periodic surveillance to understand the changing trends within the existing picture; this will provide more accurate projections of the true case burden and epidemiology in the island.
Figure 4: Trends in the clinical profile of the study population over time (2001-2013). The individual panels show the variation of (a) number of lesions, (b) size of lesions, (c) location of lesions, (d) type of lesion, (e) itchiness associated with lesions, and (f) duration of lesions.
Table 2: Trends in the clinical profile over the study period.
* Missing cases or variables were excluded.
Table 3: Trends in mean durations of lesions according to selected clinical features.
Integrated geophysical and geochemical assessment of submarine groundwater discharge in coastal terrace of Tiruchendur, Southern India
A submarine groundwater discharge (SGD) study is essential for groundwater in the coastal terrace at Tiruchendur. The famous Murugan Temple is located in the area, and around 25,000 people who visit this temple use the SGD well water at NaaliKinaru (a small open well) as holy water and drink it. The rock and soil types are sandy clay, silt, beach sand, calcarenite, kankar, gneissic rock, and charnockite in the basement. The megascopic identification method was used to identify porous and permeable rocks such as calcarenite, sandstone, and kankar that support an increased SGD flux. A grain size study was used to identify the paleo-coastal estuarine environment with sediment deposits in the terrace. The square array electrical resistivity method was used to study the subsurface geology and aquifer depth. The 2D ERT technique was used to identify the subsurface shallow perched aquifer of freshwater. The magnetotelluric survey method was used to scan the entire subsurface for geological and tectonic uplift, coastal ridges, and folded subsurface structural features of continental and oceanic tectonism. Darcy's law was used to calculate the SGD flux rate in the study area.
Introduction
The submarine groundwater discharge (SGD) study plays a major role in coastal aquifer and water resources management (Taniguchi and Makoto 2002; Robb 1990; Peng et al. 2008; Land and Paull 2000). Studying SGD with flux rate estimation is essential for monitoring the freshwater environment in coastal zones (Porubsky 2014; Burnett et al. 2006; Zhang et al. 2020; Duque et al. 2020). A huge volume of freshwater is discharged into the oceans regularly due to heavy rain; therefore, an aquifer characteristic study is essential for SGD flux estimation (Prakash et al. 2018; Manivannan and Elango 2019; Babu et al. 2009, 2021). Freshwater discharge has been estimated using the electrical resistivity methods of 2D ERT and the magnetotelluric method (Ma and Zhang 2020; George et al. 2018; Jeyapaul et al. 2020; Ravindran et al. 2021), with the discharged freshwater moving mainly through the sandstone formation. Geochemical studies were carried out by some researchers (Selvam et al. 2021a, b).
The study area is mostly covered with recent deposits of coastal alluvium and shell in marine environments, with calcareous sandstone containing shell material, oolitic lime, and clay deposits. The terrace was formed due to tectonic upliftment through coastal and continental movement. The Valli cave was formed by waves and is made up of lime calcareous kankar and calcarenite rocks. The terrace is completely made of calcareous materials. The "NaaliKinaru" small open well is considered to be sacred and of divine origin; this small open well receives water from SGD. A sub-stream of the Tamirabarani River once flowed toward the northern end of the Tiruchendur terrace.
The study area has a semi-tropical climate. May to August are the hottest months and December to February the coolest months of the study area. The highest average temperature, of around 35 °C, is recorded in the month of June every year. The mean annual temperature is 28.3 °C. The mean annual precipitation is 675 mm. Major rainfall is received during the northeast monsoon period between October and December. The maximum rainfall is usually received during November, which is around 1131 mm.
The coastal zone tapers downwards towards the sea. The coastal sediments assume different forms along the coastal belt due to neotectonic activities. Raised beaches and a cliffed shoreline are observed in Tiruchendur. These beach ridges or terraces are covered by aeolian sands and are undulated with a calcareous cementing medium. There are many silted-up lagoons behind the beach ridges; during the rainy season the lagoons are filled with freshwater but become hypersaline during summer.
The important coastal features are bays, lagoons, estuaries, cliffs, dunes, backshore width, beach width, and wave-cut features. The cliff section of calcarenite rock has been cut by high energy wave action into notches and caves at different levels in the terrace. In Tiruchendur, wave pressure generates high energy for erosional activity. The study focuses on the coastal terrace of Tiruchendur, which is tectonically connected to the tributaries of the Tamirabarani channel. The Tiruchendur terrace is located in the middle part of the Gulf of Mannar coast.
Material and methods
The present study focuses on the discharge of freshwater into the subsurface through the hydraulic connection with the sea bottom in the coastal terrace of the Tiruchendur coast. 2D electrical resistivity imaging (2D ERI), electromagnetic surveying, geotechnical logging, aquifer system and topographical studies, and the tectonic setting of faulted movement interpreted from satellite imagery were used for the aquifer characteristic study.
The 2D electrical resistivity imaging technique is used to determine the subsurface geological structure and hydrogeological aquifer thickness. A multicore cable with a resistivity meter was used for data collection in the imaging study. The Wenner configuration was used for this survey. The collected data were processed in the Res2DINV software for pseudosection preparation and data interpretation.
In the present study, SGD flux estimation was supported by 2D electrical resistivity, magnetotelluric, borehole drilling, granulometric grain size, and microscopic and megascopic studies of rocks. Thin sections were used to identify quartz grains and the associated magnetite, ilmenite, and zircon embedded in and consolidated by a cementing matrix of calcareous material.
Megascopic Identification
The petrological study of the rock revealed, megascopically, fine and oolitic textures of calcareous cemented sandstone, ferrous-rich calcareous limestone, and oolitic texture with coarse and fine grained porous siliceous material cemented with lime. Most of the parent rock shows intergrowths and is enriched in pore-filling magnetite and ilmenite. The paleo-river sediments and sandstone deposits are completely altered into metamorphic sheared grains derived from granitic rock with accessory minerals. Garnet, ilmenite, and embedded quartz and feldspar grains are seen throughout the rock. The arkose rock contains feldspar and quartz mineral intergrowths in a step-like terrace (Barnard et al. 2013). The texture of the grains from the top to the bottom of the terrace ranges from low to high sorting. Temperature variation and the upliftment of sheared-grain calcarenite are recorded in the stone (Fig. 2).
The Valli Cave is made up of a porous and permeable formation of calcarenite, sandstone, and kankar, as identified in the microscopic study.
Microscopic identification
In the microscopic identification of minerals and quartz with the cementing matrix material, the shape and size of the mineral grains and the porous medium were observed and measured.
Five VES profiles were taken near the beach of Tiruchendur. The azimuthal square array method was used to identify the deeper aquifer system and compare it with borewell litholog data. An aquameter was used for data collection with four copper electrodes and a wire spool, and an equal short interval of 1.75 m was followed for shallow aquifer identification. The azimuthal square array is a method that achieves a greater depth of penetration than the Wenner configuration, with the spacing (A) approximately equal to the depth of penetration. The apparent resistivity is ρa = K × (V/I), where ρa is the apparent resistivity, K is the geometric factor for the array, V is the potential difference in volts, and I is the current magnitude in amperes. For the square array with side length A, the geometric factor is K = 2πA / (2 − √2) (Habberjam 1972; Habberjam and Watkins 1967; Antony Ravindran 2012). [Fig. 2 panels (b)-(o): photomicrographs showing quartz, feldspar, biotite, garnet, apatite, rutile, and zircon grains, shell material, milky and wave-sorted quartz from quartzite and estuarine environments, and the quartz cementing medium.]
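A small Python sketch of the square-array apparent resistivity calculation, assuming the standard geometric factor given above (the measurement values are illustrative only):

```python
import math

def square_array_apparent_resistivity(side_a_m, voltage_v, current_a):
    """Apparent resistivity for the square array: rho_a = K * V / I,
    with geometric factor K = 2*pi*A / (2 - sqrt(2)) for side length A."""
    k = 2 * math.pi * side_a_m / (2 - math.sqrt(2))
    return k * voltage_v / current_a

# Illustrative reading only: 1.75 m side length, 95 mV potential, 120 mA current.
print(round(square_array_apparent_resistivity(1.75, 0.095, 0.120), 1), "ohm.m")
```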
[Fig. 3 panels (a)-(l): photomicrographs showing feldspar-quartz matrix, feldspar-biotite, quartz cemented in feldspar, rutile, pyroxene, garnet, fossil material in sandstone, and angular quartz grains embedded in calcareous material.]
The shallow resistivity is 10 Ω.m, indicating freshwater SGD flow at 10 m depth. At depths of 40 and 70 m, the freshwater resistivity is about 4 Ω.m. The seawater or saline interface is identified at resistivities of 0.1-3 Ω.m.
2D electrical resistivity imaging (ERI) study
2D electrical resistivity imaging is a useful tool for studying the SGD path. The subsurface geology, freshwater, and saline water studies were carried out with 2D ERT equipment comprising a multicore cable, 40 copper electrodes, and a 12 V battery. Two ERT profiles, each covering a distance of 300 m, were carried out parallel to the coast in the dunal area and near the shoreline.
Profile 1
Profile 1 covered a length of 300 m with the Wenner-Schlumberger configuration. The subsurface geology showed a freshwater perched aquifer in the dunal area at a depth of 6-8 m, with resistivity ranging from 4.00 to 5.00 Ω.m, seen in the middle part of the area. The uplifted weathered gneissic rock has a resistivity of 5.00-700 Ω.m. A low resistivity zone is obtained in the 2D ERI pseudosection due to seawater intrusion by dynamic wave action at a depth of 14 m (Fig. 10).
Profile 2
Profile 2 covers a 300 m distance in the NE-SW direction along the shoreline area. The uplifted coastal granitic rock is intruded from the deeper level to the top. The beach exposure is clearly seen in the 2D ERT pseudosection at an apparent resistivity of 2.38 Ω.m. The low resistivity of seawater intrusion at shallow level is 0.863 Ω.m. At a depth of 31.3 m, an apparent resistivity of 4 Ω.m indicates SGD. The freshwater moves from the highly elevated continental area to the oceanic plate of the coastal shoreline area (Fig. 11).
Magnetotelluric method
The magnetotelluric method is used to identify the subsurface geology, formations, and freshwater discharge (Li and Jie 2017; Vozoff and Keeva 1991; Abdelzaher et al. 2012; Albouy et al. 2001). The magnetotelluric survey was carried out with ADMT-300S equipment, with the M and N copper probes continuously shifted by equal distances, and the depth of coverage was also changed to cover 300 m. The resistivity variation clearly demarcates the different soil and rock types and the coastal boundaries. The fresh/saline water interface was also distinguished with the help of the magnetotelluric images.
The magnetotelluric method is useful for deep subsurface and aquifer studies. Aquifer thickness, freshwater movement, saline intrusion, folded and faulted tectonic movement of the landscape, and geomorphological changes of the river and of coastal sediment into meta-sedimentary rock were studied.
The petrological study of the soil type and rock type in the area showed that it was mostly covered by the shell, sandstone, calcareous sandstone, sandy loam, clay loam, shell with lime material, calcareous limestone and paleomarine clay deposits in the study area.
Tiruchendur profile 1
Profile 1 runs for a distance of 130 m parallel to the coast of the Tiruchendur terrace. The other side of the terrace has highly intrusive rock rising from the deeper level, with a resistivity range of 0.27-0.33 Ω.m. The Murugan temple is placed on the hard terrace. The lowest values of 0.03-0.06 Ω.m indicate the SGD flow. The SGD discharge is seen at depths of 30 m, 60 m, 80 m, and 120 m in the magnetotelluric method (Fig. 13).
Tiruchendur profile 2
Tiruchendur profile 2 covers a stretch from east to west over a distance of 170 m. The resistivity range of 0.03-0.07 Ω.m indicates high seawater intrusion in the highly weathered granitic and gneissic rock. The resistivity range of 0.14 to 0.317 Ω.m indicates the highly lithified meta-sedimentary rock associated with charnockite. Freshwater SGD occurs at a depth of 60 m at a resistivity of 0.03-0.04 Ω.m (Fig. 14).
Tiruchendur profile 3
Profile 3 was measured on the Tiruchendur coast in the E-W direction for a distance of 173 m. The high rock has been completely sheared by oceanic and continental plate movements, and the coastal terrace layer was scanned by the magnetotelluric method.
The hard and compact gneissic and charnockite rock is massively formed at 30 m, 60 m, and 90 m depth, and freshwater is discharged at the same depths (Fig. 15).
Darcy's flow rate formula was used to estimate the SGD flow rate (Ravindran and Ramanujam 2014). Darcy's law is Q = kiA, where Q is the flow rate in m³/s, k is the hydraulic conductivity in m/s, i is the hydraulic gradient (dimensionless), and A is the flow cross-sectional area in m².
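A minimal Python sketch of the Darcy flux calculation, using illustrative values for k, i, and A (not the values of the study area):

```python
def darcy_flow_rate(hydraulic_conductivity_m_s, hydraulic_gradient, area_m2):
    """Volumetric SGD flow rate from Darcy's law, Q = k * i * A."""
    return hydraulic_conductivity_m_s * hydraulic_gradient * area_m2

# Illustrative values only: k = 1e-4 m/s (clean sand), i = 0.005, A = 50 m^2.
q = darcy_flow_rate(1e-4, 0.005, 50.0)
print(f"Q = {q:.2e} m^3/s  (= {q * 86400:.2f} m^3/day)")
```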
BW data collection for rock/soil identification
In the study area, at two places near the Tiruchendur beach, sampling was carried out to study the depth-wise change of subsurface soil and lithology. The geotechnical study was done with the help of hand augering and hammering techniques, the collected samples were analyzed using sieve techniques, and the soil properties for construction in the near-coastal area were studied. The Profile 1 geotechnical study of soil variation shows fine sand, silty clay, sandy loam, and coarse sand, with the water table depth used to correlate the resistivity data. Profile 1 is a well graded, silty clay-sand mixture up to 6 ft and is an expansive, clay-rich area. Profile 2 was near the coastal and estuarine area; the logged estuarine site had fine graded sand and coarse sand, with a water column at a depth of 9 ft (around 3.3 m). Clay with shell material occurs at a depth of 15 ft in this area due to the river, wave, and tidal depositional environment.
This systematic sieve method was adopted for twenty-four soil samples collected from in and around the Tiruchendur coast. The grain size study used the mechanical sieving method, in which a column of wire-mesh sieves of different sizes is employed and grains settle size-wise from the upper (coarser) to the lower (finer) sieve openings. A sieve shaker was the equipment used for the experimental test. The collected samples were analyzed mechanically by the sieving method using the sieve shaker for the size and shape of soil grains (Fig. 20).
The kurtosis values plotted in the lower part of the graph indicate mesokurtic, platykurtic and leptokurtic distributions. The mean grain size obtained from the graph ranges from 1.8 to 2.2, the sorting from 0.0781 to 0.164, and the kurtosis from 0.517 to 1.373. Fine sand occurs at sampling points 1, 2, 6 and 7, and medium sand in samples 3, 4, 9 and 10.
The terrace sediments were analysed by mechanical sieve analysis of the grains, and the results were plotted percentage-wise in an Excel worksheet to describe the grain sizes in the terrace area.
The mean, median, mode, skewness and kurtosis were obtained from the sieve analysis. The sorting of the grains is used to identify the depositional environment (shallow marine, beach and related settings); stagnant lagoon water altered to hypersaline water and evolved into shallow-marine deposition.
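A minimal sketch of how such graphic statistics can be computed from sieve data is given below, using the Folk and Ward percentile formulas; the sieve sizes and retained weights are hypothetical and only stand in for the twenty-four measured samples.

```python
# Sketch of Folk-and-Ward graphic grain-size statistics from sieve data.
# The sieve sizes and retained weights below are hypothetical, for illustration only.
import numpy as np

phi = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])         # sieve sizes (phi units)
weight = np.array([1.0, 3.0, 8.0, 18.0, 30.0, 22.0, 11.0, 5.0, 2.0])   # grams retained on each sieve

cum_pct = 100.0 * np.cumsum(weight) / weight.sum()  # cumulative weight percent vs. phi

def phi_at(p):
    """Interpolate the phi value at cumulative percentile p."""
    return np.interp(p, cum_pct, phi)

p5, p16, p25, p50, p75, p84, p95 = (phi_at(p) for p in (5, 16, 25, 50, 75, 84, 95))

mean     = (p16 + p50 + p84) / 3.0
sorting  = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
skewness = (p16 + p84 - 2*p50) / (2*(p84 - p16)) + (p5 + p95 - 2*p50) / (2*(p95 - p5))
kurtosis = (p95 - p5) / (2.44 * (p75 - p25))

print(f"mean={mean:.2f} phi, sorting={sorting:.2f}, skewness={skewness:.2f}, kurtosis={kurtosis:.2f}")
```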
Water geochemistry
A geochemical study of water samples from NaaliKinaru and the adjoining area of the Tiruchendur terrace was used to identify SGD in the terrace. Cation and anion concentrations were determined in water samples from open wells, bore wells and push points collected along the beach shoreline (Srinivas et al. 2020). A systematic sampling protocol was followed: each bottle was completely sealed and the cations and anions in the collected samples were monitored. The major ion concentrations were analysed in the laboratory of V.O. Chidambaram College, Thoothukudi.
Results and discussion
The study area is tectonically associated with the Archaean-Tertiary contact along the NNE-SSW linear fault trend of the Tamirabarani River. Microscopic and megascopic examination showed rocks and minerals such as calcarenite, kankar, shell limestone, clay, sandy soil, sandy clay, sandy loam, sandstone, calcareous limestone and gneissic rock, with the charnockite basement formed in a folded structure (Figs. 1, 2, 3, 4). The azimuthal square array method is a supporting tool for the indirect identification of soil depth, rock type, and seawater versus freshwater, with about 10 Ω.m at 10 m depth in the shallow perched aquifer and about 4 Ω.m at 70 m depth (Figs. 5, 6, 7, 8, 9). In the 2D ERI sections, SGD was recognised in profile 1 at 10 m depth with a resistivity of 4-5 Ω.m and in profile 2 at 13 m depth with about 4 Ω.m (Figs. 10, 11, 12).
The grain-size analysis was used to identify fine and medium sand in the dune, beach and estuarine environments in and around the terrace area (Figs. 19, 20; Table 2).
The groundwater quality and geochemical facies were studied to distinguish freshwater from seawater. The systematic geochemical analysis gave the minimum and maximum cation and anion values (Ca, Mg, Na, K, HCO3, SO4, NO3), placing samples 1, 2, 3, 5, 8 and 10 in the freshwater phase and samples 4, 6, 7 and 9 in the saline/seawater-intrusion zone (Table 3). In the hydrochemical diagram, samples T1, T2, T3, T5, T8 and T10 plot in the freshwater zone, while the remaining samples T4, T6, T7 and T9 plot in the seawater-intrusion zone. The Mg²⁺-HCO3⁻ facies corresponds to the freshwater zone (Fig. 21), and the Ca²⁺-HCO3⁻ facies to the seawater-intrusion zone of the study area. The shallow aquifer acts as a permeable, perched lens whose water table is tapped by the "Nallikinaru" well; this small open freshwater well supplies water for the Murugan Temple, which is visited by about 25,000 people every day. Soil and rock samples from bore wells were used to validate the soil and rock types identified by the azimuthal square array, magnetotelluric and 2D ERI data. The combined geophysical and geochemical assessment identified the calcarenite rock acting as a leaky aquifer carrying water from Avudaiyur Kulam (on the western side of the terrace) to the seashore. The study is useful for estimating the SGD flux rate for public use of groundwater in the coastal aquifer system.
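As an illustration of this kind of grouping, the sketch below converts major-ion concentrations to meq/L and labels each sample by its dominant cation-anion pair; the concentrations and the Na-Cl flag used here are generic placeholders, not the values or the facies criterion of Table 3 and Fig. 21.

```python
# Generic sketch of hydrochemical facies grouping from major-ion data (values are hypothetical,
# not the concentrations reported in Table 3). Concentrations in mg/L are converted to meq/L
# and the dominant cation-anion pair is used to label each sample.
EQUIV_WEIGHT = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
                "HCO3": 61.02, "SO4": 48.03, "Cl": 35.45, "NO3": 62.00}

samples = {  # mg/L, illustrative only
    "T1": {"Ca": 60, "Mg": 25, "Na": 40, "K": 5, "HCO3": 220, "SO4": 30, "Cl": 55, "NO3": 10},
    "T4": {"Ca": 90, "Mg": 70, "Na": 900, "K": 30, "HCO3": 180, "SO4": 210, "Cl": 1600, "NO3": 5},
}

def facies(ions_mg_per_l):
    """Return the dominant cation-anion pair of a sample, in meq/L terms."""
    meq = {ion: c / EQUIV_WEIGHT[ion] for ion, c in ions_mg_per_l.items()}
    cation = max(("Ca", "Mg", "Na", "K"), key=lambda i: meq[i])
    anion = max(("HCO3", "SO4", "Cl", "NO3"), key=lambda i: meq[i])
    return f"{cation}-{anion}"

for name, ions in samples.items():
    f = facies(ions)
    zone = "possible seawater influence" if f == "Na-Cl" else "fresh/mixed water"
    print(f"{name}: {f} facies -> {zone}")
```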
Conclusion
The study has analysed the submarine groundwater discharge that takes place at Tiruchendur through subsurface water penetrating towards the sea, and it was used to investigate the type of rock and the quality and quantity of soil and groundwater. The focus is on SGD through porous, permeable sandstone for public utility. Since freshwater is much less dense than seawater, it floats towards the surface in coastal areas; this floating freshwater (SGD from a perched aquifer) oozes out in the seaside area of the Murugan Temple at Tiruchendur. The cliffs and rocky coast were formed by uplift, and freshwater springs can be seen beneath the sea. The porous sandstone and sand bar are sandy in nature, with dunes occurring along Tiruchendur beach. The SGD study was carried out using electrical resistivity and groundwater geochemical parameters, together with particle-size analysis of the behaviour of water in the soils and rocks; the resistivity interpretation was based on the VES, 2D ERI and magnetotelluric profiles, which place SGD beneath the sea at levels of about 10, 30, 60 and 90 feet. Groundwater samples from dug, open and bore wells in the study area were collected to assess the quality and quantity of SGD water for public utility. Groundwater quality and geochemical facies were used to distinguish freshwater from seawater. The minimum and maximum values of Ca, Mg, Na, K, HCO3, SO4 and NO3 place samples 1, 2, 3, 5, 8 and 10 in the freshwater phase, while samples 4, 6, 7 and 9 fall in the zone invaded by seawater. In the hydrochemical diagram, samples T1, T2, T3, T5, T8 and T10 plot in the freshwater area, and the remaining samples T4, T6, T7 and T9 show seawater intrusion. In the coastal environment, SGD water mixes with fresh, saline and sea water. The significant variation of water geochemistry, together with electrical resistivity, has been the method used to trace SGD flow in the porous sandstone of the Tiruchendur area.
Phase Diagrams of Systems of 2 and 3 levels in the presence of a Radiation Field
We study the structure of the phase diagram for systems consisting of 2- and 3-level particles dipolarly interacting with a 1-mode electromagnetic field, inside a cavity, paying particular attention to the case of a finite number of particles, and showing that the divergences that appear in other treatments are a consequence of the mathematical approximations employed and can be avoided by studying the system in an exact manner quantum-mechanically or via a catastrophe formalism with variational trial states that satisfy the symmetries of the appropriate Hamiltonians. These variational states give an excellent approximation not only to the exact quantum phase space, but also to the energy spectrum and the expectation values of the atomic and field operators. Furthermore, they allow for analytic expressions in many of the cases studied. We find the loci of the transitions in phase space from one phase to the other, and the order of the quantum phase transitions is determined explicitly for each of the configurations, with and without detuning. We also derive the critical exponents for the various systems, and the phase structure at the triple point present in the Ξ-configuration of 3-level systems is studied.
Introduction
While some observed phenomena such as the Rabi cycles in 2-state quantum systems may be explained by a semi-classical theory, other occurrences such as the revival of the atomic population inversion after its collapse [1][2][3] are quantum effects derived as a consequence of the discreteness of the field states. The revival property appears as well, for instance, in the dynamics of electron currents in monolayer graphene subject to a magnetic field [4]. Even fractional revivals have been identified with information entropies in different physical systems of interest [5]. (A review of the formalism required to understand some aspects of the revival behaviour is presented in [6].) These and other purely quantum effects need to be studied through a quantum optics model such as the Jaynes-Cummings model (JCM) [7], which describes the behaviour of a 2-level system in the presence of a quantised radiation field. This model works very well when the radiation field and the system energy gap are close to one another and of the order of optical frequencies ($\sim 10^{15}$ Hz); this approximation is the so-called rotating wave approximation (RWA). The extension of the model to many "atoms" or systems is the Tavis-Cummings model (TCM) [8], and the removal of the RWA approximation including the so-called counter-rotating terms leads to the Dicke model (DM) [9], which describes the interaction of a single mode quantized radiation field with a sample of $N_A$ two-level atoms located inside an optical cavity, in the dipolar approximation (i.e., located within a distance smaller than the wavelength of the radiation). (Hereafter we will refer to "atoms", but the theory applies to any finite-level system, including spin systems and Bose-Einstein condensates.) The Dicke Hamiltonian has the expression
$$H = \hbar\,\omega_F\, a^{\dagger}a + \tilde{\omega}_A\, J_z + \frac{\tilde{\gamma}}{\sqrt{N_A}}\left(a^{\dagger}J_{-} + a\,J_{+}\right) + \frac{\tilde{\gamma}}{\sqrt{N_A}}\left(a^{\dagger}J_{+} + a\,J_{-}\right). \qquad (1)$$
Here, $N_A$ is the number of particles; the first term on the rhs represents the field Hamiltonian, where $\omega_F$ is the field frequency and $a^{\dagger}$, $a$ are the creation and annihilation photon operators; the second term represents the atomic Hamiltonian, with $\tilde{\omega}_A$ the atomic energy-level difference, and $J_z$ the atomic relative population operator. The 2 last terms represent the interaction Hamiltonian; we have written them separately in order to differentiate the rotating term (first), with $J_{\pm}$ the atomic transition operators, and the counter-rotating term (second). $\tilde{\gamma}$ is the dipolar coupling constant. The parameters appearing in (1) are related to the physical properties of the atom/system, and have dimensions as shown in Table 1, where d is the dipole moment of the atom, e the electron charge, and ρ the atomic density inside the quantisation volume. It is convenient to redefine $\omega_A = \tilde{\omega}_A/\omega_F$, $\gamma = \tilde{\gamma}/\omega_F$, and take $\omega_F = 1$ (i.e., measure frequency in units of the field frequency), which we do hereafter.
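As an illustration of the exact numerical treatment for a finite number of particles referred to in what follows, the sketch below builds the Hamiltonian (1) in a truncated Fock basis coupled to the collective pseudospin basis and diagonalises it; the photon cutoff and the parameter values are illustrative choices, not those used in the figures of this work.

```python
# Sketch: build the Dicke Hamiltonian (1) for N_A two-level atoms in a truncated photon basis
# and diagonalise it numerically. Cutoff and parameters are illustrative choices.
import numpy as np

def dicke_hamiltonian(n_atoms, n_max, omega_f=1.0, omega_a=1.0, gamma=0.3):
    j = n_atoms / 2.0
    dim_f = n_max + 1                      # Fock states |0>, ..., |n_max>
    dim_a = n_atoms + 1                    # |j, m>, m = -j, ..., +j

    a = np.diag(np.sqrt(np.arange(1, dim_f)), k=1)   # annihilation operator in the Fock basis
    n_op = a.T @ a                                    # photon number operator

    m = np.arange(-j, j + 1)
    jz = np.diag(m)
    jp = np.diag(np.sqrt(j*(j+1) - m[:-1]*(m[:-1] + 1)), k=-1)  # J+ : |j,m> -> |j,m+1>
    jm = jp.T                                                    # J- = (J+)^dagger

    eye_f, eye_a = np.eye(dim_f), np.eye(dim_a)
    # rotating + counter-rotating terms: (gamma/sqrt(N_A)) (a^dagger + a)(J+ + J-)
    coupling = (np.kron(a.T, jp + jm) + np.kron(a, jp + jm)) * gamma / np.sqrt(n_atoms)
    return omega_f * np.kron(n_op, eye_a) + omega_a * np.kron(eye_f, jz) + coupling

H = dicke_hamiltonian(n_atoms=6, n_max=40)
evals = np.linalg.eigvalsh(H)
print("Ground-state energy:", evals[0])
```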
The expression (1) was derived from a multipolar expansion of the dipole interaction with the electromagnetic field. A different derivation related to the radiation gauge, where the long wavelength approximation is considered as well as the approximation to 2-level systems, leads to an extra diamagnetic term quadratic in the electromagnetic vector potential A, of the formκγ 2 N A (a † + a) 2 , withκγ 2 the diamagnetic coupling constant. 2πρ This has led to some confusion in the literature as to the correct expression to use. Both the multipolar and the radiation Hamiltonians are related by a unitary gauge transformation, thus yielding the same physics; it is the approximation to 2-level systems that breaks this symmetry (cf. [10] for details). When using the Hamiltonian derived from the radiation gauge, the Thomas-Reiche-Kuhn sum rule would place contradictory bounds to the parameters of the model [11]; furthermore, since the coupling strengthγ is much smaller than the atomic level separationω A for optical systems, it was believed that gauge invariance requires the presence of the diamagnetic term [12]. To the benefit of the Hamiltonian in (1), not only has a very strong case been made in its favour as a consistent description of the interaction of a one-mode light field with the internal excitation of atoms inside a cavity [13], but experimental results indicate that transitions apparently forbidden by the no-go theorem from the sum-rule mentioned above can actually be observed [14,15] by using Raman transitions between ground states in an atomic ensemble.
An important feature of atom-field interactions is the presence of phase transitions [16] from the normal to a collective behaviour: effect involving all N A atoms in the sample, where the decay rate is proportional to N 2 A instead of N A (the expected result for independent atom emission). Quantum fluctuations may drive a change in the ground state of a system, even at zero temperature, T = 0. A simple way to see this is to consider a Hamiltonian H(χ), whose degrees of freedom vary as a function of a dimensionless coupling parameter χ. The ground-state energy of H(χ) would generally be a smooth, analytic function of χ [17]. Exceptions occur, for example, in the case when χ couples only to a conserved quantity where H 0 and H 1 commute. Then H 0 and H 1 can be simultaneously diagonalised, but while the eigenvalues vary with χ, the eigenfunctions are independent of χ. We can then have a level-crossing where an excited level becomes the ground state at a certain critical value of the coupling χ = χ c (cf. Fig. 1). For χ χ c and χ χ c the ground state of the system is clear; for χ → χ c terms start to compete, and the system would undergo a phase transition: we say that the limiting states realise distinct quantum phases. The crossing of levels in the spectrum of a physical system is an indication of a first order transition while, in general, the second order ones correspond to other causes (e.g., avoided crossings) and they are continuous. Each phase is a region of analyticity of the free energy per particle, and different phases are separated by separatrices which are singular loci of the free energy. Thus, the study of the phase diagram of a system is an important means to understand its behaviour.
There have been various contributions to the study of phase transitions in 2-level systems [18][19][20][21]. In particular, the Husimi function has been used for phase space analysis [22] and entropic uncertainty relations to detect quantum phase transitions [23]. Here, we want to stress the role of the catastrophe formalism to determine significant changes in the ground state of the system under small changes in the parameters of the model. Quantum phase transitions and stability properties have been extensively studied through the catastrophe formalism and the coherent states theory [24][25][26][27][28][29][30][31]. In particular, as these quantum systems cannot be solved analytically in an exact manner (except in the thermodynamic limit), in the latter references a procedure based on the use of the fidelity susceptibility of neighbouring states was established to determine with fine precision the location of the separatrices (but see also [19,32]).
For applications such as quantum memories and other quantum information and quantum optics purposes it seems more appropriate to use 3-level atoms. Furthermore, approximations to 3-level systems in the Λ configuration are plentiful: e.g., alkali metals, as confirmed by the electromagnetically induced transparency effect. For practical applications, off-resonant systems protect one from spontaneous emission and have thus been favored because of their advantage when subjected to coherent manipulations; in fact, schemes have been presented for various quantum gates using 3-level atoms and trapped ions [35,36]. The study of 3-level systems thus deserves attention. In particular, the importance of their phase diagrams has drawn the attention of some authors [33,34]. In [33] the energy surface method was applied to obtain an estimation of the ground state energy and the phase diagrams as well as the order of the phase transitions, in the three configurations, using the multipolar Hamiltonian. The results were compared with those of the exact quantum solution. In [34] the radiation Hamiltonian containing the diamagnetic term is used, in the Holstein-Primakoff realisation. They analyse the phase diagram of the three configurations in the thermodynamic limit, taking care of the regions where the Thomas-Reiche-Kuhn (TRK) sum rule holds, and they show that transitions from the normal to the collective regimes are possible even when the TRK rule is satisfied; this is in direct contrast to the situation of 2-level systems.
Here, for the aforementioned reasons, we will consider the multipolar Hamiltonian and we will make use of the catastrophe formalism to study 3-level systems. Their Hamiltonian may be written as [37] where H D and H int are the diagonal and interaction contributions, respectively given by Here a † , a are as before the creation and annihilation electromagnetic field operators, ij the collective matter operators obeying the U (3) algebra with a possible realisation A (s) ij = |i (s) j (s) |, and the total number of atoms is given by The i-th level frequency is denoted by ω i with the convention ω 1 ≤ ω 2 ≤ ω 3 , and the coupling parameter between levels i and j is µ ij . The different atomic configurations are chosen by taking the appropriate value µ ij = 0 (cf. Fig. 2). We have written Ω (instead of ω F ) for the frequency of the radiation field. The way in which equations (4,5) are written lends itself to be easily generalised for a system of n-level atoms interacting with m-modes of a radiation field where the values of j, k, are determined by the possible transitions according to the specific atomic configuration, and where we have N A = n k=1 A kk . In this work we shall review and extend the study of the phase diagrams for 2-and 3-level quantum systems consisting of a finite number of atoms interacting through a 1-mode electromagnetic field. We show how the use of variational trial states that are adapted to the symmetry of the system Hamiltonian give an excellent approximation not only to the exact quantum phase space, but also to the energy spectrum and the expectation values of the atomic and field operators. When in the RWA approximation, the total number of excitations is an integral of motion of the system; using trial states adapted to the symmetry of the Hamiltonian then means essentially projecting onto this integral of motion. In the full model (rotating and counter-rotating terms), however, it is the parity in the number of excitations that is conserved, and to obtain symmetryadapted states (SAS) we therefore take linear combinations of coherent states of the same parity. These symmetry adapted states were first used in [38], named "even and odd coherent states", as nonclassical states for the study of singular non-stationary quantummechanical harmonic oscillators, and later to discuss the properties of the tomographic representation of quantum mechanics [39,40]. Here, we use them to look in detail at the structure of the phase diagram and the behaviour of the phase changes. We also present some virtues and limitations of these symmetry-adapted states, use the fidelity and the fidelity susceptibility of neighbouring quantum states to find the loci of the transitions in phase space from one phase to the other, and derive the critical exponents for the various systems.
This work is dedicated with great appreciation to Professors Vladimir and Margarita Man'ko in their joint 150-year celebration, for their numerous contributions to the development and promotion of quantum optics and mathematical physics.
Two-level Systems
The simplest completely soluble quantum-mechanical model of one 2-level atom in an electromagnetic field is described by the Jaynes-Cummings (JCM) model [7]. This, and its generalisation to N A identical 2-level atoms given by Tavis and Cummings [8], the TCM model, were fundamental to study basic properties of quantum electrodynamics and to understand phenomena like the existence of collapse and revivals in the Rabi oscillations (observed experimentally for the first time in 1987 [3]). Both the JCM and the TCM models discard the terms in the Hamiltonian which do not conserve the total number of excitations of the field plus matter by using the RWA approximation. When these terms are considered we obtain the full Dicke model (DM) [9]. In this Section we consider the phase diagrams presented by these models for the ground state, both in the case of a finite number of atoms N A and in the thermodynamic limit, by making use of the catastrophe formalism to determine when significant changes to the ground state occur for small changes of the external environment (the parameters of the model). The influence of the phase transitions on the behaviour of observables of interest for the matter and the field are also presented.
The choice of the use of the catastrophe formalism allows us to obtain analytic descriptions for the phase diagram in parameter space, which distinguishes the normal and collective regions, and which gives us all the quantum phase transitions of the ground state from one region to the other as we vary the interaction parameters (the matter-field coupling constants) of the model, in functional form. This approach thus allows also for the study of the asymptotic behaviour in any of the quantities of interest: the number of particles, the constants of motion, and the interaction parameters themselves.
Catastrophe theory derives from the research of René Thom in topology and differential analysis on the structural stability of differentiable maps [41]. Dissipative systems, for example, always reach equilibrium; this equilibrium is characterised by a certain function µ(x) which at x represents the minimum of usually the energy of the system, and when this minimum µ(x) is stable x will be a regular point in the space of parameters describing the system. But when the energy changes abruptly at µ(x) due to slight disturbances the local minimum is destroyed in a neighbourhood of x, µ(x) ceases to be an attractor of the dynamics, and x is a catastrophic point: the state of the system will present sudden jumps from x to another point x (another attractor) and back. The dynamics of the system thus bifurcates. It is these bifurcations that we are interested in studying analytically.
The Jaynes-Cummings and the Tavis-Cummings Models
A 2-level system of N A atoms interacting dipolarly with an electromagnetic field of frequency ω F is described by the Tavis-Cummings Hamiltonian [8], which we may write as where we have seth = 1 and all quantities are dimensionless. We have also divided the expression by N A in order to consider an intrinsic Hamiltonian, which we do hereafter. We can set ω F = 1 (i.e., measure frequency in units of the field frequency), and define a detuning parameter ∆ = ω F − ω A = 1 − ω A ; thus, ∆ = 0 when particles and field are in resonance and ∆ = 0 when away from resonance. It is convenient to introducê Λ = Ĵ2 + 1/4 − 1/2 +Ĵ z + a † a because it turns out to be an integral of motion for the system. Its eigenvalues are λ = ν + m + j, with j = N/2, j + m the number of atoms in their excited state, and ν the number of photons. The Hamiltonian can then be rewritten as The eigenvectors and eigenvalues of H can be obtained through diagonalisation of its associated matrix, thus allowing us to calculate the expectation value of all important field and matter observables, as well as the entanglement entropy, the squeezing parameter, and the population distributions [25,26]. For instance, taking the natural Hilbert space basis |ν, j, m , where ν is the eigenvalue associated to the photon number operator, j(j + 1) is the eigenvalue associated to the total angular momentum operator, m is the particle occupation number |m| ≤ j ≤ N A /2, where the j = N A /2 holds for identical atoms. Substituting the label m for the eigenvalue of the constant of motion, λ = ν + m + j, we can obtain the full energy spectrum of H. This is shown in Figure 3 (left) for N A = 6 atoms, λ up to 10, and a detuning parameter of ∆ = 0.2. One can see the avoided crossings due to ∆ = 0; had we ∆ = 0 they would touch at γ = 0 (cf. Figure 3 (right)). Pairs of curves of the same colour emanating from almost the same point on the energy axis (or the same one in the case for ∆ = 0) correspond to the same value of λ. The thicker horizontal line at E = −0.4 (magenta) is the energy of the ground state in the normal region.
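Since λ labels invariant subspaces, the Hamiltonian matrix is block diagonal and each block can be diagonalised separately. The sketch below builds one such block for a Tavis-Cummings Hamiltonian of the form H = ω_F a†a + ω_A J_z + (γ/√N_A)(a†J_- + aJ_+); the normalisation (not divided by N_A, unlike the intrinsic Hamiltonian used in the text) and the parameter values are illustrative choices.

```python
# Sketch: the Tavis-Cummings Hamiltonian is block diagonal in the constant of motion
# lambda = nu + m + j, so each block can be diagonalised on its own. Parameters are illustrative.
import numpy as np

def tcm_block(n_atoms, lam, omega_f=1.0, omega_a=0.8, gamma=0.6):
    j = n_atoms / 2.0
    # basis |nu, m> with nu = lam - m - j >= 0 and -j <= m <= j
    ms = [m for m in np.arange(-j, j + 1) if lam - m - j >= 0]
    dim = len(ms)
    H = np.zeros((dim, dim))
    for k, m in enumerate(ms):
        nu = lam - m - j
        H[k, k] = omega_f * nu + omega_a * m
        if k + 1 < dim:  # a J+ couples |nu, m> to |nu - 1, m + 1>
            amp = np.sqrt(nu) * np.sqrt(j*(j+1) - m*(m+1)) * gamma / np.sqrt(n_atoms)
            H[k, k + 1] = H[k + 1, k] = amp
    return H

for lam in range(4):
    print(f"lambda = {lam}:", np.round(np.linalg.eigvalsh(tcm_block(6, lam)), 3))
```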
We are interested, however, in studying the system analytically. To this end, we propose to use as a test-state a direct product of coherent Heisenberg-Weyl HW (1) states and SU (2) states |α, ζ = |α ⊗ |ζ as: being the parameters on the Bloch sphere and (q, p) the field quadratures.
The energy surface, defined as the expectation value of the Hamiltonian on the test state: H = α, ζ|H|α, ζ , is then given by The critical points of H determine 3 regions, as given by θ c = 0 (North Pole), θ c = π (South Pole), and θ c = arccos(ω A /γ 2 ) (Parallels); for each of these regions the minima of the energy E 0 and values λ c := Λ c (the expectation values of the constant of motion) are as follows: At these critical points, q c = − N A /2 γ sin θ c cos φ c , and p c = N A /2 γ sin θ c sin φ c , so that matter and field variables combine. As φ is a cyclic variable, φ c may be taken arbitrarily. We set φ c = 0, and the expressions in terms of this variable may be recovered by performing a rotation through an angle φ around the z-axis in the appropriate phase space: (q, p) and (J x , J y ) for field and matter quantities respectively.
We can write explicitly the form that the states take in each of these 3 regions: with The 3 regions define also a separatrix, where the Hessian of H is singular. This is given by ω A = ±γ 2 , and is shown in Fig. 4. Crossing the separatrix along paths I, II, III, and IV (horizontal (green) and vertical (brown) straight lines in the figure) leads to second-order phase transitions; crossing it along path V to first order transitions.
In general, these coherent variational states approximate very well the properties of the ground state of the quantum solution [25]. This is true for the energy, the constant of motion λ(γ), and the matter observables J z and its fluctuation squared (∆J z ) 2 , etc. Even the expectation value of the number of photons n = N ph is well approximated; but its fluctuation, as well as other properties of the system such as the occupation probabilities, are not: Fig. 5 (left) shows how bad an approximation to the photon number fluctuation we get. The noticeable differences arise from the fact that the coherent state contains contributions from all eigenvalues λ = ν + m + j of Λ, and therefore does not reflect the symmetry of the Hamiltonian leading to the constant of motion.
One may maintain the symmetry through a projection of the variational tensorial product of coherent states onto the value of the constant of motion of the TCM which minimises the (classical) energy of the ground state. This projection restores the Hamiltonian symmetry and is amiable to analytical calculations.
Projecting, the state becomes . The factor N is the normalisation factor. With respect to these projected states, the energy surface is given in terms of associated Laguerre polynomials [26] as follows A better way to measure the "distance" between states is via the fidelity, where 1 and 2 denote the density matrices of the states in question. For pure states, this definition coincides with the square of the scalar product between the states [19]. Figure 2.1 shows a perfect overlap F = 1 between the projected and quantum states in the normal region, dropping to F = 0.996 when crossing the separatrix into the Parallels region, only to recover again towards F = 1 as γ grows. Even if our approximation by projected (symmetry-adapted) states is not exact, an excellent approximation to the exact quantum solution of the ground state of the TCM model is obtained. What is gained is that these states have an analytical form in terms of the model parameters and allow for the analytical calculation of the expectation values of field and matter observables, as well as for the study of the phase diagram of the system.
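For pure states the fidelity is simply the squared modulus of the overlap, which is easy to evaluate numerically; the sketch below does so for ground states obtained at two nearby couplings, reusing the dicke_hamiltonian helper from the earlier sketch with illustrative parameters.

```python
# Sketch: fidelity between two pure states as the squared modulus of their overlap,
# applied to numerically obtained ground states at two nearby couplings.
# Assumes the dicke_hamiltonian helper defined in the earlier sketch.
import numpy as np

def ground_state(H):
    evals, evecs = np.linalg.eigh(H)
    return evecs[:, 0]

def fidelity(psi1, psi2):
    return abs(np.vdot(psi1, psi2)) ** 2

psi_a = ground_state(dicke_hamiltonian(n_atoms=6, n_max=40, gamma=0.45))
psi_b = ground_state(dicke_hamiltonian(n_atoms=6, n_max=40, gamma=0.50))
print("F =", fidelity(psi_a, psi_b))
```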
The Dicke Model
When the RWA approximation is not taken, we have the full Dicke Hamiltonian given in equation (1). Once again, one may obtain analytical expressions for the energy and expectation values of the relevant operators of the system via the use of the Heisenberg-Weyl and SU (2) coherent states (11) as trial states, and the variational procedure described above. This trial state contains N = 2j particles distributed in all the possible ways between the two levels and up to an infinite number of photons in the cavity. The energy surface in this case is given by and the separatrix shrinks to ω A = 4γ 2 c , for γ c the critical value of γ. As before, the crossings of this separatrix are second-order phase transitions, except for the firstorder crossing through the origin. The energy minima in the normal and collective (superradiant) regions are and the expected number of photons are which calls for the definition of x = γ/γ c . AsΛ = Ĵ2 + 1/4−1/2+Ĵ z +a † a is no longer a constant of motion for the system we cannot simply project onto one of its eigenvalues, rather, we have a dynamical symmetry associated with the projectors of the symmetric and antisymmetric representations of the cyclic group C 2 , given by This symmetry allows, however, for the classification of the eigenstates in terms of the parity of the eigenvalues λ = j + m + ν ofΛ [27]. Adapting the coherent states to the parity symmetry of the Hamiltonian then amounts to sum over λ even or odd, with two resulting orthogonal states |α, ζ, ± . For these states the energy surface associated to the superradiant regime takes the form with and the limit x → 1 gives the expressions for the normal region. The fidelity between these symmetry-adapted states and the exact quantum states is very close to 1 except in a small vicinity of the transition region in phase space, so it is no surprise that they provide an excellent agreement with the expectation values of the quantum operators for the system, an example of which is shown in Fig. 7 (left) for the fluctuation in the expectation value of the number of photons (∆n) 2 /N A as given by our projected state approximation (continuous, red curve) compared with the exact quantum solution (discontinuous, blue curve), as functions of γ. We have used N A = 20, and the resonant condition ∆ = 0. If we calculate the overlap between the coherent and adapted states we obtain Phase Diagrams of Systems of 2 and 3 levels in the presence of a Radiation Field 14 and, since the behaviour of F falls very rapidly with γ (cf. Fig. 7 (right)), this overlap will be at best equal to 1/2, which makes the ordinary coherent states a good approximation only in special cases. Appendix A compares the expectation values and fluctuations of matter and field observables for the coherent and symmetry-adapted states, evaluated at the critical points for the energy surface (18). For expectation values different from zero in the symmetry-adapted states, the coherent state results can be obtained from the former by letting F go to zero. Notable exceptions are the field quadratures (q, p) and the atomic operator fluctuations. For large N A the function F tends to zero even more rapidly; this is why coherent states have been so successful in the past as trial functions. Like the quantum states, the symmetry-adapted states show no divergences for field or matter expectation values at the phase transition. This is in contrast with results found previously [15,21,42], which are an artifact of an inappropriate truncation of the Hamiltonian. 
For more good properties of the symmetry-adapted states, including probability distributions of photons, of excited atoms, and their joint distribution, cf. [28]. In particular, even though the coherent states, the symmetry-adapted states, and the quantum states, are quantities arrived at via very different methods, they show a universal character in that a universal parametric curve for any number of atoms N A is obtained for the first quadrature of the electromagnetic field, q, and for the atomic relative population J z , as implicit functions of the atom-field coupling parameter γ, valid for both the ground-and first-excited states [29]. Furthermore, for all values of the coupling parameter and again any number of atoms, the behaviour of the number of photons vs. the relative atomic population is universal.
Critical Exponents
For a homogeneous function f (r) we have f (βr) = g(β) f (r) for all values of β. The scaling function g(β) is of the form g(β) = β s ; s is called the critical exponent. It is known that the singular part of many potentials in physics are homogeneous functions near second-order phase transitions; in particular, this is true for all thermodynamic potentials [43]. The behaviour of important observables of a system near phase transitions may thus be described by the system's critical exponent, and these are believed to be universal with respect to physical systems.
Our treatment for finite 2-level systems in a cavity, in the presence of a radiation field, allows us to study the critical value of the atom-field coupling parameter γ c as a function of the number of atoms N A , from which its critical exponent may be derived. Figure 8 shows this relationship for the ground state of both the quantum states (left) and the symmetry-adapted states (SAS) (right). For the quantum states the points correspond to a numerical solution from diagonalising the Hamiltonian, and the continuous curve to a model fit. The value of γ c was obtained by calculating in parameter space the place where the fidelity function between neighbouring states (cf. equation (30) variables is plotted for a more demanding fit, obtaining ln γ q c − or, equivalently, Except for a small vicinity of the phase transition, the SAS states do approximate very well the quantum solutions. However, the critical exponent obtained for the asymptotic behaviour of the adapted states is −11/21, as opposed to −2/3, as shown in the figure (right). This is precisely because the evaluation takes place at the phase transition point, where the states (quantum and adapted) differ most [10]. The value of γ c for the SAS states was obtained by calculating in parameter space the place where the minimum of the energy E + min for the state |α c ζ c , + presents a discontinuity. Since we are interested in the asymptotic behaviour, we took N A from 200 to 1000; the continuous curve shows the fit γ sas Table 2 shows a sample of values of (N A , γ c ) for the quantum and the SAS ground states, in order to make explicit the fact that for small N A the values of the quantum critical interaction parameter γ q c differ considerably from that of the SAS states γ sas c . This difference tends to zero as N A increases, and in the limit N A → ∞ the phase transition region in phase space coincides for both states at γ c = 0.5.
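The log-log fit used to extract such an exponent is elementary; the sketch below performs it on synthetic (N_A, γ_c) pairs generated with a −2/3 power law, not on the values of Table 2.

```python
# Sketch: extracting a critical exponent from the scaling of the critical coupling with the
# number of atoms via a log-log fit. The (N_A, gamma_c) pairs are synthetic, for illustration.
import numpy as np

n_atoms = np.array([200, 400, 600, 800, 1000])
gamma_c = 0.5 + 0.8 * n_atoms ** (-2.0 / 3.0)   # synthetic data built with exponent -2/3

x = np.log(n_atoms)
y = np.log(gamma_c - 0.5)        # distance to the thermodynamic-limit value 1/2
slope, intercept = np.polyfit(x, y, 1)
print(f"fitted critical exponent: {slope:.3f}")   # close to -2/3 for this synthetic data
```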
It is interesting to note, from equation (24), that only for γ = 0 (i.e., no matter-field interaction) do we have i.e., the overlap between coherent and symmetry-adapted states is perfect only when the interaction Hamiltonian H int vanishes. As soon as there is an interaction, no matter how small, the states differ. This is due to the fact that the ground states coincide only at γ = 0. Even in the normal regime, where the coherent ground state has exactly zero photons, the SAS ground state (just as the quantum ground state) is a superposition of states with an expectation value for N ph different from zero. This is true for any finite number of atoms N A . Figure 9 shows the energy per particle and the expected number of photons per particle for the ground symmetry-adapted state inside the normal region. We have taken N A = 10 to make the distinction visually clear.
In the asymptotic limit x → ∞, equation (24) gives a value of 1/2 for the overlap of the coherent and the SAS ground states. The same is true in the limit N A → ∞. This is to be expected, as the SAS ground state has contributions only from the even-parity components of the coherent ground state.
Three-level Systems
A 3-level system of N A atoms interacting dipolarly with an electromagnetic field of frequency Ω is described by the intrinsic Hamiltonian given in equations (3,4,5). Once again, we may take Ω = 1 and measure all frequencies in units of the field frequency. As mentioned before, the i-th level atomic frequency is denoted by ω i with the convention ω 1 ≤ ω 2 ≤ ω 3 , and the coupling parameter between levels i and j is µ ij . The three different atomic configurations are chosen by taking the appropriate value µ ij = 0 (cf. Fig. 2). It is also convenient to define a detuning parameter ∆ ij = ω i − ω j − Ω between levels i and j.
In the RWA approximation the Hamiltonian reduces to [37] and it has 2 constants of motion, viz., the total number of atoms N A = 3 i=1 A ii , and the total number of excitations M = a † a+λ 2 A 22 +λ 3 A 33 , where the value of λ i (i = 2, 3) depends on the configuration taken (cf. Table 3).
Notice that the Hamiltonian (29) is invariant under the transformation a → −a and a † → −a † , which preserves the commutation relations of the bosonic operators. For this reason we consider only positive values for µ ij . As the system cannot be solved analytically, one may solve via numerical diagonalization. A natural basis in which we diagonalize our Hamiltonian is |ν; q, r [33]. Here, ν represents the number of photons of the Fock state; r, q − r and N A − q are the atomic population of levels 1, 2, 3, respectively.
In order to study the phase diagram of the system we make use of the fidelity F and the fidelity susceptibility χ of neighbouring states [32,44], defined for pure states differing by a small change δτ of a control parameter τ by $F(\tau,\tau+\delta\tau)=|\langle\psi(\tau)|\psi(\tau+\delta\tau)\rangle|^{2}$ and $\chi(\tau)=\lim_{\delta\tau\to 0}\,2\,[1-F(\tau,\tau+\delta\tau)]/\delta\tau^{2}$. Whereas the fidelity is a measure of the distance between states which vary as functions of the control parameter τ, the fidelity susceptibility, essentially its second derivative with respect to the control parameter, is a more sensitive quantity. The fidelity goes to zero at each phase transition, as the nature of the ground state changes completely and becomes orthogonal to its neighbour; the fidelity susceptibility has divergences at these critical points in phase space. Crossing a separatrix produces a change in the total excitation number M.
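A finite-difference estimate of χ along a path in parameter space can be obtained as sketched below; for brevity the illustration scans the 2-level Dicke matrix from the earlier sketch rather than the 3-level Hamiltonian, and the step size and parameter window are arbitrary choices.

```python
# Sketch: fidelity susceptibility of the ground state estimated by finite differences,
# chi ~ 2(1 - F(tau, tau + dtau))/dtau^2, scanned over a control parameter.
# Assumes the dicke_hamiltonian and ground_state helpers from the earlier sketches.
import numpy as np

def fidelity_susceptibility(h_of_tau, tau, dtau=1e-3):
    psi1 = ground_state(h_of_tau(tau))
    psi2 = ground_state(h_of_tau(tau + dtau))
    F = abs(np.vdot(psi1, psi2)) ** 2
    return 2.0 * (1.0 - F) / dtau ** 2

taus = np.linspace(0.3, 0.7, 41)
chi = [fidelity_susceptibility(lambda g: dicke_hamiltonian(6, 40, gamma=g), t) for t in taus]
print("largest chi in the scanned window at gamma =", taus[int(np.argmax(chi))])
```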
To follow a similar procedure as for the 2-level systems, and be able to study the phase diagram analytically, we consider as a variational trial state the direct product of Heisenberg-Weyl HW (1) coherent states for the radiation part, |α} = e α a † |0 , and U (3) coherent states constructed by taking the exponential of the lowering generators acting on the highest weight states of U (3) [33] |ζ} where where the parameter γ 1 no longer appears since A 32 | [N a , 0, 0] = 0. It is convenient to use a polar form for the complex parameters and minimising with respect to these new parameters the energy surface H(α, ζ) = {α; ζ| H |α; ζ}/{α; ζ|α; ζ} in the RWA approximation takes the form where ρ c , ρ 2c and ρ 3c denote the critical values of the corresponding variables, and we have taken ρ 1 = 1. It is important to stress that equation (34) is valid for all three configurations. From this minimal surface the first separatrix corresponding to the phase change M = 0 → M = 0 (i.e., from the normal to the collective regimes) is given by [33] i) for the Ξ-configuration ii) for the Λ-configuration iii) for the V -configuration where ω ij = ω i − ω j and Θ is the Heaviside function. These are shown in Fig. 10 for double atomic resonance with respect to the radiation field frequency in the Ξconfiguration (i.e., ω 31 = 2 ω 21 = 2), a small detuning in the Λ-configuration (ω 21 = 0.2, ω 31 = 1), and double atomic resonance in the V -configuration (ω 21 = ω 31 = 1). For equal atomic detuning in the Λ-configuration the separatrix is identical to that of the V -configuration. Further analysis shows that in the Ξ-configuration the phase transition across µ 12 = √ ω 21 is of second-order, while that across the segment of the circumference is of first-order. In the Λ-configuration with unequal atomic detuning we have the same behaviour. In the Λ-configuration with equal atomic detuning and in the Vconfiguration, however, all transitions are of second-order. Being M = ν + λ 2 A 22 + λ 3 A 33 a constant of motion, we obtain states adapted to the symmetry of the Hamiltonian by projecting onto the appropriate value of M . This is done in practice by substituting ν = M −λ 2 A 22 −λ 3 A 33 and keeping the only relevant value of M . These are the (projected) SAS states in the RWA approximation.
In the thermodynamic limit, given by ν ∝ N A with N A → ∞, the loci in parameter space of the quantum phase transitions are exactly those shown by Fig. 10. But even for a small number of atoms the approximation to the separatrices given by the projected SAS states is remarkably good: the figure shows, in its lower right, the fidelity susceptibility divergences at each phase crossing along the path µ 12 = µ 23 −0.2 for the Ξ-configuration and N A = 2, as a function of µ 23 . Since µ 12 = 1 is fixed and independent of N A at this separatrix (vide infra), the projected SAS prediction gives µ 23 = 1.2 which compares well with the value of µ 23 = 1.28 for the first transition of the exact quantum ground state, even though N A = 2. This good approximation by the chosen variational states obeys the fact that the fidelity between the quantum and projected SAS ground states gives a perfect overlap except in a small vicinity of the phase transitions, as shown in Figure 11 (left). The reader may compare this vs. the overlap between the quantum and the coherent ground states shown at right. In both cases N A = 3 and we have chosen to illustrate the result for the V -configuration (those for the other configurations being very similar).
All phase transitions tend to those given by equations (35,36,37) as N A → ∞ with ν ∝ N A . In this thermodynamic limit these are the only ones that remain. For the V -configuration the consequent transitions take place at a family of curves congruent with and ever more distant to the one shown in Fig. 10, which approach the latter uniformly as N A grows. For the Ξ-configuration we have curves with similar shape to that shown in Fig. 10, with a vertical straight edge and an upper circular arc. The vertical edge tends to that at µ 12 = 1 as N A grows, while the circular arcs "slide down" the µ 23 -axis, intersect the arc of the transition M = 0 → M = 0 shown, and continues sliding down this arc tending to (µ 12 , µ 23 This shows how the loci of the quantum phase transitions change as the number of atoms grow. In the limit N A → ∞ they converge to the separatrix between the normal and collective regions. Figure 12. The subfigure at left shows the critical value µ 12 qc of µ 12 for the quantum transition M → M + 1 as a function of the number of atoms, i.e., how the transitions to the right of the straight vertical line µ 12 = 1 move as N A changes. They all tend to the limit µ 12 qc = 1, as given by equation (35) when N A → ∞. At right we plot µ 23 qc as a function of N A , to see how the phase transitions above the circular arc move; the first transition is M = 0 → M = 2 since the phase region M = 1 stops at µ 23 = √ 2 and does not reach the upper arc. We see that the point where these phase regions intersect the circular arc slide towards µ 23 = √ 2, again as given by equation (35). In the thermodynamic limit, then, the separatrix reduces to the line segment given by µ 12 = 1 and µ 23 ∈ [0, √ 2], plus the arc of circumference starting at µ 23 = √ 2. The Λ-configuration has a similar behaviour as the V -configuration when in double resonance, and a behaviour much like that of the Ξ-configuration when away from double resonance.
A Triple Point in Phase Space
The Ξ-configuration is special in that it shows a richer structure. In particular, it has a triple point in parameter space, corresponding to the place where the phases for M = 0, M = 1, and M = 2 meet [47]. The term triple point is mainly used in the context of fluids, where different phases of the fluid meet in parameter space. Here we use the same terminology since the different values of the total excitation number M correspond to completely different structures of the ground state, even though the energy is the same for all of them, and since these three regions meet at a point in parameter space as shown in Figure 13. The meaning is also the same as for a thermodynamic triple point: any fluctuation (in this case quantum) will drastically change the composition of the ground state. And since in the collective region we have a decay rate proportional to N A 2 , as opposed to N A for the normal region, this gives hope for experimental exploitation of the triple point.
In the RWA approximation and in double resonance this triple point resides at (µ 12 , µ 23 ) = (1, √ 2) (cf. Fig. 10). In the full model, contemplating the counter-rotating terms, we just divide these values by 2 (vide infra, Subsection 3.2). It is a fixed point, independent of N A , which subsists in the thermodynamic limit. It is also characteristic of the Ξ-configuration; it does not appear in the Λ or the V configurations. We can calculate the ground state |ψ gs at the triple point for each phase, by diagonalising the Hamiltonian in the basis |ν; q, r . In the RWA approximation one gets analytic expressions. For N A ≥ 2 we have: M=0 : : M=2 : It is clear that, even when we are at the same point in phase space, the ground state may acquire very different structures. Away from double resonance the triple point is still present, though its coordinates in phase space vary as well as the specific combination given by the equations above. When the number of excitations M is small the dimension of the Hilbert space does not depend on N A , making it possible to study the system in the limit N A → ∞. The energy spectrum, in particular, does not depend on µ 23 in this limit, and it shows a collapse of energy levels at precisely µ 12 = 1 for all values of M . Figure 14 shows this for M = 0 to M = 5, and it is interesting to compare it with the spectrum of the 2-level Tavis-Cummings model, Fig. 3. As a function of M , at the triple point, we have an equidistant spectrum with only even harmonics [47], and it is interesting to note that at µ 12 = 1 we have precisely all the even harmonics as degenerate energy levels, and no others.
The behaviour at the axis µ 12 = 0 is also interesting. The total degeneracy for each M found at µ 12 = 0 in the limit N A → ∞ only survives for finite N A at µ 23 = 0, i.e., when there is absolutely no matter-field coupling. As soon as the coupling is "turned on", this degeneracy breaks down. Figure 15 shows E vs. µ 12 for N A = 4 (left) and N A = 100 (right), when µ 23 = 0 (blue, continuous line) and when µ 23 = 1.5 (red, dashed line). While the ground and first excited states still show degeneracies at µ 12 = 0, these are broken for the second excited state.
Counter-Rotating Terms: the full model
When we do not make the rotating wave approximation, i.e., we include the counter-rotating terms in equations (3,5), minimising the energy surface H(α, ζ) = {α; ζ| H |α; ζ}/{α; ζ|α; ζ} with respect to the polar parameters takes the form [46] H(ρ c , ρ 2c , ρ 3c ) = 1 Comparing equations (39,34), the energy surfaces H and H RWA coincide if we identify This means that H RWA will inherit the properties of H at values of (µ ij ) RWA equal to 1 2 µ ij . (This is the same behaviour as that mentioned earlier for the Dicke model.) In particular, the shape of the phase diagram will be inherited in full at coordinates half those of the RWA scenario, and the order of the phase transitions will be the same. Whereas M is a constant of motion in the RWA approximation, it is not in the full model. As in the 2-level DM model, it is the parity in the number M of excitations that is conserved, as U (θ) := exp (i θ M ) is only a symmetry operator for θ = 0, π. To obtain symmetry-adapted states we therefore take linear combinations of coherent states of the same parity |α; ζ} ± := (1 ± exp[i π M ]) |α; ζ} , (41) and the energy surface for these SAS states, in any configuration, results in [46] H ± = ± {α; ζ| H |α; ζ} ± Phase Diagrams of Systems of 2 and 3 levels in the presence of a Radiation Field 25 where γ = (γ 1 , γ 2 , γ 3 ),γ = (γ 1 , (−1) λ 2 γ 2 , (−1) λ 3 γ 3 ), and γ 1 = 1. Again, one may use the polar form of these parameters to minimise with respect to each one in order to obtain the minimum energy surface for the system. In general this has to be done numerically, but the V -configuration lends itself to an analytic treatment; furthermore, all transitions in this configuration are of second order, making it a good candidate for the study of its critical exponents.
Critical Exponents
Using the polar form given in equation (33), and further defining ρ 2 = ξ cos(η), ρ 3 = ξ sin(η), the SAS energy surface (42) in the V -configuration takes the form where we have defined We minimise with respect to the parameters (ρ, ξ), and having an analytical expression for the SAS ground state allows to evaluate the relevant field and atomic operators in this state. Of particular interest is the comparative behaviour of the system in the normal and collective regimes: Figure 16 (left) shows the (normalised) atomic number of excitations in comparison with the (normalised) field excitations. This suggests that, in the normal region, as is to be expected from the atomic decay rate in a non-collective regime; as soon as the system enters a collective regime the number of field excitations increases much more rapidly than the atomic excitations. Differently to the coherent states, the SAS and quantum ground states in the normal regime contain non-zero contributions of photonic and atomic excitations. Equation (46) thus may be used, in general, as a criterion to define the normal region by using the appropriate atomic operators for the atomic excitations in each configuration. A phase transition causes a change in the structure of the ground state, which is reflected by a discontinuity in the phase parameters (see Figure 16 (right)). We use this discontinuity to find the critical value of the interaction strength µ c at the phase transition that separates the normal from the collective regimes as a function of the number of atoms, from N A = 100 to N A = 2000. Figure 17 shows a logarithmic plot for these two variables, together with the linear fit It is interesting to compare this relation with that obtained for the Dicke model using SAS states, equation (27). The critical exponent is exactly the same. An analysis of residuals shows a confidence interval of [−0.530, −0.519] for the exponent, to a confidence level of 0.95, where − 11 21 fits perfectly, with a goodness-of-fit R 2 = 0.9997.
Discussion and Conclusions
We have reviewed and expanded upon the structure of the phase diagram for systems consisting of 2-and 3-level particles dipolarly interacting with a 1-mode electromagnetic field, inside a cavity, paying particular attention to the case of a finite number N A of particles, and showing that the divergences that appear in other treatments are a consequence of the mathematical approximations employed, and can be avoided by studying the system in an exact manner quantum-mechanically or via a catastrophe formalism with variational trial states that satisfy the symmetries of the appropriate Hamiltonians.
We have shown how the use of these variational states gives an excellent approximation not only to the exact quantum phase space, but also to the energy spectrum and the expectation values of the atomic and field operators. Furthermore, they allow for analytic expressions in many of the cases studied, even for finite N_A. We have made use of the fidelity and the fidelity susceptibility of neighbouring quantum states to find the loci of the transitions in phase space from one phase to the other; having analytic expressions allows for the order of the quantum phase transitions to be determined explicitly for each of the configurations, with and without detuning. Finally, we have derived the critical exponents for the various systems. The Ξ-configuration in 3-level systems is particular in that it exhibits a triple point in phase space. This means that any quantum fluctuation at this location will drastically change the composition of the ground state. The exact form of the ground state at this triple point has been studied; the same can be done for excited states in the vicinity of this point or elsewhere in parameter space.
Finally, a criterion (equation (46)) to define the normal region for the full Hamiltonian in the different configurations was suggested, acknowledging the fact that the SAS and quantum ground states in the normal regime contain non-zero contributions of photonic and atomic excitations.
With the promise of the benefits of quantum information, the study of these systems acquires greater importance, as they constitute the basic q-dit blocks themselves as well as the possible quantum logical gates for computational purposes. The properties of the systems treated here have been intriguing, not least because of the search for a fine control of the light-matter interaction at the level of single and few atom-photon pairings. It is hoped that this manuscript conveys an accurate account of the properties and structure of the phase space of these interesting systems.
Table A1. Expectation values and fluctuations of matter and field observables for the coherent and symmetry-adapted states in the superradiant regime (columns: Coherent; Symmetry Adapted). The mean-field behaviour obtained in the normal region can be recovered by taking the limit x → 1.